Prompt Engineering Basics
Master few-shot prompting, constraint setting, and clarity. Tests the ability to get predictable outputs from LLMs.
System Prompts & Persona Design
Design robust system instructions that govern how an AI behaves across a whole conversation. Tests tone control and boundary setting.
Output Formatting & Extraction
Force an LLM to return strict JSON, CSV, or specific Markdown structures. Tests data extraction and parsing reliability.
Chain of Thought & Reasoning
Design prompts that force the AI to "think step-by-step" before answering. Tests logic, math prompting, and hallucination reduction.
Context Window Management
Summarise and format massive documents to fit within token limits without losing critical information. Tests token efficiency.
RAG (Retrieval-Augmented Generation) Prep
Clean and chunk raw text so it can be effectively embedded and retrieved by a vector database. Tests data structuring for AI.
AI Evaluation & Benchmarking
Create datasets to test if an AI prompt actually works. Tests "LLM-as-a-judge" concepts, test-case creation, and scoring rubrics.
Agent Flow Design
Map out how multiple AI agents (e.g., a Researcher, a Writer, and a Reviewer) pass data to each other. Tests systems thinking.
Guardrails & Safety Prompting
Write prompts that actively prevent prompt injection, jailbreaks, and off-topic responses. Tests adversarial thinking and AI safety.
AI Workflow Integration
Design the logic for connecting LLMs to tools like Zapier, Make, or internal APIs. Tests function calling logic and automation.