The Scenario
A news aggregator uses AI to summarise financial articles, but the AI sometimes invents numbers (hallucinates). You need to build a "Reviewer AI" prompt whose only job is to read the Source Article and the AI Summary and flag any claim in the summary that is not explicitly stated in the source.
The Brief
Design an advanced "Fact-Checking" prompt. It must break the summary down into individual claims, search the source text for evidence, and output a strict "Supported" or "Hallucinated" verdict for each claim.
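One way to structure such a prompt, as a minimal sketch: the system-prompt wording, the `Claim / Evidence / Verdict` line format, and the helper function below are illustrative assumptions about one workable design, not a reference answer.

```python
# Illustrative fact-checking system prompt. All wording is an assumption
# about one reasonable design; yours should be adapted and tested.
FACT_CHECK_SYSTEM_PROMPT = """You are a Reviewer AI. You will receive a Source Article and an AI Summary.

For EACH factual claim in the summary:
1. State the claim as "Claim N: ...".
2. Quote the exact sentence(s) from the Source Article that address it,
   as: Evidence: "...". If nothing addresses it, write: Evidence: NONE.
3. Give a verdict: "Verdict: Supported" only if the quoted evidence
   explicitly states the claim; otherwise "Verdict: Hallucinated".

Never give a verdict before quoting evidence. Do not use outside knowledge."""


def build_review_request(source: str, summary: str) -> str:
    """Assemble the user message that accompanies the system prompt."""
    return f"Source Article:\n{source}\n\nAI Summary:\n{summary}"
```

Keeping the claim, evidence, and verdict on separate labelled lines also makes the reviewer's output easy to parse programmatically.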
Deliverables
- The Fact-Checking System Prompt
- The specific Chain of Thought instructions (e.g., "Extract Claim 1 -> Quote source text -> Verdict")
- A short test case (Source Text + Flawed Summary) to demonstrate how the prompt catches a hallucination
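For the third deliverable, a deterministic stand-in can make the test case concrete. The snippet below is a sketch: the article text and the digit-matching heuristic are invented for illustration and are no substitute for the LLM reviewer, but they show the exact hallucination class the test case should exercise.

```python
import re

source = "Acme Corp reported Q3 revenue of $4.2 billion, up 8% year over year."
flawed_summary = "Acme Corp reported Q3 revenue of $5.1 billion, up 8% year over year."


def flag_unsupported_numbers(source: str, summary: str) -> list[str]:
    """Return every number in the summary that never appears in the source.

    A crude heuristic: a real reviewer prompt must check whole claims,
    but an invented figure is the failure this exercise targets.
    """
    source_numbers = set(re.findall(r"\d+(?:\.\d+)?", source))
    return [n for n in re.findall(r"\d+(?:\.\d+)?", summary)
            if n not in source_numbers]


print(flag_unsupported_numbers(source, flawed_summary))  # → ['5.1']
```

A good submission would show its Reviewer AI reaching the same conclusion: "Evidence: NONE" for the $5.1 billion claim, followed by "Verdict: Hallucinated".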
Submission Guidance
Verification is easier for LLMs than generation. By explicitly forcing the model to quote the source text *before* giving a verdict, you drastically reduce its tendency to rubber-stamp bad summaries.
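The quote-before-verdict rule can also be enforced mechanically. This sketch of a post-processor assumes the illustrative `Evidence:` / `Verdict:` line labels described above; it rejects any review in which a verdict is not immediately preceded by quoted evidence.

```python
def verdicts_have_evidence(review: str) -> bool:
    """Accept a review only if every Verdict line follows an Evidence line,
    enforcing quote-before-verdict at parse time rather than trusting it."""
    last_was_evidence = False
    for line in review.splitlines():
        line = line.strip()
        if line.startswith("Verdict:"):
            if not last_was_evidence:
                return False
            last_was_evidence = False
        elif line.startswith("Evidence:"):
            last_was_evidence = True
    return True


good = 'Claim 1: X\nEvidence: "..."\nVerdict: Supported'
bad = "Claim 1: X\nVerdict: Supported"
print(verdicts_have_evidence(good), verdicts_have_evidence(bad))  # → True False
```

In a pipeline, a review that fails this check can simply be retried, so the model never gets to rubber-stamp a summary without first committing to a quote.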
Submit Your Work
Your submission is graded against the rubric on the right. If you pass, you get a public Badge URL you can share on LinkedIn. There is no draft save, so work offline first and paste your finished response here.