AI & Prompting · Advanced · 3 to 5 hours

Build a Hallucination Checker

Create a prompt that reviews another AI's output and flags invented facts.

The Scenario

A news aggregator is using AI to summarise financial articles. Sometimes the AI invents numbers (hallucinates). You need to build a "Reviewer AI" prompt whose only job is to read the Source Article and the AI Summary, and flag any claims in the summary that are not explicitly stated in the source.

The Brief

Design an advanced "Fact-Checking" prompt. It must break down the summary into individual claims, search for evidence in the source text, and output a strict "Supported" or "Hallucinated" verdict for each claim.
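One way to structure such a reviewer is to fix the decomposition steps in the system prompt and keep the source/summary pairing in the user message. The sketch below is a possible starting point, not a reference solution; `call_llm` would be whatever model API you use, so only the prompt assembly is shown here.

```python
# A minimal fact-checking reviewer skeleton. The prompt wording is an
# illustrative assumption; tune it against your own test cases.

FACT_CHECK_SYSTEM_PROMPT = """\
You are a strict fact-checking reviewer. You will receive a SOURCE
article and a SUMMARY produced by another AI.

For each factual claim in the SUMMARY:
1. State the claim as a single sentence.
2. Quote the exact passage of the SOURCE that supports it.
   If no passage exists, write: NO SUPPORTING QUOTE FOUND.
3. Give a verdict: Supported or Hallucinated.

Never give a verdict before completing the quote step. Numbers,
dates, and names must match the SOURCE exactly to count as Supported.
"""

def build_review_prompt(source: str, summary: str) -> str:
    """Assemble the user message the reviewer model will see."""
    return f"SOURCE:\n{source}\n\nSUMMARY:\n{summary}"
```

Keeping the source and summary clearly labelled in one message avoids the reviewer confusing which text is authoritative.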

Deliverables

  • The Fact-Checking System Prompt
  • The specific Chain of Thought instructions (e.g., "Extract Claim 1 -> Quote source text -> Verdict")
  • A short test case (Source Text + Flawed Summary) to demonstrate how the prompt catches a hallucination

Submission Guidance

Verification is easier for LLMs than generation. By explicitly forcing the model to quote the source text *before* giving a verdict, you drastically reduce its tendency to rubber-stamp bad summaries.
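The quote-first requirement also enables a cheap mechanical check: if the reviewer's quoted evidence does not appear verbatim in the source, you can treat the claim as hallucinated regardless of the model's stated verdict. A minimal sketch, assuming the reviewer's output has been parsed into claim/quote pairs (that structure is an assumed format, not a fixed API):

```python
def verify_quote(source: str, quote: str) -> bool:
    """True only if the quoted evidence appears verbatim in the source."""
    quote = quote.strip()
    return bool(quote) and quote in source

# Hypothetical test case: the summary invents a profit-margin figure.
source = "Acme Corp reported revenue of $12m in Q3, up 8% year on year."
claims = [
    {"claim": "Acme's Q3 revenue was $12m",
     "quote": "revenue of $12m in Q3"},
    {"claim": "Acme's profit margin rose to 15%",
     "quote": ""},  # reviewer found no supporting passage
]

for c in claims:
    verdict = "Supported" if verify_quote(source, c["quote"]) else "Hallucinated"
    print(f'{c["claim"]} -> {verdict}')
```

A verbatim substring check is deliberately strict: it will reject paraphrased quotes, which is exactly the behaviour you want when auditing numbers.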

Submit Your Work

Your submission is graded against the rubric. If you pass, you receive a public Badge URL you can share on LinkedIn. There is no draft save, so draft your work offline first and paste your finished response here.



