The Scenario
LLMs are notoriously unreliable at math and logic when they answer immediately (e.g., the classic "A bat and ball cost R110..." puzzle). You need to write a prompt that forces the AI to slow down and reason before it answers.
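As a worked check of the puzzle itself, here is a minimal sketch assuming the standard phrasing (a bat and a ball cost R110 together, and the bat costs R100 more than the ball). The function name and defaults are illustrative, not part of the brief:

```python
# Bat-and-ball puzzle, assuming the standard phrasing:
# ball + bat = total, and bat = ball + difference.
# The intuitive (wrong) answer is R10; solving the equations gives R5.

def solve_bat_and_ball(total=110, difference=100):
    # Substituting bat = ball + difference into ball + bat = total
    # gives 2 * ball + difference = total.
    ball = (total - difference) / 2
    bat = ball + difference
    return ball, bat

ball, bat = solve_bat_and_ball()
print(ball, bat)  # 5.0 105.0
```

Note the intuitive answer (R10) fails the check: a R10 ball implies a R110 bat, for a total of R120, not R110.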
The Brief
Take a classic trick math/logic question (provide your own) that LLMs often get wrong. Write a "Chain of Thought" prompt that successfully guides the model to the correct answer.
Deliverables
- The trick logic/math question
- A baseline prompt (that usually causes the LLM to fail)
- The Chain of Thought prompt (e.g., using "Let's think step by step" or a structured reasoning template)
- An explanation of why CoT improves accuracy in Transformer models
Submission Guidance
LLMs predict the next token. If the first token of the answer is a number, the model is guessing. If the first 100 tokens are its working-out, the final number is far more likely to be correct, because each answer token is conditioned on the reasoning tokens already generated. Force the working-out.
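The contrast above can be sketched as two prompt templates. This is a minimal illustration of the baseline-vs-CoT structure, not a graded answer; the question text and template wording are hypothetical placeholders:

```python
# Sketch: a baseline prompt vs. a Chain-of-Thought prompt for the same question.
# The wording below is illustrative, not a reference solution.

QUESTION = (
    "A bat and a ball cost R110 in total. "
    "The bat costs R100 more than the ball. "
    "How much does the ball cost?"
)

def baseline_prompt(question):
    # Invites an immediate answer: the first generated token is likely a number.
    return f"{question}\nAnswer with just the number."

def cot_prompt(question):
    # Forces the working-out to come first, so the final answer token is
    # conditioned on the reasoning tokens the model has already generated.
    return (
        f"{question}\n"
        "Let's think step by step.\n"
        "1. Write the two facts as equations.\n"
        "2. Solve the equations, showing each step.\n"
        "3. Check the answer against both facts.\n"
        "Only after the check, state the final answer on its own line."
    )

print(baseline_prompt(QUESTION))
print("---")
print(cot_prompt(QUESTION))
```

The key design choice is ordering: the CoT template places the answer at the end, after mandated reasoning and a self-check, rather than allowing it as the first token.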
Submit Your Work
Your submission is graded against the rubric on the right. If you pass, you get a public Badge URL you can share on LinkedIn. There is no draft save, so draft your work offline first and paste your finished response here.