Lesson 710 of 1570
Chain-of-Thought for Builders: Make AI Show Its Reasoning
Force AI to explain its reasoning out loud, and you'll catch its mistakes faster.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Prompting AI: the step-back trick
3. The big idea
4. AI and Thinking Out Loud: Ask AI to Show Its Reasoning
Concept cluster
Terms to connect while reading
Section 1
The big idea
When AI just blurts an answer, you can't tell if it's right. But if you say 'show your work,' it walks through each step — and that's where you catch the wrong turns. This works for math, logic, code, even arguments.
Some examples
- 'Solve this math problem. Show every step.'
- 'Explain your reasoning before giving the final answer.'
- 'List your assumptions, then your conclusion.'
- 'Walk me through how you'd debug this code, line by line.'
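If you build prompts in code rather than typing them by hand, the same instruction can be appended programmatically. A minimal sketch; the helper name and exact wording are mine, not from the lesson:

```python
def show_your_work(question: str) -> str:
    """Wrap any question so the model must expose each reasoning step."""
    return (
        f"{question}\n\n"
        "Show every step of your reasoning. "
        "List your assumptions first, then give the final answer last."
    )

# Build a prompt for a simple word problem
prompt = show_your_work(
    "A shirt costs $20 after a 20% discount. What was the original price?"
)
print(prompt)
```

The question goes first and the instruction last, so the model reads the task before the constraint on how to answer it.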
Try it!
Give AI a tricky word problem and just ask for the answer. Then re-ask with 'show every step.' Look for any step that smells off.
Key terms in this lesson
Section 2
Prompting AI: the step-back trick
Section 3
The big idea
Before AI answers a hard question, tell it to first 'step back and consider what general principles apply.' This gets it thinking before talking and almost always improves the answer.
Some examples
- Tell AI 'first list relevant physics principles, then solve'
- Tell AI 'first list common essay structures, then outline'
- Tell AI 'first list types of bias, then critique my idea'
- Tell AI 'first state the goal, then suggest steps'
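The step-back pattern is easy to template because it always has the same two phases: name the principles, then apply them. A sketch under my own naming and phrasing:

```python
def step_back(question: str, domain: str) -> str:
    """Two-phase prompt: surface general principles before answering."""
    return (
        f"Before answering, step back and list the general {domain} "
        "principles that apply. Then use those principles to answer.\n\n"
        f"Question: {question}"
    )

print(step_back("Why does a helium balloon rise?", "physics"))
```

Swapping the `domain` argument ("physics", "essay structure", "cognitive bias") reproduces each of the examples above from one template.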
Try it!
Take a homework question. Ask AI normally and save the answer. Then ask again with 'first step back and identify the principles, then solve.' Compare quality.
Section 4
AI and Thinking Out Loud: Ask AI to Show Its Reasoning
Section 5
The big idea
Thinking out loud means asking AI to walk through its reasoning before answering. On math, logic, and tricky planning, this dramatically reduces dumb mistakes.
Some examples
- 'Think step by step before answering.'
- 'Show your reasoning, then give the final answer last.'
- 'List your assumptions before you start.'
- 'Solve it twice using two different methods. Compare.'
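For the "Try it!" comparison below, it helps to generate both versions of the prompt from the same problem so the only difference is the think-out-loud instruction. A small sketch (names and wording are mine):

```python
def variants(problem: str) -> dict:
    """Return a plain prompt and a think-out-loud variant for A/B comparison."""
    return {
        "plain": problem,
        "cot": (
            f"{problem}\n\n"
            "Think step by step. Show your reasoning, "
            "then give the final answer last."
        ),
    }

for label, text in variants("If 3 pens cost $4.50, what do 7 pens cost?").items():
    print(f"--- {label} ---\n{text}\n")
```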
Try it!
Give AI a math or logic problem twice — once normally, once with 'think step by step.' Compare accuracy.
Section 6
Adding 'Think Step by Step' — Still Worth It in 2026?
Section 7
The big idea
In 2023, adding 'let's think step by step' was a magic trick that boosted accuracy. In 2026, reasoning models like OpenAI o1, Claude with extended thinking, and Gemini Thinking already do this *invisibly*. Adding the phrase wastes tokens and sometimes hurts. Know which model you're using.
Some examples
- On GPT-4o or Claude Sonnet without thinking on: yes, 'step by step' still helps for math word problems.
- On o1, o1-mini, or Claude with thinking on: skip it — the model already does it.
- On Gemini 2.5 Thinking: skip it — same reason.
- For creative writing on any model: skip it — slows it down for no benefit.
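The rules above amount to a simple branch: skip the phrase for reasoning models and for creative tasks, add it otherwise. A sketch; the model names in the set are illustrative assumptions, and you should keep your own list current rather than trust these:

```python
# Assumption: illustrative names only; check which of your models reason by default.
REASONING_MODELS = {"o1", "o1-mini", "gemini-2.5-thinking"}

def build_prompt(question: str, model: str, creative: bool = False) -> str:
    """Append 'think step by step' only where it still pays off."""
    if creative or model in REASONING_MODELS:
        # Reasoning models already deliberate; creative tasks don't benefit
        return question
    return question + "\n\nThink step by step before answering."

print(build_prompt("What is 17% of 340?", "gpt-4o"))
print(build_prompt("What is 17% of 340?", "o1"))
```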
Try it!
Run the same math word problem on a reasoning model with and without 'think step by step'. Compare the answer and time.
Section 8
Telling Claude to Think Step-by-Step Out Loud
Section 9
The big idea
Chain-of-thought (CoT) prompting asks the AI to reason out loud before answering. It buys the model more compute per token and gives you a visible chain you can audit. Modern reasoning models (o1, Claude with extended thinking) do this automatically — but the prompt still helps with cheaper models.
Some examples
- Math word problems can get 30%+ more accurate when you ask for steps first.
- 'Explain your reasoning, then give the final answer' on a logic puzzle — fewer wrong answers.
- On code review, 'list every issue you see, then rank them' beats 'what's wrong with this code?'
- For a debate prompt: 'argue both sides first, then pick' produces less biased takes.
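The code-review example above is worth templating, since "list, then rank" is a reusable structure. A sketch with my own helper name and phrasing:

```python
def review_prompt(code: str) -> str:
    """Structured code-review ask: enumerate issues first, then rank them."""
    return (
        "Review the code below. First list every issue you see, "
        "one per line. Then rank the issues by severity and explain "
        "the most serious one.\n\n"
        f"```\n{code}\n```"
    )

print(review_prompt("def avg(xs): return sum(xs) / len(xs)"))
```

Forcing the enumeration before the ranking is the same trick as chain-of-thought: the model commits to the evidence before the verdict.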
Try it!
Take a tricky question. Ask for a one-line answer. Then ask the same question with 'think step by step.' Compare quality.
Section 10
Asking AI to 'Think Step by Step' (and When It Actually Helps)
Section 11
The big idea
'Think step by step' boosts accuracy on math, multi-step logic, and code tracing. It hurts on creative writing and simple lookups by padding answers with filler. Match the technique to the task.
Some examples
- 'Think step by step' boosts ChatGPT's accuracy on word problems by discouraging hallucinated shortcuts.
- Claude with CoT walks through algorithm complexity correctly where it would otherwise guess O(n).
- 'Show your work' on a unit conversion catches the dropped factor of 10.
- CoT on 'write a haiku' just produces a worse haiku — leave it off for creative tasks.
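"Match the technique to the task" can be automated with a crude keyword check. This heuristic is my own illustration, not a robust classifier, and the hint list is an assumption you would tune for your workload:

```python
# Assumption: illustrative keywords suggesting the task needs derivation
REASONING_HINTS = ("solve", "prove", "debug", "trace", "convert", "calculate")

def wants_cot(task: str) -> bool:
    """Crude heuristic: only reasoning-flavored tasks get the CoT suffix."""
    lowered = task.lower()
    return any(hint in lowered for hint in REASONING_HINTS)

print(wants_cot("Solve this equation"))
print(wants_cot("Write a haiku about rain"))
```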
Try it!
Take a math or logic prompt and run it with and without 'think step by step'. Note where it actually helped.
Section 12
When Step-by-Step Actually Helps and When It Does Not
Section 13
The big idea
Reasoning prompts help when the answer needs derivation; when the answer is a simple lookup, they only add filler.
Some examples
- Math word problems with multi-step logic
- Skipping it for define-this-word style asks
- Adding 'let's think step by step' for code planning
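The three bullets above reduce to a single yes/no decision per task. A tiny sketch of that decision, with illustrative task labels of my own:

```python
# Assumption: example task labels, mirroring the bullets above
DECISIONS = {
    "math word problem": True,
    "define this word": False,
    "plan this code change": True,
}

def add_cot(prompt: str, needs_derivation: bool) -> str:
    """Append the CoT suffix only when the task needs derivation."""
    if needs_derivation:
        return prompt + "\n\nLet's think step by step."
    return prompt

for task, needs in DECISIONS.items():
    print(add_cot(task, needs))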
Try it!
Open your favorite AI tool and try one of the examples above. Pick the one that matches what you are actually working on this week. Spend 10 minutes, no more. Notice what worked and what did not — that's the real lesson.
Section 14
Chain of thought: tell AI to think step by step
Section 15
The big idea
Chain-of-thought prompting tells the model to show its reasoning before answering — accuracy goes way up.
Some examples
- Add 'Let's solve this step by step' to math problems.
- Ask 'before you answer, list what you know and what you need.'
- Read the reasoning and catch the wrong step yourself.
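The second example above ("list what you know and what you need") also templates cleanly. A sketch with hypothetical naming:

```python
def knowns_and_needs(problem: str) -> str:
    """Ask the model to inventory givens and unknowns before solving."""
    return (
        "Before you answer, list what you know and what you still need. "
        "Then solve step by step.\n\n"
        f"Problem: {problem}"
    )

print(knowns_and_needs("Two trains leave stations 300 miles apart..."))
```

The inventory step is what makes the third bullet possible: a visible list of givens is much easier to fact-check than a buried assumption.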
Try it!
Take a tricky word problem from class. Ask the model normally. Then ask with 'step by step.' Compare.
Understanding "Chain of thought: tell AI to think step by step" in practice: prompting is a skill, and the more specific and structured your input, the more useful the output. Adding 'think step by step' makes AI noticeably better at math, logic, and multi-step problems, and knowing when to apply it gives you a concrete advantage.
- Use role, context, task, and format in every prompt
- Iterate: treat first outputs as drafts, not finals
- Use few-shot examples for complex formatting tasks
- Test prompts at different temperatures for creative vs. factual tasks
1. Rewrite one of your best prompts using role + context + task + format
2. Ask an AI to critique your prompt and suggest improvements
3. Compare outputs from two models using the same prompt
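The role + context + task + format structure from the checklist above can be assembled mechanically. A minimal sketch (the function name and field wording are mine):

```python
def rctf(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble a role + context + task + format prompt."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

print(rctf(
    "a patient algebra tutor",
    "an 8th grader struggling with slope",
    "explain slope with one worked example",
    "3 short bullet points",
))
```

Keeping the four fields separate makes iteration easy: change one field, rerun, and compare, which is exactly the draft-then-refine loop the checklist recommends.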
End-of-lesson quiz
Check what stuck
15 questions · Score saves to your progress.
Related lessons
Keep going
Explorers · 40 min
Ask AI to Think Step by Step
When you want AI to do something tricky, ask it to think step by step. The answer comes out smarter.
Builders · 24 min
Chain-of-Thought: Make the AI Show Its Work
Telling the AI to 'think step by step' before answering dramatically improves its accuracy on reasoning problems. Here's why and when.
Creators · 40 min
Few-Shot Example Curation: Quality, Rotation, and Counter-Examples, Part 1
Chain-of-thought prompts show real performance gains on reasoning tasks — and zero benefit on tasks that don't need reasoning. Here's how to tell which is which.
