Some problems need more than one prompt. Learn how to design multi-turn reasoning flows (reflection, critique, retry) that produce an AI that actually solves hard problems.
A single prompt forces the AI to generate an answer in one go. For hard problems — proofs, long code, research — even the best model gets some parts wrong on the first pass. Multi-turn flows let the model reflect, critique itself, and retry.
TURN 1 (generate):
'Write a 200-word argument for why schools should start later. Make it persuasive.'
--> draft v1
TURN 2 (critique):
'You are a skeptical peer reviewer. List the three weakest points in the argument above. Be specific.'
--> critique list
TURN 3 (revise):
'Using the critique, revise the argument. Address each of the three weaknesses. Keep it under 200 words.'
--> draft v2

A classic three-turn improvement cycle. Each turn plays a different role: author, critic, editor. Evaluations of models like Claude and GPT consistently find that a two- or three-pass structure outperforms a single pass, especially on reasoning-heavy tasks. The model is effectively its own reviewer.
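The cycle above can be sketched as a small driver loop. This is a minimal sketch, not a specific vendor API: `model` is an assumed stand-in for any chat-completion call that takes the conversation history and returns the next reply.

```python
from typing import Callable

# A "model" here is any callable that maps chat history -> reply text.
Model = Callable[[list[dict]], str]

def run_turn(model: Model, history: list[dict], prompt: str) -> str:
    """Append the prompt, get the model's reply, and record both."""
    history.append({"role": "user", "content": prompt})
    reply = model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

def improve(model: Model, task: str, rounds: int = 1) -> str:
    """Generate -> critique -> revise, optionally repeated."""
    history: list[dict] = []
    draft = run_turn(model, history, task)  # TURN 1: author
    for _ in range(rounds):
        run_turn(                            # TURN 2: critic
            model, history,
            "You are a skeptical peer reviewer. List the three weakest "
            "points in the argument above. Be specific.",
        )
        draft = run_turn(                    # TURN 3: editor
            model, history,
            "Using the critique, revise the argument. Address each of "
            "the three weaknesses. Keep it under 200 words.",
        )
    return draft
```

Because each turn carries the full history, the critic sees the draft and the editor sees both the draft and the critique; the roles are separated by prompt, not by separate conversations.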
ReAct interleaves reasoning steps with tool calls. Instead of one big plan, the model thinks a little, uses a tool (search, calculator, code execution), reads the result, thinks again, and so on. This is the foundation of modern AI agents.
THOUGHT: I need to find the CEO of Notion and their hiring trends.
ACTION: search("Notion CEO")
OBSERVATION: Ivan Zhao is CEO of Notion Labs.
THOUGHT: Now I need recent hiring data for Notion.
ACTION: search("Notion hiring 2026")
OBSERVATION: Notion announced 200 new engineering roles in Q1 2026.
THOUGHT: I have enough to answer.
FINAL: Notion, led by CEO Ivan Zhao, announced 200 new engineering roles in Q1 2026.

A ReAct loop: Thought / Action / Observation repeats until Final.

Multi-turn flows accumulate tokens quickly. Strategies: summarize older turns into a running memo; store long data in a scratchpad tool; use XML tags to section the conversation. A 1M-context Claude can hold a lot, but cost and latency still grow.
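The Thought / Action / Observation loop above can be driven by a short controller. This is a sketch under stated assumptions: `model(history)` is a hypothetical callable returning the next Thought/Action or Final text, and `tools` is a plain dict mapping tool names like `search` to Python functions.

```python
import re
from typing import Callable

def react(model: Callable[[list[str]], str],
          tools: dict[str, Callable[[str], str]],
          question: str,
          max_steps: int = 8) -> str:
    """Run a ReAct loop until the model emits FINAL or the step budget runs out."""
    history = [question]
    for _ in range(max_steps):
        step = model(history)
        history.append(step)
        # Stop when the model emits its final answer.
        final = re.search(r"FINAL:\s*(.*)", step, re.S)
        if final:
            return final.group(1).strip()
        # Otherwise parse the requested tool call, e.g. ACTION: search("Notion CEO"),
        # run it, and feed the result back as an observation.
        action = re.search(r'ACTION:\s*(\w+)\("([^"]*)"\)', step)
        if action:
            name, arg = action.groups()
            history.append(f"OBSERVATION: {tools[name](arg)}")
    raise RuntimeError("no final answer within step budget")
```

The key design choice is that the loop, not the model, executes tools: the model only proposes actions as text, which keeps tool use auditable and lets you cap cost with `max_steps`.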
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-prompting-multi-turn-creators