The TodoWrite Tool: When It Actually Helps
TodoWrite gives Claude Code an explicit task list it maintains as it works. It's a tool for long, branching work — and pure noise on simple tasks.
Lesson map
The main moves in order:
1. What TodoWrite does
2. Task list
3. Progress tracking
4. Long-running work
Section 1
What TodoWrite does
TodoWrite is a tool the agent can call to maintain an explicit task list during a session. It writes the list, marks items in progress, marks them complete, and re-reads it as it goes. The user sees the list update in real time, which gives the session a structure that a normal back-and-forth chat doesn't have.
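To make the write / mark / re-read loop concrete, here is a minimal sketch of that kind of task list in Python. The class and field names (`TodoItem`, `status`, the three status strings) are assumptions for illustration, not TodoWrite's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical status values; the real tool's states may differ.
PENDING, IN_PROGRESS, COMPLETED = "pending", "in_progress", "completed"

@dataclass
class TodoItem:
    content: str
    status: str = PENDING

@dataclass
class TodoList:
    items: list = field(default_factory=list)

    def write(self, contents):
        """Replace the whole list, as the agent does when it (re)plans."""
        self.items = [TodoItem(c) for c in contents]

    def mark(self, content, status):
        """Move one item to a new status (in progress, completed, ...)."""
        for item in self.items:
            if item.content == content:
                item.status = status

    def remaining(self):
        """Cheap 'what's left' check, instead of re-reading the whole trace."""
        return [i.content for i in self.items if i.status != COMPLETED]
```

A session might call `write(["read failing test", "fix parser", "rerun suite"])`, then `mark("read failing test", COMPLETED)`; `remaining()` then reports the two open items. The point of the sketch: the list itself is the session state, which is what makes hand-offs and resumption possible.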
When it helps
1. Multi-step work where the agent is genuinely tracking dependencies
2. Tasks complex enough that you want to see what the agent has and hasn't done
3. Long sessions where re-reading 'what's left' would otherwise burn context
4. Hand-offs — you can stop and resume because the list is the state
5. Branching work where one decision opens or closes other items
When it's overhead
- Single-step tasks ('fix this typo') — the list is performance theater
- Tasks short enough that the trace itself is enough
- Conversational debugging where structure isn't the problem
- Anywhere the list ends up out of date faster than the agent updates it
Apply: a quality test
1. For your next 5 sessions, note when the agent uses TodoWrite
2. Mark each as 'helped me track,' 'pure ceremony,' or 'genuinely structured the work'
3. Tune your prompts to ask for it on real work and discourage it on simple tasks
4. After a week, you'll know your own threshold
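The tally from the exercise above can be sketched as a few lines of Python. The `observations` data is made up for illustration; you'd substitute your own notes, one label per TodoWrite use:

```python
from collections import Counter

# Hypothetical week of notes: one label per observed TodoWrite use.
observations = [
    "helped me track",
    "pure ceremony",
    "genuinely structured the work",
    "pure ceremony",
    "helped me track",
]

tally = Counter(observations)
useful = tally["helped me track"] + tally["genuinely structured the work"]
total = sum(tally.values())
print(f"useful: {useful}/{total}")  # prints "useful: 3/5" for the sample data
```

The useful-to-total ratio is your personal threshold: if it stays low for a given class of task, that's the class where you discourage TodoWrite in your prompts.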
The big idea: TodoWrite is for genuinely multi-step work. Skip it on small tasks; use it as a debugging surface on big ones.
