Asking AI to Critique Its Own Output Before Returning It
A second pass where Claude grades its first draft catches many of the bugs before they ever reach you.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Self-critique
3. Reflection
4. The loop
Section 1
The big idea
LLMs are often better critics than authors. Run the output through one more pass with a prompt like "find three problems with this answer" and you get cleaner results without calling a second model.
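The three-pass shape (author, critic, reviser) can be sketched in a few lines. This is a minimal illustration, not a real integration: `call_model` is a hypothetical stand-in for whatever LLM API you actually use, stubbed here with canned replies so the example runs on its own.

```python
# A minimal sketch of a single self-critique pass.
# `call_model` is a hypothetical stub; a real version would call an LLM API.

def call_model(prompt: str) -> str:
    # Stub: returns canned text depending on which pass is asking.
    if "find three problems" in prompt:
        return "1. Missing null check. 2. Vague names. 3. No docstring."
    if "Rewrite the draft" in prompt:
        return "revised draft"
    return "first draft"

def with_self_critique(task: str) -> str:
    draft = call_model(task)                        # pass 1: author
    critique = call_model(                          # pass 2: critic
        f"Here is a draft answer:\n{draft}\n\n"
        "find three problems with this answer"
    )
    return call_model(                              # pass 3: reviser
        f"Draft:\n{draft}\n\nCritique:\n{critique}\n\n"
        "Rewrite the draft to fix every problem listed."
    )

print(with_self_critique("Write a function that parses a date string."))
```

The point of the structure is that the critic prompt sees only the draft, not the original reasoning, which is what makes the second pass catch things the first one glossed over.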
Some examples
- Claude generates code, then reviews it for null checks — and catches the missing one.
- ChatGPT writes a summary, then re-reads it for tone mismatches and revises.
- Cursor's agent writes a function, then runs the tests itself and patches what fails.
- An agent drafts an email, critiques it for clarity, and shortens it before sending.
Try it!
Take any agent output, feed it back with "list three flaws, then rewrite to fix them," and compare the result to the original.
Related lessons
Keep going
Creators · 11 min
AI Agent Self-Reflection: Critique Loops That Actually Improve Output
When and how reflection loops genuinely improve AI agent performance.
Builders · 40 min
AI Agent: Plan Prom Without the Stress, Part 2
An AI agent that handles outfit, group, dinner, and afterparty in one go.
Builders · 7 min
What Makes an AI 'Agent' Different From a Chatbot
An AI agent like Claude Code or Manus runs steps on its own — a chatbot just talks back.
