Rubber-Ducking With AI — Talking Through Bugs Out Loud
The classic debugging trick of explaining the bug to a rubber duck works even better with AI, if you do it right. Learn a structured talk-it-out method that often solves bugs faster than diving straight into fixes.
Lesson map
The main moves in order:
1. The Duck That Talks Back
2. Rubber duck debugging
3. Structured explanation
4. Hypothesis
Section 1
The Duck That Talks Back
Rubber-duck debugging is the classic trick of explaining your bug, line by line, to an inanimate duck on your desk. The act of forced explanation surfaces the bug before the duck has to say anything. AI is a duck that asks follow-up questions. Used correctly, it is one of the highest-leverage debugging tools you have.
Why explanation beats fixing
- Most bugs come from a wrong assumption you didn't know you were making
- Explaining forces you to articulate the assumption — at which point you often see it's wrong
- The model can spot inconsistencies between your stated mental model and the code you pasted
- Explaining costs nothing; fixing prematurely costs the time of every wrong fix
The five-part structured ramble
Five sections force you to organize what you know vs. what you assume. The seams are where bugs live.
# Paste this template into Claude / GPT / Cursor chat:
"I want to rubber-duck a bug. Don't fix anything yet.
I'll explain in five parts. Just listen, then ask the most
useful follow-up question.
1. WHAT I'M BUILDING:
<one paragraph>
2. WHAT I EXPECTED TO HAPPEN:
<specific behavior>
3. WHAT ACTUALLY HAPPENED:
<error / wrong output / silence>
4. WHAT I'VE ALREADY TRIED:
<list>
5. WHAT I CURRENTLY THINK IS WRONG:
<your hypothesis, even if shaky>"
What good follow-up questions look like
- "You said you expected X. How did you verify X is what your spec actually demands?"
- "You tried Y. What output did Y produce? Was it different from the original failure?"
- "Your hypothesis assumes Z. What would prove Z true or false?"
- "What does the code do between step 2 and step 3? You skipped that."
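If you rubber-duck often, the five-part template is worth scripting so you never skip a section. A minimal sketch in Python; the helper name and structure are my own invention, not part of any particular tool:

```python
# Hypothetical helper: assembles the five-part ramble into one prompt
# you can paste into Claude / GPT / Cursor chat (or send via an API).

SECTIONS = [
    "WHAT I'M BUILDING",
    "WHAT I EXPECTED TO HAPPEN",
    "WHAT ACTUALLY HAPPENED",
    "WHAT I'VE ALREADY TRIED",
    "WHAT I CURRENTLY THINK IS WRONG",
]

def build_ramble(parts: dict) -> str:
    """Build the five-part rubber-duck prompt; refuse empty sections."""
    missing = [s for s in SECTIONS if not parts.get(s, "").strip()]
    if missing:
        # An empty section usually hides an unexamined assumption --
        # exactly the thing the ritual is meant to surface.
        raise ValueError(f"Fill in before asking the duck: {missing}")
    body = "\n\n".join(
        f"{i}. {name}:\n{parts[name].strip()}"
        for i, name in enumerate(SECTIONS, start=1)
    )
    return (
        "I want to rubber-duck a bug. Don't fix anything yet.\n"
        "I'll explain in five parts. Just listen, then ask the most\n"
        "useful follow-up question.\n\n" + body
    )
```

The refusal on empty sections is the point of the sketch: if you cannot fill in "what I've already tried," you are not ready to ramble yet.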
The two questions that solve 80% of bugs
After you finish your ramble, ask the AI two questions in sequence. First: "What assumption am I making that the code does not back up?" Second: "What's the simplest experiment that would prove or disprove that assumption?" These two questions, in order, solve the majority of single-developer bugs.
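The "simplest experiment" is usually a few lines of code. For instance, if your hypothesis is "this join is duplicating users," the disproving experiment is one query comparing total rows to distinct rows. A sketch using an in-memory SQLite database; the schema is invented for illustration:

```python
import sqlite3

# Invented schema: users tagged via a many-to-many table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_tags (user_id INTEGER, tag TEXT);
    INSERT INTO users VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO user_tags VALUES (1, 'admin'), (1, 'beta'), (2, 'beta');
""")

# The experiment: does the join return one row per user?
total = con.execute(
    "SELECT COUNT(*) FROM users u JOIN user_tags t ON t.user_id = u.id"
).fetchone()[0]
distinct = con.execute(
    "SELECT COUNT(DISTINCT u.id) FROM users u "
    "JOIN user_tags t ON t.user_id = u.id"
).fetchone()[0]

print(total, distinct)  # 3 2 -> the join fans out; assumption disproved
```

If `total` and `distinct` disagree, the join fans out and the "unique users" assumption is dead, no debugging session required.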
The bug fixed itself the moment I had to articulate the assumption.
# Real session excerpt
Me: <five-part ramble about a Postgres query returning duplicates>
AI: "You said you expected unique users. Your query has no DISTINCT.
What in the schema guarantees uniqueness at this join?"
Me: "...nothing. The join is many-to-many."
AI: "There's your bug. Add DISTINCT or change the join."
Total time: 90 seconds.
Code I wrote: zero.
Lines I read out loud: ~6.
When AI rubber-ducking fails
- The bug requires runtime data the AI cannot see (DB state, env vars, network)
- The bug is in code you cannot legally paste (sensitive auth logic)
- You're rambling without including code — duck out, paste the code, ramble again
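The first failure mode, runtime state the AI cannot see, is often fixable: capture that state yourself and paste it into the ramble. A sketch of a hypothetical context-dump helper (the prefix and redaction rules are illustrative; never paste credentials into a chat window):

```python
import os
import platform
import sys

# Hypothetical helper: snapshot the runtime facts an AI duck cannot see.
# Extend with whatever your bug depends on (DB row counts, config, etc.).

REDACT = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def runtime_context(env_prefix: str = "APP_") -> str:
    lines = [
        f"python: {sys.version.split()[0]}",
        f"platform: {platform.platform()}",
    ]
    for name, value in sorted(os.environ.items()):
        if not name.startswith(env_prefix):
            continue
        # Redact anything that looks like a credential before pasting.
        if any(tok in name.upper() for tok in REDACT):
            value = "<redacted>"
        lines.append(f"env {name}={value}")
    return "\n".join(lines)

print(runtime_context())
```

Paste the output into part 3 of the ramble ("what actually happened") so the duck is reasoning about your environment, not a guessed one.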
“If you can explain the bug clearly, you've already mostly fixed it.”
Key terms in this lesson: rubber duck debugging, structured explanation, hypothesis.
The big idea: most bugs hide inside an unstated assumption. Forced structured explanation surfaces those assumptions, with or without an AI to listen. Add the AI back in and you get a forcing function plus a critic — a faster path to the wrong assumption that started everything.