When Agent Loops Go Wrong — Detecting and Breaking Them
Coding agents can spiral: same edit, same test, same failure, forever. Learn to spot agent loops early, the patterns that cause them, and the interventions that actually break the cycle.
Lesson map
Learning path (the main moves in order): The Agent That Could Not Stop
Concept cluster (terms to connect while reading): agent loop, convergence, tool loop
Section 1
The Agent That Could Not Stop
You ask Claude Code to fix a failing test. It edits a file. The test still fails. It edits the file again, almost the same way. The test still fails. Forty minutes later, your token bill is ugly, the test is still red, and the file is in worse shape than when you started. You've hit an agent loop.
The four most common loops
Compare the options
| Loop type | Symptom | Trigger |
|---|---|---|
| Edit-and-retry | Same file edited 5+ times in a row | Test failure the model misreads as a code problem when it's an env problem |
| Tool ping-pong | Same two tools called in alternation | No success criterion — the model has nothing to stop on |
| Search spiral | Endless `Grep` then `Read`, never editing | Vague task that never resolves into action |
| Apology loop | "You're right, let me try again" with the same code | Model trusts your last message more than the file state |
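If your tool gives you a transcript or hook log of tool calls, the first three loop types can be flagged mechanically. Here is a minimal sketch in Python, assuming a hypothetical log of (tool, target) tuples; the tool names and the five-edit threshold mirror the table above and are illustrative rather than tied to any specific agent's API.

```python
from collections import Counter

def detect_loop(calls, window=10):
    """Flag the loop shapes from the table in a list of (tool, target) tuples.

    `calls` is assumed to come from your own transcript or hook log; tool
    names like "Edit", "Grep", and "Read" are illustrative placeholders.
    """
    recent = calls[-window:]
    if len(recent) < window:
        return None
    tools = [tool for tool, _ in recent]

    # Edit-and-retry: the same file edited five or more times.
    edits = Counter(target for tool, target in recent if tool == "Edit")
    if any(count >= 5 for count in edits.values()):
        return "edit-and-retry"

    # Tool ping-pong: exactly two tools, strictly alternating.
    if len(set(tools)) == 2 and all(a != b for a, b in zip(tools, tools[1:])):
        return "tool ping-pong"

    # Search spiral: nothing but searching and reading, never an edit.
    if all(tool in ("Grep", "Read") for tool in tools):
        return "search spiral"

    return None
```

The apology loop is the exception: it never shows up in tool calls, only in the assistant's messages, so it is the one you have to catch by reading.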
Why loops happen mechanically
- The agent's context fills with stale failures it now considers history, not the current truth
- Long context degrades: information from 50k tokens ago gets blurry
- The model lacks a hard success signal — no test passing, no command exiting 0
- Each turn the model assumes it must act, even when the right action is to stop and ask
Early-warning signs (in the first three turns)
1. The agent edits a file then immediately re-reads what it just wrote
2. It runs the same shell command twice with no change between them
3. It says "let me try a different approach" without diagnosing why the first failed
4. It fixates on a symptom ("the test fails") instead of the cause ("the env var is missing")
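The first two signs are also checkable from a tool-call log before the loop settles in. A small sketch over the opening calls, again assuming a hypothetical list of (tool, target) tuples with illustrative tool names; signs 3 and 4 live in the assistant's prose, so those you still have to read.

```python
def early_warnings(calls):
    """Scan consecutive tool calls for the first two warning signs above.

    `calls` is a list of (tool, target) tuples from your own log, where
    target is a file path or shell command; tool names are placeholders.
    """
    warnings = []
    for (tool_a, target_a), (tool_b, target_b) in zip(calls, calls[1:]):
        # Sign 1: edit a file, then immediately re-read that same file.
        if tool_a in ("Edit", "Write") and tool_b == "Read" and target_a == target_b:
            warnings.append(f"re-read right after editing {target_a}")
        # Sign 2: the same shell command run twice with nothing in between.
        if tool_a == tool_b == "Bash" and target_a == target_b:
            warnings.append(f"repeated command with no change: {target_a}")
    return warnings
```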
Three interventions that actually work
Each intervention forces the model out of action mode and into reasoning mode.
INTERVENTION 1 — Force a diagnosis
"Stop. Do not edit anything.
In one paragraph, tell me: what do you think is broken,
what evidence supports that, and what would prove you wrong?"
INTERVENTION 2 — Reset the loop
"Discard everything you've tried in this session.
Start from `git status`. Tell me what's changed and what's not."
INTERVENTION 3 — Hand back the keys
"You're stuck. Stop tool use. List the three most likely root causes
and what I — the human — should check on each."
Prevent loops with success criteria
Most loops happen because the task has no exit condition. "Make this work" is a loop machine. "Make `pytest tests/test_auth.py::test_login` pass, then stop" is not. Give the agent a specific test, a specific command, or a specific file diff that defines done.
A bounded contract. The agent has somewhere to land and somewhere to fail gracefully.
# Claude Code prompt with a hard exit condition
"Fix the failing test in tests/test_auth.py.
Done is defined as:
1. `pytest tests/test_auth.py -x` exits 0.
2. No other test in the suite changes status.
3. The diff is under 30 lines.
If you cannot meet all three within 5 attempts, stop and report."
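The same three conditions can be checked mechanically once the agent claims it is done. A rough sketch: the pytest target and the 30-line budget come from the prompt above, and the full-suite rerun is a crude stand-in for criterion 2 that assumes the rest of the suite was green before the agent started.

```python
import subprocess

def contract_met(max_diff_lines=30):
    """Verify the three exit conditions from the prompt above."""
    # 1. The target test passes (exit code 0 is the hard success signal).
    if subprocess.run(["pytest", "tests/test_auth.py", "-x"]).returncode != 0:
        return False

    # 2. No other test changes status (simplified to "the whole suite passes",
    #    which assumes it passed before the agent touched anything).
    if subprocess.run(["pytest"]).returncode != 0:
        return False

    # 3. The diff stays inside the line budget.
    diff = subprocess.run(["git", "diff", "--numstat"],
                          capture_output=True, text=True)
    changed = 0
    for line in diff.stdout.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # "-" marks binary files in numstat output
            changed += int(added) + int(removed)
    return changed <= max_diff_lines
```

Whether you run it or the agent does, "done" stops being a matter of opinion.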
Loop-resistant agent setup
- Limit tool calls per session (Claude Code: env vars, Cursor: subscription tier)
- Use hooks to log every edit so you can audit a loop after the fact (a minimal logging sketch follows this list)
- Define `/stop` or `/diagnose` slash commands you can fire mid-session
- Keep test runs separate from edit cycles — run tests in a different terminal pane
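The audit-log hook does not need to be elaborate. Here is a minimal sketch: a script that appends one line per tool call to a log file, assuming your agent's hook mechanism pipes the call details to it as JSON on stdin (Claude Code's PostToolUse hooks work this way, though field names vary by tool and version, so treat them as placeholders).

```python
#!/usr/bin/env python3
"""Append one line per tool call to an audit log.

Intended to be wired into a post-tool-use hook. Assumes the hook passes
call details as JSON on stdin; the field names below are placeholders,
so check your tool's hook documentation for the exact payload shape.
"""
import json
import sys
import time

event = json.load(sys.stdin)
with open("/tmp/agent-audit.log", "a") as log:
    log.write(json.dumps({
        "time": time.strftime("%H:%M:%S"),
        "tool": event.get("tool_name"),
        "input": event.get("tool_input"),
    }) + "\n")
```

When a session goes sideways, grepping that log for the same file or command showing up over and over tells you which loop you hit.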
“An agent loop is not a model failure. It is a missing exit condition.”
Key terms in this lesson: agent loop, convergence, tool loop.
The big idea: agents loop when they have no clear definition of done. Spot the patterns within three turns, intervene with a forced diagnosis or git reset, and design tasks with bounded success criteria. Stopping a loop early is worth more than letting any agent run a marathon.
Related lessons
- Local Coding Models Need Smaller Loops (14 min): Ollama and local models can help with coding, but they need tighter context, smaller tasks, and clearer tool-call formatting than frontier cloud models.
- The Landscape: Copilot vs. Cursor vs. Windsurf vs. Claude Code (50 min): The AI coding tool market fragmented fast. Let's map the 2026 landscape honestly: who is for autocomplete, who is for agents, who wins on cost, and what the tradeoffs actually feel like.
- Red-Teaming Your AI-Generated Code (55 min): Agents ship working code that's also quietly insecure. Red-teaming means actively attacking your own code. Let's build the habits that catch real-world exploits before attackers do.
