Prompt Anti-Patterns That Destroy AI Code Quality
Six prompt habits make AI code reliably worse. Learn the anti-patterns, why each one breaks the model's reasoning, and the small rephrases that fix them.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. Garbage In, Garbage Out, At Scale
- 2. Prompt anti-patterns
- 3. Specification
- 4. Scope creep
Section 1
Garbage In, Garbage Out, At Scale
Most bad AI code is bad-prompt code. The model is doing exactly what you asked, just for a slightly different question than the one in your head. Six anti-patterns produce most of the damage.
Anti-pattern 1 — The kitchen sink
Asking for ten requirements in one prompt. The model can hold five well, eight badly, ten not at all. By the time it gets to your last bullet it has dropped your third one. Result: code that looks complete but missed half the requirements.
Each turn does one thing well. The model holds the constraint because there's only one.
# Bad: kitchen sink
"Build a user signup endpoint with email validation, password hashing,
rate limiting, CAPTCHA, JWT issuance, refresh tokens, audit logging,
tracing, error handling, and unit tests."
# Better: chunked
"Build a user signup endpoint that accepts {email, password},
validates email format, hashes password with argon2, and returns 201.
Nothing else yet — no rate limit, no JWT, no logging."
# Then in turn 2: "Now add JWT issuance." Etc.

Anti-pattern 2 — Leading the witness
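Applied in one turn, the chunked prompt above stays inside scope. A minimal sketch of what turn 1 might produce, using `hashlib.scrypt` as a stand-in for argon2 (the endpoint shape, regex, and names are illustrative, not from the lesson):

```python
import hashlib
import os
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def signup(email: str, password: str) -> tuple[int, dict]:
    """Turn-1 scope only: validate email, hash password, return 201.
    No rate limit, no JWT, no logging — those are later turns."""
    if not EMAIL_RE.match(email):
        return 400, {"error": "invalid email"}
    salt = os.urandom(16)
    # Stand-in for argon2; swap in argon2-cffi's PasswordHasher in practice.
    digest = hashlib.scrypt(
        password.encode(), salt=salt,
        n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024, dklen=32,
    )
    # Storing (salt, digest) is the persistence layer's job — out of scope here.
    return 201, {"email": email}
```

Because the turn held one constraint, the function does one thing and the next turn can build on it.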
"Why is my function broken?" assumes the function is broken. The model will find a reason even if there isn't one. Now you have a fix for a non-bug, on top of the real bug.
Don't tell the model the conclusion. Tell it the evidence.
# Bad: leading
"Why is my login function returning the wrong user?"
# Better: neutral
"My login function returns user A when I pass user B's credentials.
Here is the function. Here is the test that exposes the bug.
Without changing code, list the most likely root causes."

Anti-pattern 3 — The vague verb
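To see why the neutral framing works, here is a hypothetical pair of attachments for that prompt — a lookup with a subtle bug and the assertion that exposes it (all names invented for illustration):

```python
USERS = {"alice": "pw-a", "bob": "pw-b"}

def login(username: str, password: str):
    # Bug: matches on password alone and ignores the username,
    # so anyone holding user A's password "logs in" as user A.
    for name, pw in USERS.items():
        if pw == password:
            return name
    return None

# The evidence to paste into the prompt: pass user B's name with
# user A's password, and the wrong user comes back.
assert login("bob", "pw-a") == "alice"
```

Handed this evidence, the model can rank root causes; handed only "why is it broken?", it will invent one.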
Compare the options
| Vague | Specific | Why it matters |
|---|---|---|
| "Optimize this function" | "Reduce P95 latency below 50ms; same output, same signature" | Optimize for what? Memory? Latency? Readability? |
| "Refactor for readability" | "Extract the `parse_*` helpers into a separate module; no behavior change" | Readability has a thousand local maxima |
| "Make it more idiomatic" | "Replace manual loops with list comprehensions where it doesn't hurt readability" | Idiomatic to whom? |
| "Clean up the code" | "Remove unused imports and variables; do not change logic" | Cleanup is the most context-dependent verb in software |
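As a concrete instance of the "idiomatic" row, a hypothetical before/after where the specific instruction — comprehensions, no behavior change — is actually checkable:

```python
# Before: manual loop
def emails_of_active(users):
    result = []
    for u in users:
        if u["active"]:
            result.append(u["email"])
    return result

# After: list comprehension, same signature, same output
def emails_of_active_v2(users):
    return [u["email"] for u in users if u["active"]]
```

"Same output, same signature" turns a taste judgment into a testable claim: run both on the same input and compare.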
Anti-pattern 4 — Pretend the previous turn didn't happen
Models trust your latest message more than the file. If you say "that's wrong, try again" without saying what's wrong, the model will rewrite the working parts and break them, while keeping the broken part. Always preserve what works in your correction.
Tell the model what to keep, not just what to discard.
# Bad: blanket rejection
"That's wrong. Try again."
# Better: scope the correction
"The validation logic in lines 12-18 is correct, keep it.
The error handler at line 24 catches everything as 500 — that's wrong.
Validation errors should return 400, auth errors should return 401.
Only change line 24."

Anti-pattern 5 — Asking for code without examples
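The scoped correction above describes a small, checkable change. A sketch of what the corrected handler might look like (the exception names and handler shape are assumptions, not from the lesson):

```python
class ValidationError(Exception):
    pass

class AuthError(Exception):
    pass

def handle(process, request):
    """Map error types to status codes instead of a blanket 500."""
    try:
        return 200, process(request)
    except ValidationError as e:
        return 400, {"error": str(e)}   # previously swallowed into 500
    except AuthError as e:
        return 401, {"error": str(e)}   # previously swallowed into 500
    except Exception:
        return 500, {"error": "internal error"}
```

Because the correction named what to keep (the validation logic) and the exact mapping to apply, only the handler changes.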
Almost every confusion in code generation can be killed by one input/output example. "Parse this CSV" is ambiguous. "Parse this CSV — here is a 3-line sample, here is the JSON I want out" is unambiguous. The example does the work that ten paragraphs of constraints would.
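A worked version of that CSV example — the three-line sample plus the wanted JSON is the whole spec (the column names here are invented for illustration):

```python
import csv
import io

# The example IS the prompt: sample in, structure out.
SAMPLE = """name,age
Ada,36
Lin,28
"""
WANT = [{"name": "Ada", "age": 36}, {"name": "Lin", "age": 28}]

def parse(text: str) -> list[dict]:
    rows = csv.DictReader(io.StringIO(text))
    return [{"name": r["name"], "age": int(r["age"])} for r in rows]

assert parse(SAMPLE) == WANT
```

The sample answers every question a paragraph of constraints would have to spell out: header row or not, string or int ages, list of objects or object of lists.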
Anti-pattern 6 — Ignoring the file you're editing
Asking for code as if the project doesn't exist. The model invents an architecture, picks libraries you don't use, and follows conventions that fight your codebase. Cursor, Claude Code, and Copilot all index the project — but only when you point them at it.
Anchor the model in your codebase or it will write its own.
# Bad: detached
"Write a React component that fetches users."
# Better: anchored
"In src/components/UserList.tsx, write a React component
that follows the conventions in src/components/PostList.tsx.
Use the existing hook in src/hooks/useFetch.ts. Do not add
a new data-fetching library."

“Most prompt engineering is just remembering to write down what you already know.”
The big idea: bad prompts produce bad code with mechanical reliability. The fixes are not exotic — chunk requirements, stay neutral, use specific verbs, scope corrections, include examples, anchor in your codebase. Sixty seconds of prompt hygiene saves an hour of debugging.
Related lessons
Keep going
Creators · 12 min
Test-Driven Prompting — Failing Tests Are the Best Spec
Test-driven development meets AI: paste a failing test, ask the agent to make it green, iterate. Learn the discipline that makes AI code reliably correct because correctness is now executable.
Builders · 35 min
Tests as Prompts — an Unexpected Superpower
Writing a test first is not just good engineering. It is the clearest possible prompt for an AI. Let's use tests to make AI code reliable.
Creators · 50 min
The Landscape: Copilot vs. Cursor vs. Windsurf vs. Claude Code
The AI coding tool market fragmented fast. Let's map the 2026 landscape honestly: who is for autocomplete, who is for agents, who wins on cost, and what the tradeoffs actually feel like.
