Six prompt habits make AI code reliably worse. Learn the anti-patterns, why each one breaks the model's reasoning, and the small rephrases that fix them.
Most bad AI code is bad-prompt code. The model is doing exactly what you asked, just for a slightly different question than the one in your head. Six anti-patterns produce most of the damage.
Asking for ten requirements in one prompt. The model can hold five well, eight badly, ten not at all. By the time it gets to your last bullet it has dropped your third one. Result: code that looks complete but missed half the requirements.
# Bad: kitchen sink
"Build a user signup endpoint with email validation, password hashing,
rate limiting, CAPTCHA, JWT issuance, refresh tokens, audit logging,
tracing, error handling, and unit tests."
# Better: chunked
"Build a user signup endpoint that accepts {email, password},
validates email format, hashes password with argon2, and returns 201.
Nothing else yet — no rate limit, no JWT, no logging."
# Then in turn 2: "Now add JWT issuance." Etc.

Each turn does one thing well. The model holds the constraint because there's only one.

"Why is my function broken?" assumes the function is broken. The model will find a reason even if there isn't one. Now you have a fix for a non-bug, on top of the real bug.
# Bad: leading
"Why is my login function returning the wrong user?"
# Better: neutral
"My login function returns user A when I pass user B's credentials.
Here is the function. Here is the test that exposes the bug.
Without changing code, list the most likely root causes."

Don't tell the model the conclusion. Tell it the evidence.

| Vague | Specific | Why it matters |
|---|---|---|
| "Optimize this function" | "Reduce P95 latency below 50ms; same output, same signature" | Optimize for what? Memory? Latency? Readability? |
| "Refactor for readability" | "Extract the `parse_*` helpers into a separate module; no behavior change" | Readability has a thousand local maxima |
| "Make it more idiomatic" | "Replace manual loops with list comprehensions where it doesn't hurt readability" | Idiomatic to whom? |
| "Clean up the code" | "Remove unused imports and variables; do not change logic" | Cleanup is the most context-dependent verb in software |
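The "more idiomatic" row is a good test case for why the specific phrasing works: it names the exact transformation and its guardrail. A minimal before/after sketch of what that prompt would produce (hypothetical code, invented here for illustration):

```python
# Before: a manual loop building a list of even squares
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# After: the same behavior as a list comprehension,
# shorter and still readable
squares = [n * n for n in range(10) if n % 2 == 0]

print(squares)  # [0, 4, 16, 36, 64]
```

The prompt's "where it doesn't hurt readability" clause matters: a triple-nested comprehension would satisfy "idiomatic" but fail the guardrail, and the specific wording gives the model permission to leave those loops alone.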
Models trust your latest message more than the file. If you say "that's wrong, try again" without saying what's wrong, the model will rewrite the working parts and break them, while keeping the broken part. Always preserve what works in your correction.
# Bad: blanket rejection
"That's wrong. Try again."
# Better: scope the correction
"The validation logic in lines 12-18 is correct, keep it.
The error handler at line 24 catches everything as 500 — that's wrong.
Validation errors should return 400, auth errors should return 401.
Only change line 24."

Tell the model what to keep, not just what to discard.

Almost every confusion in code generation can be killed by one input/output example. "Parse this CSV" is ambiguous. "Parse this CSV — here is a 3-line sample, here is the JSON I want out" is unambiguous. The example does the work that ten paragraphs of constraints would.
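To see how much work a single input/output example does, here is the "parse this CSV" request made concrete. The sample rows and expected output below are invented for illustration; together they pin down the contract (column names, that `age` is a number, the output shape) that no pile of adjectives would:

```python
import csv
import io

# A 3-line sample of the input (hypothetical data):
csv_sample = """name,age,city
Ada,36,London
Grace,45,Arlington"""

# The exact output wanted, stated as data rather than described:
expected = [
    {"name": "Ada", "age": 36, "city": "London"},
    {"name": "Grace", "age": 45, "city": "Arlington"},
]

def parse_users(text: str) -> list[dict]:
    """Parse the CSV into exactly the shape the example specifies."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["age"] = int(row["age"])  # the example shows age as a number
        rows.append(row)
    return rows

assert parse_users(csv_sample) == expected
```

Without the example, the model must guess whether you want dicts or tuples, strings or ints, a list or a generator. With it, every one of those questions is already answered.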
Asking for code as if the project doesn't exist. The model invents an architecture, picks libraries you don't use, and follows conventions that fight your codebase. Cursor, Claude Code, and Copilot all index the project — but only when you point them at it.
# Bad: detached
"Write a React component that fetches users."
# Better: anchored
"In src/components/UserList.tsx, write a React component
that follows the conventions in src/components/PostList.tsx.
Use the existing hook in src/hooks/useFetch.ts. Do not add
a new data-fetching library."

Anchor the model in your codebase or it will write its own.

Most prompt engineering is just remembering to write down what you already know.
— An LLM application engineer
The big idea: bad prompts produce bad code with mathematical reliability. The fixes are not exotic — chunk requirements, stay neutral, use specific verbs, scope corrections, include examples, anchor in your codebase. Sixty seconds of prompt hygiene saves an hour of debugging.