AI is a power tool. Some tasks are wrong for it. Learn the categories where AI assistance reliably makes things worse, and the human-only judgment calls AI cannot replace.
Defaulting to AI for every coding task is like defaulting to a chainsaw for every cut. Sometimes you need a scalpel. Sometimes you need a butter knife. The mature engineer knows when to put the chainsaw down.
| Category | Why AI fails | Use instead |
|---|---|---|
| Cryptography | Off-by-one in a hash invalidates security | Vetted libraries (libsodium, ring, NaCl) — no rolling your own |
| Database migrations on prod data | Irreversible — no test catches lost rows after the fact | Reviewed migrations, dry runs, backups, ops |
| Performance-critical inner loops | Optimizations are non-obvious, profiling-driven | Profiler + human; AI for explanation, not generation |
| Auth and authorization logic | Subtle bugs become CVEs | Established libraries; security review for changes |
| Code that interprets legal/policy text | Hallucination meets liability | Lawyer-reviewed; AI as draft only with human verification |
| Anything you can't roll back | Mistakes compound; no review catches them after the fact | Manual process + checklists + four-eyes review |
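The cryptography row is worth one concrete illustration. A classic subtle bug AI-generated code tends to produce is comparing MACs with a plain `==`, which exits early on the first mismatched byte and leaks timing information. Vetted primitives exist precisely for this; a minimal sketch using only the Python standard library (the key and message here are illustrative, not a real protocol):

```python
import hashlib
import hmac

key = b"server-side-secret"          # illustrative key, not a real secret
msg = b"amount=100&to=alice"

# The vetted way: let the library compute the MAC...
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(candidate: str) -> bool:
    # ...and compare in constant time. hmac.compare_digest avoids the
    # early-exit timing leak that a hand-rolled `candidate == expected`
    # comparison would have.
    return hmac.compare_digest(candidate, expected)

print(verify(expected))              # True
print(verify("0" * len(expected)))   # False
```

The fix is one function call, but knowing that the naive comparison is wrong is exactly the kind of judgment the table says not to outsource.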
Models are trained on public code. Your codebase has private knowledge: which deprecated function is still used by a customer, which weird edge case Sarah from accounting depends on, which test is flaky for a real reason and which is flaky for a fake reason. AI cannot know any of this. If a task hinges on this knowledge, AI's contribution will be plausible noise.
# Risk-graded AI use, in increasing caution

```text
LOW — Use AI freely:
    scaffolds, tests for safe code, internal tooling, docs, DX scripts

MED — Use AI as drafter, human edits final:
    business logic, API endpoints, UI components, integrations

HIGH — AI explains, human writes:
    novel algorithms, perf-critical code, complex SQL on large tables

PROHIBITED — No AI authorship:
    cryptography, auth, prod migrations, financial transfers,
    legal/regulatory code, anything irreversible at scale
```

A simple four-tier rubric you can apply per file or per repo.

> The skill is not using AI for everything. It's knowing what not to use it for.
>
> — A skeptical principal engineer
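The rubric is simple enough to encode as a lookup you could drop into a review checklist or a pre-commit note. The category names and the mapping below are illustrative assumptions mirroring the four tiers above, not a vetted policy:

```python
# Hypothetical sketch: the four-tier rubric as a lookup. Category labels
# are made up for illustration; adapt them to your own repo's taxonomy.
RISK_TIERS = {
    "low": {"scaffold", "test", "internal-tool", "docs", "dx-script"},
    "med": {"business-logic", "api-endpoint", "ui-component", "integration"},
    "high": {"novel-algorithm", "perf-critical", "complex-sql"},
    "prohibited": {"cryptography", "auth", "prod-migration",
                   "financial-transfer", "legal-code", "irreversible"},
}

POLICY = {
    "low": "use AI freely",
    "med": "AI drafts, human edits final",
    "high": "AI explains, human writes",
    "prohibited": "no AI authorship",
}

def ai_policy(category: str) -> str:
    """Return the AI-usage rule for a task category, per the rubric."""
    for tier, categories in RISK_TIERS.items():
        if category in categories:
            return POLICY[tier]
    # Unknown categories default to caution rather than permissiveness.
    return "unclassified: treat as high"

print(ai_policy("cryptography"))   # no AI authorship
print(ai_policy("api-endpoint"))   # AI drafts, human edits final
```

The one design choice worth copying even if nothing else survives: unknown categories fall through to caution, not to "use AI freely".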
The big idea: AI is a tool with a strong default-on bias. The mature move is to actively decide where it does not belong. Cryptography, irreversible operations, compliance-heavy paths, and decisions that need your private context are categories where the right amount of AI is zero.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-coding-debug-when-not-to-use-ai-creators
1. What is the core idea behind "When NOT to Use AI for Coding"?
2. Which term best describes a foundational idea in "When NOT to Use AI for Coding"?
3. A learner studying "When NOT to Use AI for Coding" would need to understand which concept?
4. Which of these is directly relevant to "When NOT to Use AI for Coding"?
5. Which of the following is a key point about "When NOT to Use AI for Coding"?
6. Which of these does NOT belong in a discussion of "When NOT to Use AI for Coding"?
7. Which statement is accurate regarding "When NOT to Use AI for Coding"?
8. Which of these does NOT belong in a discussion of "When NOT to Use AI for Coding"?
9. What is the key insight about "The irreversibility test" in the context of "When NOT to Use AI for Coding"?
10. What is the key insight about "Disclosure is becoming standard" in the context of "When NOT to Use AI for Coding"?
11. Which statement accurately describes an aspect of "When NOT to Use AI for Coding"?
12. What does working with "When NOT to Use AI for Coding" typically involve?
13. Which of the following is true about "When NOT to Use AI for Coding"?
14. Which best describes the scope of "When NOT to Use AI for Coding"?
15. Which section heading best belongs in a lesson about "When NOT to Use AI for Coding"?