Lesson 332 of 2116
When NOT to Use AI for Coding
AI is a power tool. Some tasks are wrong for it. Learn the categories where AI assistance reliably makes things worse, and the human-only judgment calls AI cannot replace.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Not Every Nail Wants This Hammer
2. judgment
3. fit-for-purpose
4. domain knowledge
Section 1
Not Every Nail Wants This Hammer
Defaulting to AI for every coding task is like defaulting to a chainsaw for every cut. Sometimes you need a scalpel. Sometimes you need a butter knife. The mature engineer knows when to put the chainsaw down.
Categories where AI reliably degrades quality
Compare the options
| Category | Why AI fails | Use instead |
|---|---|---|
| Cryptography | Off-by-one in a hash invalidates security | Vetted libraries (libsodium, ring, NaCl) — no rolling your own |
| Database migrations on prod data | Irreversible — no test catches lost rows after the fact | Reviewed migrations, dry runs, backups, ops |
| Performance-critical inner loops | Optimizations are non-obvious, profiling-driven | Profiler + human; AI for explanation, not generation |
| Auth and authorization logic | Subtle bugs become CVEs | Established libraries; security review for changes |
| Code that interprets legal/policy text | Hallucination meets liability | Lawyer-reviewed; AI as draft only with human verification |
| Anything you can't roll back | Mistakes compound; review tax can't catch them | Manual + checklists + four-eyes |
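To make the cryptography row concrete: one of the classic subtle bugs AI happily reproduces is comparing secrets with ordinary equality, which leaks timing information. A minimal Python sketch (the token name and values are hypothetical) of the trap and the vetted stdlib fix:

```python
import hmac
import secrets

# Illustrative stored API token (hypothetical value).
STORED_TOKEN = secrets.token_hex(32)

def check_naive(candidate: str) -> bool:
    # The bug class: `==` short-circuits on the first differing
    # character, so response time leaks how much of the secret
    # matched — a textbook timing side channel.
    return candidate == STORED_TOKEN

def check_vetted(candidate: str) -> bool:
    # The vetted primitive: constant-time comparison from the stdlib.
    return hmac.compare_digest(candidate, STORED_TOKEN)
```

Both functions return the same booleans; only the timing differs. That is exactly the kind of property no unit test catches, which is why this category belongs to vetted libraries, not generated code.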
Tasks where AI shines vs. where it fails
- Shines: boilerplate, regex, glue code, file format conversion, test scaffolds, refactors with passing tests
- Mixed: novel algorithms, performance optimization, debugging unfamiliar codebases
- Fails: cryptography, novel concurrency primitives, anything safety-critical, anything that depends on private domain knowledge
- Catastrophic: AI-generated database migrations on production, AI-generated security reviews, AI-generated regulatory compliance code
Domain knowledge AI does not have
Models are trained on public code. Your codebase has private knowledge: which deprecated function is still used by a customer, which weird edge case Sarah from accounting depends on, which test is flaky for a real reason and which is flaky for a fake reason. AI cannot know any of this. If a task hinges on this knowledge, AI's contribution will be plausible noise.
Compliance and audit-track work
- HIPAA-regulated handling of PHI: every line needs a documented author
- PCI-regulated payment flow code: same
- SOC 2 audit-trail code: AI authorship complicates traceability
- Open-source projects with copyright provenance requirements
When you DO use AI in dangerous categories
A simple four-tier rubric you can apply per file or per repo.
```
# Risk-graded AI use, in increasing caution:

LOW — Use AI freely:
    scaffolds, tests for safe code, internal tooling, docs, DX scripts

MED — Use AI as drafter, human edits final:
    business logic, API endpoints, UI components, integrations

HIGH — AI explains, human writes:
    novel algorithms, perf-critical code, complex SQL on large tables

PROHIBITED — No AI authorship:
    cryptography, auth, prod migrations, financial transfers,
    legal/regulatory code, anything irreversible at scale
```
The human-only decisions
- Should we build this at all? (product judgment)
- Is this user request a good use of company time? (priority)
- Is this PR's tradeoff between speed and quality acceptable? (taste)
- Will this change cause political problems with team X? (org context)
- Is this commit message honest? (integrity)
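The four-tier rubric can be applied mechanically per file, for example as a CI gate on a pull request. A minimal sketch; the path globs are hypothetical and would need tuning for a real repo:

```python
from fnmatch import fnmatch

# Hypothetical path globs per tier — tune these for your repo layout.
RISK_TIERS = {
    "PROHIBITED": ["*/crypto/*", "*/auth/*", "*/migrations/*", "*/payments/*"],
    "HIGH": ["*/queries/*", "*_perf.*"],
    "MED": ["*/api/*", "*/ui/*"],
    "LOW": ["*/tests/*", "*/scripts/*", "*/docs/*"],
}

def ai_tier(path: str) -> str:
    # Most-cautious tier wins: check PROHIBITED first, LOW last.
    for tier in ("PROHIBITED", "HIGH", "MED", "LOW"):
        if any(fnmatch(path, pattern) for pattern in RISK_TIERS[tier]):
            return tier
    # Unknown code defaults to "AI drafts, human edits final".
    return "MED"
```

A CI job could then reject any diff where `ai_tier` returns `"PROHIBITED"` for a file whose commit is marked as AI-authored. The deliberate design choice is the default: unclassified code falls into MED, never LOW.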
Practical signals that you should put the AI down
1. You've been at it 90 minutes and the code is worse than when you started
2. You can't articulate the requirements clearly enough to prompt — go think first
3. You're rejecting every suggestion — your instinct is telling you something the AI can't help with
4. The bug is in code that touches money, identity, or safety — get a human reviewer
5. You're tempted to commit AI code without reading it because you're tired — that's the signal to stop entirely
“The skill is not using AI for everything. It's knowing what not to use it for.”
Key terms in this lesson
The big idea: AI is a tool with a strong default-on bias. The mature move is to actively decide where it does not belong. Cryptography, irreversible operations, compliance-heavy paths, and decisions that need your private context are categories where the right amount of AI is zero.
Related lessons
Keep going
Creators · 50 min
The Landscape: Copilot vs. Cursor vs. Windsurf vs. Claude Code
The AI coding tool market fragmented fast. Let's map the 2026 landscape honestly: who is for autocomplete, who is for agents, who wins on cost, and what the tradeoffs actually feel like.
Creators · 55 min
Red-Teaming Your AI-Generated Code
Agents ship working code that's also quietly insecure. Red-teaming means actively attacking your own code. Let's build the habits that catch real-world exploits before attackers do.
Creators · 45 min
Building With v0, Lovable, and Bolt (Fast App Prototyping)
AI app builders turn a prompt into a running app in minutes. Learn the strengths, the ceilings, and the moment you should eject to a real IDE.
