When NOT to Use AI for Code
There are real moments where AI coding is slower, worse, or ethically wrong. Naming those moments is as important as naming the hype.
Lesson map
What this lesson covers, in order:
1. The Question Nobody on LinkedIn Wants to Answer
2. Risk
3. The novice trap
4. Proprietary data
Section 1
The Question Nobody on LinkedIn Wants to Answer
Every vendor will tell you to use AI everywhere. The honest answer is narrower. There are real categories of work where AI is slower, riskier, or ethically indefensible. Learning to refuse is a senior skill.
Hard no: do not use AI here
- Regulated industries without a compliant tool (PHI in raw ChatGPT, PCI data in unvetted systems)
- Proprietary competitor code — your agent's training data or logs may leak
- Security-critical cryptography you cannot rigorously verify
- Code your org has explicitly prohibited from AI assistance
- Anything where licensing of AI-generated code is unsettled for your product
Soft no: AI usually makes this worse
- Very short tasks — type-it-yourself is faster than prompt-and-review
- Learning a new language or framework — skipping the pain means skipping the learning
- Highly novel algorithms — AI averages; novel is the opposite
- Performance-critical hot paths — requires measurement, not intuition
- Legacy codebase with zero tests — you cannot verify the agent's output
The skill atrophy problem
Engineers who let agents write everything lose sharpness on fundamentals. Recognizing pointer arithmetic bugs, reading a stack trace cold, debugging a race condition — these remain your skills alone. Use AI to accelerate learning, not to replace the reps.
Data you should never paste
Compare the options
| Category | Example | Why |
|---|---|---|
| PII | Customer names, emails, addresses | Privacy law and consent |
| PHI | Health records, diagnoses | HIPAA and equivalents |
| Credentials | API keys, DB URLs, tokens | May appear in logs or training sets |
| Trade secrets | Proprietary algorithms, competitive info | Potential IP exposure |
| Legal holds | Litigation documents | Privilege and chain of custody |
A 3-second check before pasting into any AI tool. Make it muscle memory.
#!/usr/bin/env bash
# Before pasting anything into a public AI tool, grep it for red flags.
# Save this as check.sh and run it on any diff before copy-paste.
# Look for common secret patterns (case-insensitive).
grep -niE 'api[_-]?key|secret|token|password|BEGIN (RSA|EC) PRIVATE' "$1"
# Look for obvious PII (email addresses).
grep -niE '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' "$1"
# If anything matches, stop. Redact it, or use an enterprise tier instead.
# Usage: bash check.sh snippet.txt
Enterprise mitigations
- Use business tiers with data-processing addenda (Copilot Business, Claude for Enterprise, ChatGPT Enterprise)
- Turn off training on your data in settings — verify in the DPA
- Deploy self-hosted open models for sensitive workloads (Llama, Qwen)
- Write an AI use policy — explicit allowed and denied categories, review cadence
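Part of making that policy stick is automation: the same red-flag scan as the check script above can run before code ever leaves your machine. Here is a minimal sketch in POSIX shell; the function name and patterns are illustrative, not exhaustive, and a real deployment would lean on a dedicated scanner such as gitleaks or trufflehog.

```shell
#!/bin/sh
# Sketch: a reusable red-flag scan you could wire into a pre-commit hook
# or an editor macro. Reads text on stdin; exits nonzero on a red flag.
scan_red_flags() {
  if grep -qiE 'api[_-]?key|secret|token|password|BEGIN (RSA|EC) PRIVATE'; then
    echo "red flag: possible secret - do not paste or commit" >&2
    return 1
  fi
  return 0
}

# A clean snippet passes; a leaked key is blocked.
echo 'fn add(a, b) { a + b }' | scan_red_flags && echo "clean: ok"
echo 'export API_KEY=sk-12345' | scan_red_flags || echo "blocked as expected"
```

Because the function only signals via exit status, the same check drops into a git hook, a CI step, or a clipboard wrapper without changes.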
The licensing gray zone
AI-generated code's copyright status is unsettled and varies by jurisdiction. The US Copyright Office has indicated purely AI-generated work may not be copyrightable. If that matters to your product, document which parts are human-authored and preserve that provenance.
“The ability to say no, with reasons, is the skill that separates engineers from typists.”
The big idea: AI coding has real limits drawn by privacy law, licensing, learning, and craft. Naming those limits is how you use AI responsibly without pretending they do not exist.