Lesson 271 of 2244
When NOT to Trust AI
Six categories where AI is dangerously wrong often enough that you should always verify — or skip the AI entirely.
Adults & Professionals · Safety & Governance · ~5 min read
Confidence is not the same as correctness
Chatbots write smoothly even when they are wrong. That smooth voice can be more convincing than the truth. Knowing when not to trust them is a skill.
Six high-risk categories
1. Specific medical dosages or drug interactions — confirm with a pharmacist.
2. Legal questions about your specific situation — confirm with a lawyer.
3. Recent news — the AI's information may be months old.
4. Specific phone numbers, addresses, or business hours — call to verify.
5. Genealogy facts — the AI may invent ancestors that match your description.
6. Anything where being wrong would hurt someone.
When AI is usually fine
- Drafting a friendly email or note.
- Explaining a general concept (gravity, photosynthesis, how a bill becomes a law).
- Suggesting recipe substitutions.
- Helping you brainstorm a list of options.
The big idea: trust AI for ideas. Trust humans and primary sources for any fact that matters.
Related lessons
Adults & Professionals · 40 min
AI Employee Monitoring: Where Surveillance Becomes Counterproductive
AI productivity-monitoring tools have exploded. The research shows they often hurt the productivity they're meant to measure — while damaging trust permanently.
Adults & Professionals · 40 min
Deepfake Detection: What Works, What Doesn't, and Why It Matters
AI-generated media has crossed the perceptual threshold where humans cannot reliably detect it. Detection tools help — but are in an arms race with generation.
Adults & Professionals · 11 min
Prompt Injection Defense: Protecting AI Systems From Malicious Inputs
Prompt injection is the SQL injection of the AI era — and it's already being exploited in production systems. Defending against it requires multiple layers, not a single fix.
