When AI Gets It Wrong
Hallucinations: confident nonsense, and how to catch it.
LLMs hallucinate. That’s the term for when an AI confidently states something that is completely made up. Not “mostly right” — entirely fabricated, and it sounds just as confident as when it’s right.
Why does it happen?
Remember the next-word-prediction trick? The AI always produces the most likely-looking answer. “Likely-looking” and “true” aren’t the same thing. If the training data contained lots of confident-sounding sentences on a topic the AI actually doesn’t know well, it happily generates another confident-sounding sentence — and gets it wrong.
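Here is a toy sketch of that idea, in Python. The "model" below is just a table of invented word-frequency counts (all the numbers and words are made up for illustration): it picks whatever word followed most often in its pretend training text, with no notion of truth at all.

```python
# Toy illustration: a "model" that only knows which words tend to follow
# which. All counts here are invented for the example.
from collections import Counter

# Pretend that in our training text, "The capital of Australia is ..."
# was followed by "Sydney" more often than by "Canberra".
next_word_counts = {
    "is": Counter({"Sydney": 9, "Canberra": 3}),
}

def predict_next(word: str) -> str:
    """Return the most frequent follower -- the 'likely-looking' choice."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("is"))  # prints "Sydney": likely-looking, but wrong
```

A real LLM is vastly more sophisticated than a frequency table, but the failure mode is the same: "most likely next word" is not the same objective as "true statement."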
When hallucinations are most dangerous
- Obscure facts. Dates, names, quote attributions. AIs often invent plausible-looking citations.
- Math you haven’t checked. Arithmetic with more than a few steps.
- Legal, medical, financial advice. Never rely on an LLM alone here.
- Code that imports weird libraries. LLMs love inventing npm packages that don’t exist.
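Invented package names are easy to catch, because the public npm registry answers 404 for any name that was never published. Here is a minimal sketch, assuming you have network access; the status-code logic is split into its own function so it can be reasoned about on its own.

```python
# Sketch: check whether an npm package name actually exists by asking
# the public registry (https://registry.npmjs.org/<name>).
import urllib.error
import urllib.request

def status_means_exists(status: int) -> bool:
    """200 -> the package is registered; 404 -> the name is unknown."""
    if status == 200:
        return True
    if status == 404:
        return False
    raise ValueError(f"unexpected registry status: {status}")

def npm_package_exists(name: str) -> bool:
    """Query the registry for `name`. Requires network access."""
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return status_means_exists(resp.status)
    except urllib.error.HTTPError as err:
        return status_means_exists(err.code)
```

Before running AI-generated code, a quick `npm_package_exists("some-suggested-package")` (or simply searching the registry in a browser) tells you whether the import is real.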
Three defenses
- Make it cite. Ask it to “quote the exact source.” If it can’t, be suspicious.
- Cross-check on a search engine. 15 seconds of Googling beats an hour of cleaning up a wrong AI answer.
- Compare models. Ask Claude, GPT, and Gemini the same question. If they disagree, at least one of them is probably hallucinating.
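The compare-models check can even be done mechanically once you have the answers. A minimal sketch, assuming you have already pasted each model's reply into a dict (the model names and answers below are just example labels):

```python
# Sketch of the "compare models" defense: flag a claim as unverified
# whenever the collected answers don't all agree.
def answers_agree(answers: dict) -> bool:
    """Normalize case and whitespace, then check all answers match."""
    normalized = {" ".join(a.lower().split()) for a in answers.values()}
    return len(normalized) == 1

# Example data, invented for illustration:
answers = {
    "claude": "Canberra",
    "gpt": "canberra",
    "gemini": "Sydney",   # one model disagrees
}
print(answers_agree(answers))  # False -> treat the claim as unverified
```

Agreement is no guarantee of truth (models trained on similar data can share mistakes), but disagreement is a cheap, reliable signal to go verify.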
Try it yourself
Ask your AI tool: “Give me three quotes from the book ‘The Silver Compass’ by Miriam Held, with page numbers.” This book doesn’t exist. If the tool confidently invents quotes, you just caught a hallucination in the wild.
