Lesson 1233 of 1570
Why AI Hallucinates: The Three Types You'll Actually See
Not all hallucinations are alike — citation lies, fact lies, and confident-tone lies each need a different defense.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Hallucination
3. Citation
4. Factuality
Concept cluster
Terms to connect while reading
Section 1
The big idea
AI doesn't 'lie' randomly — it lies in patterns you can predict. Knowing the three types saves you from getting fooled.
Some examples
- Citation hallucination: ChatGPT invents a JSTOR article that fits perfectly
- Fact hallucination: dates, names, statistics confidently wrong
- Confidence hallucination: AI sounds certain about something it guessed
- Recent-event hallucination: AI fills knowledge gaps with plausible fiction
Try it!
Ask ChatGPT for 3 statistics on any topic. Verify each one against a real source. Notice which type of hallucination it gives you.
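The exercise above boils down to a simple tally: for each statistic you checked, note which hallucination type (if any) it showed, then count the pattern. Here is a minimal sketch; the logged statistics and verdicts are invented placeholders, not real verification results:

```python
from collections import Counter

# Hypothetical verification log: (statistic checked, hallucination type or None).
# These entries are made-up examples of what your own log might look like.
checked = [
    ("'72% of users prefer X', but no such survey exists", "fact"),
    ("Cited a JSTOR article that turned out not to exist", "citation"),
    ("Stated a guess as if it were certain", "confidence"),
]

# Tally which hallucination types showed up in your spot-check.
tally = Counter(kind for _, kind in checked if kind is not None)
for kind, count in tally.most_common():
    print(f"{kind}: {count}")
```

Running your own log through a tally like this makes the lesson's point concrete: the failures cluster into a few predictable types rather than appearing at random.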
Key terms in this lesson
End-of-lesson quiz
Check what stuck
15 questions.
Related lessons
Keep going
Builders · 7 min
AI and hallucination vs mistake: spot when AI is making it up
Learn the difference between an AI hallucination and a regular wrong answer.
Creators · 45 min
Uncertainty Quantification in LLMs
A model that says 'I am 95 percent sure' and is wrong 40 percent of the time is miscalibrated. Measuring that gap is uncertainty quantification.
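The numbers in that description can be checked with one line of arithmetic: a model that claims 95 percent confidence but is wrong 40 percent of the time is actually right only 60 percent of the time, so its calibration gap is 0.95 minus 0.60, or 0.35. A minimal sketch, using the same invented numbers:

```python
# Calibration gap: |claimed confidence - actual accuracy|.
# Numbers mirror the example in the text: 95% stated confidence,
# wrong 40% of the time (so 60% actual accuracy).
claimed_confidence = 0.95
actual_accuracy = 1.0 - 0.40  # wrong 40% of the time

gap = abs(claimed_confidence - actual_accuracy)
print(f"calibration gap: {gap:.2f}")  # a perfectly calibrated model would show 0.00
```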
Explorers · 40 min
AI and the Confidence Trick: Sounding Sure but Being Wrong
Learn that AI can sound super sure even when it is wrong.
