Why AI 'Hallucinates' — and What's Actually Going On
AI confidently makes stuff up sometimes. It's not lying — it's doing exactly what it was built to do.
Lesson map
What this lesson covers, in order:
1. Why AI 'Hallucinates' — and What's Actually Going On
2. Hallucinations
3. Next-token prediction
4. Calibration
What to actually do
- Expect hallucinations most with specifics: facts, dates, names, and citations
- Newer models hallucinate less, but no model is at zero
- Good prompting (asking for sources, or asking 'are you sure?') reduces it
Key terms in this lesson: hallucinations, next-token prediction, calibration.
The big idea: Hallucinations aren't bugs to fix — they're built into how AI works. You're the verification layer.
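The big idea rests on next-token prediction: the model always emits whichever continuation scores as most plausible, and there is no truth check anywhere in that loop. Here is a minimal toy sketch of that selection step; the tokens and probabilities are invented for illustration, not taken from any real model:

```python
# Toy next-token predictor: always picks the highest-scoring continuation.
# A real LLM learns these scores from training data; the table below is
# made up to show the mechanism.

def next_token(probs: dict[str, float]) -> str:
    """Return the most plausible next token.
    Plausibility is the ONLY criterion -- nothing checks whether the
    continuation is true."""
    return max(probs, key=probs.get)

# A fluent, citation-shaped string can outscore an honest "not sure",
# because citations of that shape were common in training data.
probs = {
    '"Smith et al., 2019"': 0.62,   # plausible-sounding, may not exist
    '"I am not sure"': 0.08,
    '"no source found"': 0.05,
}
print(next_token(probs))  # prints "Smith et al., 2019" (with quotes)
```

That is why the fabricated citation wins: it is not a malfunction, it is the selection rule working as designed, which is exactly why you have to be the verification layer.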
Related lessons
Keep going
Creators · 45 min
Uncertainty Quantification in LLMs
A model that says 'I am 95 percent sure' and is wrong 40 percent of the time is miscalibrated. Measuring that gap is uncertainty quantification.
Creators · 40 min
Calibration
A calibrated model's 70 percent means it is right 70 percent of the time. Most LLMs are not calibrated. Here is what that costs you.
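The calibration idea above can be made concrete: group predictions by stated confidence, then compare average confidence in each bucket to the observed hit rate. A minimal sketch, with invented (confidence, was_correct) data:

```python
# Minimal calibration check: average stated confidence vs. actual
# accuracy within one confidence bucket. The prediction data below
# is invented for illustration.

def bucket_stats(preds: list[tuple[float, bool]], lo: float, hi: float):
    """Return (avg_confidence, accuracy) for predictions whose stated
    confidence falls in [lo, hi), or None if the bucket is empty."""
    in_bucket = [(c, ok) for c, ok in preds if lo <= c < hi]
    if not in_bucket:
        return None
    avg_conf = sum(c for c, _ in in_bucket) / len(in_bucket)
    accuracy = sum(ok for _, ok in in_bucket) / len(in_bucket)
    return avg_conf, accuracy

# (stated confidence, whether the answer was actually correct)
preds = [(0.95, True), (0.95, False), (0.90, True), (0.92, False),
         (0.70, True), (0.70, True), (0.72, False), (0.71, True)]

conf, acc = bucket_stats(preds, 0.90, 1.00)
# Here the model claims ~93 percent confidence but is right only
# 50 percent of the time: the gap is the miscalibration being measured.
```

A calibrated model would show conf roughly equal to acc in every bucket; the "95 percent sure, wrong 40 percent of the time" case from the card above is exactly a large gap in the top bucket.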
Builders · 30 min
Where Training Data Actually Comes From
You cannot understand modern AI without understanding its diet. Let's map where the data comes from, how it gets cleaned, and what that means.
