Why AI Hallucinates and What Actually Reduces It
A clear-eyed look at the failure mode and the techniques that actually help.
Lesson map
The main moves, in order:
1. The premise
2. Hallucination
3. Grounding
4. Verification
Section 1
The premise
Hallucination is not a bug to be patched — it is intrinsic to how language models work. Mitigation is real and powerful, but elimination is not.
What AI does well here
- Reducing hallucination significantly via RAG and explicit citation requirements (see the sketch after this list)
- Asking the model to express uncertainty when it lacks confidence
- Cross-checking generated facts against trusted sources programmatically
- Designing flows that fail gracefully when the model is wrong
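To make several of these concrete, here is a minimal sketch of a grounded answer flow combining RAG-style source injection, an explicit citation requirement, a programmatic citation check, and graceful failure. It assumes a generic `llm_complete(prompt) -> str` callable standing in for whatever completion API you use; the prompt wording, the `INSUFFICIENT_SOURCES` sentinel, and the `[n]` citation format are illustrative choices, not a fixed recipe.

```python
import re
from typing import Callable

# Illustrative prompt: ground the model in retrieved sources, require citations,
# and give it an explicit escape hatch instead of forcing a guess.
GROUNDED_PROMPT = """Answer the question using ONLY the numbered sources below.
Cite every claim as [n], where n is a source number.
If the sources do not contain the answer, reply exactly: INSUFFICIENT_SOURCES

Sources:
{sources}

Question: {question}
"""

def answer_grounded(
    question: str,
    sources: list[str],
    llm_complete: Callable[[str], str],  # hypothetical stand-in for your LLM API
) -> str:
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    reply = llm_complete(GROUNDED_PROMPT.format(sources=numbered, question=question))

    # Graceful failure: surface an honest "not found" instead of a confident guess.
    if "INSUFFICIENT_SOURCES" in reply:
        return "No answer found in the provided sources."

    # Programmatic cross-check: every cited index must point at a real source.
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", reply)}
    if not cited or any(n < 1 or n > len(sources) for n in cited):
        return "Answer rejected: missing or invalid citations."
    return reply
```

The citation check is deliberately shallow: it catches fabricated or missing citations, not claims that misstate a real source, so stronger pipelines pair it with overlap or entailment checks against the cited text.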
What AI cannot do
- Eliminate hallucination entirely
- Reliably distinguish its own confident-correct outputs from its confident-wrong ones
- Produce uncertainty reports that can be trusted at face value (see the calibration sketch after this list)
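The last point is measurable rather than merely asserted: if you log each answer's stated confidence alongside whether it later proved correct, you can check calibration directly. Below is a minimal sketch under that assumption; the record format, function name, and 0.1 bucket width are all illustrative.

```python
from collections import defaultdict

def calibration_by_bucket(
    records: list[tuple[float, bool]], width: float = 0.1
) -> dict[float, float]:
    """Compare stated confidence with observed accuracy, bucket by bucket.

    `records` holds (stated_confidence in [0, 1], was_correct) pairs.
    A well-calibrated model is right ~90% of the time when it says 0.9;
    a large gap means its self-reports cannot be taken at face value.
    """
    n_buckets = int(round(1 / width))
    buckets: dict[int, list[bool]] = defaultdict(list)
    for confidence, correct in records:
        idx = min(int(confidence * n_buckets), n_buckets - 1)  # clamp 1.0 into top bucket
        buckets[idx].append(correct)
    return {
        round(idx * width, 2): sum(outcomes) / len(outcomes)  # bucket -> accuracy
        for idx, outcomes in sorted(buckets.items())
    }
```

For example, `calibration_by_bucket([(0.9, True), (0.9, False), (0.9, False)])` reports about 0.33 accuracy in the 0.9 bucket: confidently wrong, and only a log like this, not the model's say-so, reveals it.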
