When Perplexity Hallucinates: Pattern-Spotting And Recovery
Perplexity hallucinates differently than ChatGPT. Recognizing those specific failure modes is the difference between catching them and embedding them in your work.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Why grounded models still hallucinate
2. The five failure patterns
3. Recovery moves when you spot a hallucination
4. Build a hallucination journal
Concept cluster
Terms to connect while reading
hallucination · false attribution · phantom citation
Section 1
Why grounded models still hallucinate
Retrieval reduces hallucination, but it doesn't eliminate it. The model can still misread a source, attribute the wrong claim to the right URL, or glue together passages from different pages into a single fluent paragraph that no individual page actually says. Perplexity's failure modes are subtler than pure-LLM hallucinations because the citations make them look authoritative.
The five failure patterns
1. Phantom citation: the URL exists but the claim is not on the page (the easiest to check mechanically; see the sketch after this list)
2. Summary drift: the summary subtly overstates what the source actually says
3. Source soup: a single sentence cites three sources, only one of which actually contains the claim
4. Outdated as current: a 2022 article cited as if its claims still hold in 2026
5. Confident no-such-thing: a fabricated entity, person, or paper cited with what looks like a real URL
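Pattern 1 is the only one a script can catch cheaply. Here is a minimal sketch of that check, assuming the third-party `requests` library; the function name and example URL are illustrative, and a plain substring match will miss paraphrases and text split by HTML tags:

```python
import requests

def claim_on_page(url: str, claim: str) -> bool:
    """Rough phantom-citation check: does the cited page actually
    contain the claimed text? A substring match is a crude proxy;
    paraphrased or tag-interrupted claims need fuzzier matching."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # Normalize whitespace and case before comparing.
    page = " ".join(resp.text.split()).lower()
    return " ".join(claim.split()).lower() in page

# Example (hypothetical URL and claim):
# claim_on_page("https://example.com/post", "the model shipped in March 2023")
```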
Recovery moves when you spot a hallucination
- Open the cited URL; if the claim isn't there, ask Perplexity to "find me a source that actually says X"
- If the second answer also fails, the claim probably isn't true; pivot the question
- Search the exact quoted phrase in Google, in quotes, and check whether it appears anywhere (see the sketch after this list)
- Switch focus mode (Academic instead of All) and re-run
- If the pattern recurs in the thread, start a fresh thread; earlier context can poison subsequent answers
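The exact-phrase move is also easy to mechanize. A minimal sketch that just builds the quoted-phrase search URL; the function name is illustrative, and opening the URL and reading the results stays manual:

```python
from urllib.parse import quote_plus

def exact_phrase_url(phrase: str) -> str:
    """Build a quoted-phrase Google search URL. If the exact phrase
    appears nowhere on the web, the 'quote' was probably fabricated."""
    return "https://www.google.com/search?q=" + quote_plus(f'"{phrase}"')

print(exact_phrase_url("no individual page actually says this"))
```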
Build a hallucination journal
Keep a running list of hallucinations you've caught — what the prompt was, what the false claim was, what the real answer was. After 20 entries, patterns emerge: certain topics, certain question shapes, certain time windows fail more than others. Knowing your own failure surface beats trusting any benchmark.
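A journal only pays off if entries are structured enough to tally. A minimal sketch of one way to keep it, assuming a local JSON-lines file; the filename and field names are illustrative:

```python
import json
from collections import Counter
from pathlib import Path

JOURNAL = Path("hallucination_journal.jsonl")  # illustrative filename

def log_entry(prompt: str, false_claim: str, real_answer: str, topic: str) -> None:
    """Append one caught hallucination as a JSON line."""
    entry = {"prompt": prompt, "false_claim": false_claim,
             "real_answer": real_answer, "topic": topic}
    with JOURNAL.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def failure_surface() -> Counter:
    """Tally topics; after ~20 entries the skew shows which
    question shapes fail most often for you."""
    with JOURNAL.open() as f:
        return Counter(json.loads(line)["topic"] for line in f)
```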
The big idea: cited models still lie. Knowing the specific patterns Perplexity falls into is the verification skill, and it does not transfer cleanly from how you check ChatGPT.
