Confidence hallucination: AI sounds certain about something it guessed
Recent-event hallucination: AI fills knowledge gaps with plausible fiction
Try It!
Ask ChatGPT for three statistics on any topic, then verify each one against a reliable source. Notice which type of hallucination, if any, it gives you.
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-foundations-AI-and-hallucination-types-r8a10-teen
What makes the three main types of AI hallucinations useful to know?
They appear in different colors on your screen
They only happen when the AI is offline
Each type requires a different strategy to catch and correct
They always include warning labels
An AI tells you about a scientific study from a journal that doesn't exist. What type of hallucination is this?
Recent-event hallucination
Fact hallucination
Confidence hallucination
Citation hallucination
A friend asks why you shouldn't just trust an AI when it sounds completely sure about something. What should you say?
AI is always right when it sounds confident
You should only verify confident answers
Confidence and accuracy are not the same thing
Confident AI answers take longer to generate
Your classmate claims AI can only make up fake dates and names. What's a more complete understanding?
AI only invents recent events
AI only makes errors about history
AI can hallucinate citations, facts, and confident-sounding guesses
AI never makes mistakes about real events
An AI provides population statistics for a country that are numerically precise but completely wrong. What kind of hallucination is this?
Recent-event hallucination
Confidence hallucination
Fact hallucination
Citation hallucination
Why does the lesson recommend asking AI for statistics and then checking them yourself?
To practice using AI with your eyes open and to catch hallucinations
To find more accurate sources
To make the AI work harder
To impress your teachers
You ask an AI about something that happened yesterday, and it gives you a detailed but fictional news story. What's happening?
The AI has access to tomorrow's news
This is a recent-event hallucination where AI fills knowledge gaps with plausible fiction
This isn't a hallucination, just a guess
The AI is testing you
What pattern does the lesson say exists in how AI hallucinates?
Random and unpredictable
Predictable patterns you can learn to recognize
Only happens with very specific technical terms
Only happens with controversial topics
Which of these would be most useful to verify immediately?
An AI admitting uncertainty
An AI giving a very specific statistic with no source mentioned
An AI repeating what you just said
An AI saying it doesn't know something
The lesson mentions a 'Try It!' activity involving statistics. What skill is this building?
Ignoring AI outputs
Writing better AI prompts
Verifying AI claims against real sources
Memorizing numbers
Your AI provides a quote from an author that you can't find anywhere online. What should you suspect?
This might be a citation hallucination
The AI is lying on purpose
The internet is incomplete
The author is very private
What does the lesson say about AI's relationship with the truth?
AI only lies about politics
AI lies randomly
AI always tells the truth
AI doesn't lie randomly; it hallucinates in predictable patterns
A friend says they never check AI answers because 'it sounds smart.' What habit from the lesson would help them?
Asking AI to repeat itself
Only checking controversial topics
Trusting confident answers
Verifying anything that feels too perfect
According to the lesson, what does AI sometimes fill knowledge gaps with?
Plausible fiction
Technical jargon
Warning labels
Honest uncertainty
Why does knowing the three types of hallucinations matter for a teen using AI?
To become an AI developer
To avoid using AI altogether
To avoid getting fooled when using AI for homework or life questions