Sometimes AI sounds sure but gets the facts wrong. Here is how to notice.
AI sometimes makes up facts that sound real — check important things with a grown-up or a book.
Ask AI a question you already know the answer to. See if it gets it right!
AI chatbots are very good at sounding confident. They use words in a smooth, sure-sounding way even when the information they give you is completely wrong. This is called a hallucination: AI invents facts that sound real because they fit the pattern of the words around them, not because they are true.

AI has made up book titles, named scientists who do not exist, given wrong dates for real events, and described places completely backward. The tricky part is that AI does not know when it is hallucinating. It cannot tell the difference between something real and something invented.

That is why you are the fact-checker. For any important fact, the rule is: check it somewhere else. Use an encyclopedia, a trusted website, or ask an adult. Good sources include National Geographic Kids, Britannica, or your school library's databases. And if AI gives you a surprising fact, something you did not know before, that is exactly the kind of thing to check. Surprising facts from AI are often wrong.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-explorers-ethics-safety-AI-and-spotting-when-AI-makes-stuff-up-r11a7
What does it mean when an AI "hallucinates"?
Which term describes an AI inventing facts that sound real but are not true?
Why can't an AI chatbot tell you when it is making something up?
Which of these is a good place to double-check a fact an AI gave you?
Why do AI hallucinations sound so believable?
What should you do when an AI tells you a surprising fact you did not know before?
Which of these does NOT belong in a list of good fact-checking sources?
According to "The rule," what should you do with any important fact an AI gives you?
What is the key warning behind "Confident does not mean correct"?
Which statement accurately describes the kinds of things AI has made up?
What is your job as the fact-checker when you use an AI chatbot?
Which of the following is true about how AI chooses its words?
Which kinds of facts has AI been known to get wrong or invent?
Which section heading best fits the part of the lesson about checking facts somewhere else?
Which section heading best fits the part of the lesson about AI sounding sure even when it is wrong?