When AI mentions a study, book, or article, your job is to verify that the source actually exists, not just to trust AI's summary of it.
AI sometimes fabricates citations: the book doesn't exist, the author is invented, or the journal name is plausible but wrong. This has caused real problems, including lawyers being sanctioned for citing AI-invented court cases.
The defense is simple but non-negotiable: when AI mentions a source, click through and verify it before you cite it.
The big idea: AI gives you leads. You verify the leads. Cite only what you've actually read.
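One verification step from the quiz, the DOI check, can even be partly automated. A DOI that resolves in a registry confirms that a record exists, but it does not prove the paper says what AI claims: you still compare the title and authors against AI's citation, then read the work itself. Below is a minimal sketch of that lookup, assuming Python's standard library; the Crossref endpoint (api.crossref.org/works/<doi>) is a real public API, but the helper names here are illustrative, not part of the lesson.

```python
# Sketch: check whether a DOI resolves to registered metadata via Crossref.
# A 404 means the DOI was never registered -> treat the citation as unverified.
# A match still isn't enough: compare title/authors, then read the source.
import json
import urllib.error
import urllib.parse
import urllib.request

def normalize_doi(raw: str) -> str:
    """Strip common prefixes so only the bare '10.xxxx/...' DOI remains."""
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if raw.lower().startswith(prefix):
            return raw[len(prefix):]
    return raw

def lookup_doi(raw_doi: str):
    """Return {'title', 'authors'} from Crossref, or None if unregistered."""
    doi = normalize_doi(raw_doi)
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="/")
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            message = json.load(resp)["message"]
    except urllib.error.HTTPError:
        return None  # DOI does not resolve: do not cite without a real copy
    return {
        "title": (message.get("title") or [""])[0],
        "authors": [a.get("family", "") for a in message.get("author", [])],
    }
```

Even when `lookup_doi` returns a record, a title or author list that doesn't match what AI told you is itself a red flag, which is exactly the "resolves, but is it the right paper?" situation the quiz asks about.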
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-research-ai-sources
What does it mean when an AI 'hallucinates' a citation?
Why is verifying a source's existence before citing it considered 'non-negotiable'?
A student finds the exact title AI mentioned in Google Scholar. What is the next critical step?
Which of these is a sign that a citation might be fabricated?
If you cannot find a source after searching its title in Google Scholar and the author's name on university websites, what should you do?
What is the proper first action when AI mentions a source you want to use?
Why might a fabricated citation include a real author's name?
A DOI resolves to a webpage. Does this guarantee the source is legitimate?
What is the 'big idea' of this lesson in one sentence?
Why is it insufficient to only verify that a source exists?
Which verification step comes LAST in the recommended flow?
What type of journal name should raise suspicion during verification?
A lawyer was sanctioned for citing AI-invented court cases. What does this demonstrate?
What information should you read before quoting AI's interpretation of a source?
If you find a source in Google Scholar that matches the title AI provided, what should you check to ensure it's the right paper?