Hallucinated Imports — When the AI Invents a Library
AI models confidently call libraries that do not exist. Learn the patterns of hallucinated imports, the verification habits that catch them, and the supply-chain attack this opens up.
Lesson map

The main moves, in order:

1. The Library That Was Never Born
2. Hallucination
3. Package verification
4. Slopsquatting
Section 1
The Library That Was Never Born
Ask Claude or GPT for a function that fetches and parses RSS feeds in Python and you might get an import for `feedfetcher` or `pyfeed-parser`. They sound plausible. They do not exist. The model interpolated between real package names it has seen and produced a hybrid that compiles in its head but fails at `pip install`.
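Here is the failure in miniature. The hallucinated names are the ones above; the replacement uses the real `feedparser` package, and the feed URL is a stand-in:

```python
# What the model might write -- plausible-looking, unshippable:
#
#   import feedfetcher             # not on PyPI
#   feed = feedfetcher.fetch(url)  # invented API
#
# What actually exists (pip install feedparser):
import feedparser

feed = feedparser.parse("https://example.com/rss")  # stand-in URL
for entry in feed.entries[:3]:
    print(entry.title, entry.link)
```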
Why this happens
- Models predict the next token, not real-world existence
- Training data contains millions of import lines from millions of repos — the model knows the shape, not the registry
- Library versions drift: `langchain.vectorstores` was real in 2023, gone by 2024
- Common verbs like `parse`, `fetcher`, `client` recombine into convincing fakes
The five patterns of hallucinated imports
Compare the options
| Pattern | Example | Reality |
|---|---|---|
| Plausible compound | `import jwt_decoder` | Real package is `PyJWT`, you `import jwt` |
| Wrong submodule | `from pandas.io.json import read_json` | Moved to `pandas.read_json` years ago |
| Renamed package | `import sklearn.cross_validation` | Renamed to `sklearn.model_selection` in 2018 |
| Invented method | `requests.get_json(url)` | Real call is `requests.get(url).json()` |
| Stale API version | `openai.ChatCompletion.create(...)` | Removed in OpenAI Python SDK v1.0; replaced by `client.chat.completions.create(...)` |
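The first row is worth seeing end to end, because the install name and the import name disagree. A minimal sketch, assuming PyJWT 2.x:

```python
# pip install PyJWT      <- the install name
import jwt               # <- the import name; `pip install jwt_decoder` fails

token = jwt.encode({"user": "alice"}, "secret", algorithm="HS256")
print(jwt.decode(token, "secret", algorithms=["HS256"]))  # {'user': 'alice'}
```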
The 30-second verification habit

1. Before installing, search for the package on pypi.org or npmjs.com (the sketch after this list scripts the check).
2. Check the publish date, weekly downloads, and maintainer.
3. If a package has 50 downloads and was published last week, do not trust it: attackers pre-register plausible hallucinated names, the supply-chain attack known as slopsquatting.
4. Cross-check the install name against the import name; they often differ.
5. If the AI cited a method, search for that exact method in the official docs.
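A minimal sketch of step 1, using PyPI's public JSON API, which returns 404 for unregistered names. The filename and output format are illustrative; assumes the `requests` package:

```python
# check_pkg.py -- does this package actually exist on PyPI?
import sys

import requests

def check(name: str) -> None:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"{name}: not on PyPI -- likely hallucinated")
    else:
        info = resp.json()["info"]
        print(f"{name}: exists, latest version {info['version']}")

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        check(pkg)
```

Running `python check_pkg.py feedfetcher feedparser` flags the first name and confirms the second.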
The fast smoke test
A throwaway venv is the cheapest hallucination detector that exists.
```bash
# Before trusting AI-generated imports, run them in isolation
mkdir /tmp/smoke && cd /tmp/smoke
python -m venv .venv && source .venv/bin/activate

# Try to install only the imports the AI suggested
pip install feedfetcher pyfeed-parser
# ERROR: Could not find a version that satisfies the requirement feedfetcher
# Now you know — and you found out in 10 seconds, not in a PR review.
```

What good agents do
Claude Code, Cursor Agent, and Codex CLI can all run `pip install` and report back. Use that. If the agent ran the install and tests, the import is real. If it just wrote code, the import is unverified. This is the fastest reliability lift you can get from an agent loop.
A 20-second prompt edit eliminates an entire class of bug.

Bad prompt: "Write code to parse RSS feeds."

Better prompt: "Write code to parse RSS feeds. Use feedparser, version 6.x. Run pip install and a smoke test before showing me the code."
The better prompt forces ground-truth checks that the bad one skips.
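Concretely, the smoke test the better prompt asks for can be this small. A minimal sketch, assuming feedparser 6.x; the sample feed is inlined so the test is deterministic:

```python
# smoke_test.py -- prove the import is real and the happy path runs
import feedparser  # pip install "feedparser>=6,<7"

SAMPLE = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>Demo</title>
<item><title>First post</title></item>
</channel></rss>"""

feed = feedparser.parse(SAMPLE)  # feedparser accepts a URL or a raw string
assert feed.feed.title == "Demo"
assert feed.entries[0].title == "First post"
print("smoke test passed")
```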
When you do find a hallucinated import

1. Do not let the AI "fix" it by guessing another package name; that carries the same risk.
2. Tell it the install failed and ask: "What real library provides this functionality?"
3. Cross-check the answer against that library's official docs.
4. Pin the version in your lockfile so the same drift cannot recur (see the sketch after this list).
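One way to make step 4 bite: assert at startup, or in CI, that installed versions match the pins. A minimal sketch using the standard library; the pinned version shown is hypothetical, copy yours from the lockfile:

```python
# verify_pins.py -- fail fast when the environment drifts from the lockfile
from importlib.metadata import version

PINS = {"feedparser": "6.0.11"}  # hypothetical pin; copy from your lockfile

for pkg, want in PINS.items():
    got = version(pkg)
    assert got == want, f"{pkg}: installed {got}, pinned {want}"
print("all pins match")
```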
“The model has seen a million imports and remembered ninety percent of them perfectly.”
The big idea: imports are the surface where the model's confidence meets the registry's reality. Verify package names, pin versions, and let agents run installs. The thirty seconds you spend confirming an import is the cheapest debugging you will ever do.