Stale Training Data — When the AI Lives in 2023
Models freeze at their training cutoff. The libraries you use have not. Recognize the patterns of outdated code suggestions and the prompt habits that pull the model into the present.
Lesson map
1. The Time Capsule Problem
2. Training cutoff
3. Deprecation
4. Version drift
The Time Capsule Problem
Every model has a training cutoff. Claude Sonnet 4.7 cuts off in early 2026, GPT-5 around mid-2025, older models earlier still. After that date the model knows nothing new. Worse, before that date it absorbed years of Stack Overflow answers about now-deprecated APIs. Both effects bias it toward yesterday's code.
Real examples of stale defaults in 2026
Compare the options
| Library | What models still suggest | What is current |
|---|---|---|
| Next.js | `pages/api/route.ts` and `getServerSideProps` | App Router, server actions, `app/` dir (Next 13+, default since 14) |
| OpenAI Python SDK | `openai.ChatCompletion.create(...)` | `OpenAI().chat.completions.create(...)` since v1.0 (Nov 2023) |
| LangChain | `from langchain import OpenAI, LLMChain` | Modular split into `langchain-openai`, LCEL syntax with `|` |
| React | `useEffect` for everything | Server components, `use` hook, suspense for data |
| Tailwind | `tailwind.config.js` with hand-tuned theme | Tailwind v4 zero-config, CSS-first `@theme` |
Tells that the model is stuck in the past
- Class-based React components in greenfield code
- Webpack-specific imports like `require.context`
- Express 4 syntax when you are on Express 5 (no more `body-parser` install)
- Python `from __future__ import annotations` in 3.14 code, where deferred annotation evaluation is already the default
- Reaching for Lodash for things ES2023 has natively
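A cheap way to catch these tells is a regex pass over generated code before you even run it. A toy sketch in Python; the pattern list here is illustrative, not exhaustive, and you would extend it for your own stack:

```python
import re

# Hypothetical stale-code signatures mapped to a suggested fix.
STALE_PATTERNS = {
    r"openai\.ChatCompletion": "OpenAI SDK pre-v1 call; use client.chat.completions.create",
    r"getServerSideProps": "Next.js Pages Router API; use App Router server components",
    r"from langchain import": "Monolithic LangChain import; use the langchain-* split packages",
    r"require\.context": "Webpack-specific API; confirm it matches your bundler",
}

def stale_tells(source: str) -> list[str]:
    """Return one human-readable warning per stale pattern found in the source text."""
    return [msg for pat, msg in STALE_PATTERNS.items() if re.search(pat, source)]
```

Run it on any AI-generated diff before review; an empty list is not proof of freshness, but a non-empty one is a reliable signal to go check the docs.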
Pull the model into the present
Anchoring the model with explicit versions and constraints stops it from falling back to its training-era defaults.
```
# A prompt that resets the model's defaults:
"I'm using Next.js 16 with the App Router and server actions.
I'm on React 19 with the `use` hook available. No `getServerSideProps`,
no `pages/` directory. My package.json shows:
  "next": "^16.0.0",
  "react": "^19.0.0"
Given those constraints, write a server component that..."
```

Use tools that read your actual code
Cursor and Claude Code both index your repo, including your `package.json`, `requirements.txt`, and `Cargo.toml`. They can see what version you are on. The catch: they only check if you ask. Make a habit of starting refactor sessions with: "Read my package.json and tell me which major versions I'm using before suggesting code."
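That habit can also be automated: a small script that parses `package.json` and reports each dependency's pinned major version before you start prompting. A minimal sketch, with an assumed function name and deliberately simplified range handling (real semver ranges can be more complex than a stripped prefix):

```python
import json

def major_versions(package_json: str) -> dict[str, str]:
    """Map each dependency in a package.json text to the major version its range pins."""
    data = json.loads(package_json)
    deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    # Strip common range operators (^, ~, >=, <) and keep the leading major number.
    return {name: spec.lstrip("^~>=<").split(".")[0] for name, spec in deps.items()}
```

Paste the output at the top of a session and the model has no excuse to guess your stack.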
When to bring in a fresh source
1. Use Perplexity, web search, or `Read` on official docs for any API the model wrote
2. Ask the agent to fetch the docs URL itself if it has web tools
3. For breaking-change-prone libraries (Next.js, LangChain, OpenAI SDK), always check the changelog from the last six months
4. Treat any code touching authentication, payments, or AI providers as guilty until docs prove it innocent
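For Python dependencies, one way to spot a major-version gap is PyPI's public JSON API (`https://pypi.org/pypi/<name>/json`). A rough sketch; the helper names are illustrative and the comparison deliberately ignores pre-release tags:

```python
import json
from urllib.request import urlopen

def latest_on_pypi(name: str) -> str:
    """Ask PyPI's JSON API for the newest released version of a package."""
    with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        return json.load(resp)["info"]["version"]

def is_behind(installed: str, latest: str) -> bool:
    """True if the installed major version trails the latest major version."""
    return int(installed.split(".")[0]) < int(latest.split(".")[0])
```

If `is_behind(installed, latest_on_pypi("openai"))` comes back true, assume every example the model wrote for that library needs a docs check.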
Both compile. Only one runs on a 2026 machine. The model will write either, depending on what you primed it with.

```python
# Stale: from before OpenAI SDK v1
import openai

openai.api_key = "sk-..."
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "hi"}],
)
```

```python
# Current (2026): SDK v1+
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY env var
response = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "hi"}],
)
```

“The model is a brilliant friend who has been on a remote island since the cutoff. Catch it up before you ask for help.”
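A complementary guard is to fail fast at startup when the installed SDK is older than your examples assume, instead of discovering it at the first API call. A minimal sketch using only the standard library; the function names are illustrative:

```python
from importlib import metadata

def major_version(version: str) -> int:
    """Parse the leading major number from a version string like '1.54.3' or '1.0.0rc1'."""
    head = version.split(".", 1)[0]
    digits = "".join(ch for ch in head if ch.isdigit())
    return int(digits or 0)

def assert_min_major(dist: str, minimum: int) -> None:
    """Raise if the installed distribution's major version is below the minimum."""
    installed = metadata.version(dist)  # raises PackageNotFoundError if absent
    if major_version(installed) < minimum:
        raise RuntimeError(
            f"{dist} {installed} is older than v{minimum}; upgrade before running current examples"
        )
```

Calling something like `assert_min_major("openai", 1)` at import time turns a silent stale-code bug into a one-line, self-explaining error.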
The big idea: AI suggestions arrive frozen at the cutoff. Anchor every nontrivial coding session in your real versions, your real config, and current docs. The model is good at writing code. You are responsible for placing it in the right century.
Related lessons
- API Design Review With AI: Catching the Decisions You'll Regret in 18 Months. API decisions are hard to undo; AI can review API designs against established patterns, surface forward-compatibility risks, and identify the decisions that look fine now but will hurt in production.
- AI-Assisted GraphQL Schema Evolution. Use Claude to plan deprecations, breaking changes, and consumer migration in GraphQL.
- AI and API deprecation communications. Use LLMs to draft consistent deprecation notices for external API changes.
