AI has a memory limit called the context window. Hitting it explains a LOT of weird behavior.
The big idea: AI has a memory limit. Knowing where it is keeps you from being surprised when it forgets.
AI models can only 'see' a fixed number of tokens at once — that's the context window. When chats get long, the AI literally forgets the top.
Open your longest ChatGPT thread and ask 'what was the very first thing I asked you in this chat?' — see if it gets it right.
AI has a limited context window — commonly anywhere from 100K tokens up to 1M in the newest models. Once you hit it, the model starts forgetting the early part of your chat. Knowing this changes how you structure long projects.
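To make those numbers concrete, here's a back-of-the-envelope estimate of how much text fits in a window. It uses the common rule of thumb of roughly 4 characters per token; real tokenizers vary by model and language, so treat this as a rough sketch, not an exact count.

```python
# Rough estimate of how much text fits in a context window.
# Assumes ~4 characters per token (a common rule of thumb —
# actual tokenizers differ by model and language).

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return max(1, len(text) // 4)

def pages_that_fit(window_tokens: int, chars_per_page: int = 3000) -> int:
    """How many typical book pages (~3000 characters) fit in a window."""
    tokens_per_page = chars_per_page // 4
    return window_tokens // tokens_per_page

print(pages_that_fit(100_000))    # a 100K window holds roughly 130 pages
print(pages_that_fit(1_000_000))  # a 1M window holds over 1,300 pages
```

So a 100K-token model comfortably holds a short book, while a 1M-token model can hold several novels — which is why picking the right model matters for big projects.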
In a long chat, ask AI to summarize your conversation so far. Then ask 'what was my first question' and check accuracy.
The context window is how much the AI can 'remember' in one conversation. In 2026 the spread between models is huge, and choosing wrong wastes hours.
Pick your biggest text source (a long PDF). Try summarizing it in ChatGPT first. If it errors on length, retry in Claude. Note the difference.
Every LLM has a 'context window' — a maximum number of tokens (roughly: word-pieces) it can hold in its short-term memory at once. When the conversation runs longer than that, the oldest messages drop off and the AI 'forgets' them. Modern models range from 8K tokens (very small) to 1M+ tokens (Gemini, Claude Sonnet 4.5). Knowing your model's window prevents the classic bug: 'wait, I told you that an hour ago, why don't you remember?' For long projects, use a model with a big window or save context in a 'system message' or Project.
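The "oldest messages drop off" behavior can be sketched in a few lines. This is a simplified illustration, not any real chat app's code: it counts tokens as whole words (real systems use the model's tokenizer), and drops messages from the front until the conversation fits a token budget.

```python
# Minimal sketch of context-window trimming: when a conversation
# exceeds the token budget, the oldest messages are dropped first —
# which is why the AI 'forgets' the top of a long chat.
# Token counting here is a plain word count, for illustration only.

def count_tokens(message: str) -> int:
    """Stand-in tokenizer: one token per whitespace-separated word."""
    return len(message.split())

def trim_to_window(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the rest fits in the budget."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > budget:
        kept.pop(0)  # forget the earliest message
    return kept

history = [
    "Hi, my name is Sam and I'm building a birdhouse.",
    "What wood should I use?",
    "How do I cut the roof panels?",
    "What was my name again?",
]
# With a tiny budget, the opening message (with Sam's name) is gone.
print(trim_to_window(history, budget=15))
```

Notice that after trimming, the message containing Sam's name is no longer in the window at all — the model isn't being lazy, it simply never sees it. That's why re-stating key facts, or pinning them in a system message or Project, works.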
Try this in ChatGPT: paste a long article (5,000+ words), ask 5 questions about it, then ask 'what was the second paragraph about?' If it stumbles, you just hit the context-window edge.
Every AI model has a 'context window' — the amount of text it can hold in attention at once. When you go over, the model literally forgets the start of your conversation or document. Knowing the limit, and how to manage what fills it, separates power users from frustrated ones.
Look up the context window of the AI tool you use most. Plan your next big project around it.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-foundations-AI-and-context-window-explained-teen
What is the core idea behind "Why AI 'Forgets' Halfway Through a Long Chat"?
Which term best describes a foundational idea in "Why AI 'Forgets' Halfway Through a Long Chat"?
A learner studying Why AI 'Forgets' Halfway Through a Long Chat would need to understand which concept?
Which of these is directly relevant to Why AI 'Forgets' Halfway Through a Long Chat?
Which of the following is a key point about Why AI 'Forgets' Halfway Through a Long Chat?
What happens to the earliest messages once a conversation exceeds the context window?
Which strategy helps preserve important context across a long project?
Roughly what range of context-window sizes do modern models span?
Which statement accurately describes an aspect of Why AI 'Forgets' Halfway Through a Long Chat?
What does working with Why AI 'Forgets' Halfway Through a Long Chat typically involve?
Which best describes the scope of "Why AI 'Forgets' Halfway Through a Long Chat"?
Which section heading best belongs in a lesson about Why AI 'Forgets' Halfway Through a Long Chat?
Which of the following is a concept covered in Why AI 'Forgets' Halfway Through a Long Chat?
Which practice helps you manage a limited context window?
Which symptom suggests a conversation has exceeded the context window?