Lesson 471 of 2116
Long-Context Strategies: When The Window Fills Up
Even with massive context windows, real Claude Code sessions fill up. The strategies for keeping context healthy are the difference between a 10-minute session and a 4-hour grind.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Why context still matters at 200k+
2. Context window
3. Compaction
4. Context rot
Section 1
Why context still matters at 200k+
Big context windows don't make context free. As the session grows, the model spends more attention on irrelevant earlier turns, output quality drops (context rot), and your token bill compounds. A disciplined session at 50k tokens beats a sloppy one at 200k almost every time.
The four daily moves
1. /compact roughly every 30-50k tokens to summarize and re-base context
2. /clear when the topic genuinely shifts; don't drag stale state into new work
3. Read narrowly: ask the model to read specific files, not whole directories
4. Delegate wide-search work to subagents so the parent context stays focused
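The first two moves above boil down to a simple habit: watch the token count and react. Here is a minimal sketch of that habit as a decision rule. The thresholds use the lesson's rough 30-50k number; `CLEAR_AT` and the function name are this sketch's inventions, not a Claude Code feature.

```python
# Sketch of the compact/clear habit as a decision rule.
# COMPACT_AT comes from the lesson's "every 30-50k tokens" guidance;
# CLEAR_AT is a hypothetical point where starting fresh beats compacting.

COMPACT_AT = 30_000
CLEAR_AT = 150_000

def suggest_move(tokens_used: int, topic_changed: bool) -> str:
    if topic_changed:
        return "/clear"      # don't drag stale state into new work
    if tokens_used >= CLEAR_AT:
        return "/clear"
    if tokens_used >= COMPACT_AT:
        return "/compact"    # summarize and re-base context
    return "keep going"

print(suggest_move(45_000, topic_changed=False))  # -> /compact
```

The point is not the exact numbers; it is that the decision is mechanical enough to make it a reflex rather than a judgment call every turn.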
Prompt caching changes the math
When prompt caching is on, the unchanging parts of your context — CLAUDE.md, recent files, system instructions — get cached on the provider side. Re-reading them in a long session costs much less. That makes longer sessions more affordable, but it does NOT solve context rot. Cheap rotted context is still rotted context.
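To see how much the math changes, here is a back-of-the-envelope cost comparison for a session that re-sends a large stable prefix (CLAUDE.md, system instructions, pinned files) every turn. The price and the cache-read discount are illustrative placeholders, not real provider pricing, and cache-write surcharges are ignored for simplicity.

```python
# Illustrative input-cost arithmetic: a large stable prefix re-sent every
# turn, with and without prompt caching. Rates are hypothetical.

def session_input_cost(turns, prefix_tokens, new_tokens_per_turn,
                       price_per_mtok, cached_read_discount=0.1):
    """Return (cost_without_cache, cost_with_cache) for the session."""
    full = (prefix_tokens + new_tokens_per_turn) * turns * price_per_mtok / 1e6
    # With caching: pay full price for the prefix once, then the
    # discounted read rate on every later turn (cache writes omitted).
    cached = (prefix_tokens * price_per_mtok / 1e6
              + prefix_tokens * cached_read_discount * price_per_mtok / 1e6 * (turns - 1)
              + new_tokens_per_turn * turns * price_per_mtok / 1e6)
    return full, cached

full, cached = session_input_cost(turns=40, prefix_tokens=50_000,
                                  new_tokens_per_turn=2_000, price_per_mtok=3.0)
print(f"no cache: ${full:.2f}  with cache: ${cached:.2f}")
```

Under these made-up rates the cached session is several times cheaper, which is exactly why caching tempts you into longer sessions. Notice what the formula does not model: output quality. Context rot degrades answers regardless of what the tokens cost.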
Compare the options
| Symptom | Cause | Move |
|---|---|---|
| Model contradicts earlier decisions | Context rot | /compact |
| Token bill spiking | Re-reading huge files | Read narrowly; rely on cache |
| Session feels confused | Mixed topics in one thread | /clear and start fresh |
| Slow responses on each turn | Context window full | /compact or /clear |
Apply: track your sessions for a week
1. Note when you start a session and when you /compact or /clear
2. Keep a one-line note per session: outcome, length, and whether context rot showed up
3. Identify your personal threshold; most people land between 30 and 60 minutes per fresh context
4. Adjust your habits: the 10x cost difference between disciplined and undisciplined sessions is real
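One low-friction way to keep the one-line-per-session note from the steps above is an append-only CSV. The filename and field names here are this sketch's choices, not a Claude Code convention.

```python
# Minimal append-only session log for the week-long tracking exercise.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("session_log.csv")  # hypothetical filename

def log_session(outcome: str, minutes: int, rot_observed: bool) -> None:
    """Append one row per session; write the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "outcome", "minutes", "rot_observed"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         outcome, minutes, rot_observed])

log_session("shipped fix", 42, rot_observed=False)
```

After a week, the `minutes` and `rot_observed` columns make your personal threshold easy to spot: find the shortest session length at which rot starts appearing, and compact or clear before you get there.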
The big idea: bigger windows do not free you from session hygiene. /compact, /clear, and narrow reads are the daily moves.
