Lesson 2103 of 2116
AI Pricing Models: Per-Token, Cached, Batch, and Reserved Capacity
Understand the AI pricing landscape across input, output, cached, batch, and reserved tiers.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The premise
2. Pricing
3. Prompt caching
4. Reserved capacity
Section 1
The premise
AI provider pricing now spans per-token, cached-token, batch, and reserved-capacity tiers — each with distinct fit for different workload patterns.
What AI does well here
- Per-token: low-volume, sporadic workloads
- Cached tokens: repeated long contexts at much lower cost
- Batch APIs: high-volume async work at deep discounts
- Reserved: predictable steady-state high volume
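The tier trade-offs above can be sketched as a simple cost model. The per-million-token prices below are illustrative placeholders, not any vendor's actual rates, and `monthly_cost` is a hypothetical helper, but the structure mirrors how the tiers differ: cached pricing discounts repeated input tokens, while batch pricing discounts both sides in exchange for async turnaround.

```python
# Illustrative cost model for comparing pricing tiers.
# All prices are hypothetical placeholders (USD per million tokens),
# not any provider's actual rates.
PRICES = {
    "per_token": {"input": 3.00, "output": 15.00},
    "cached":    {"input": 0.30, "output": 15.00},  # cached input reads are heavily discounted
    "batch":     {"input": 1.50, "output": 7.50},   # async batch discount on both sides
}

def monthly_cost(tier, input_tokens, output_tokens, cached_fraction=0.0):
    """Estimate monthly spend in USD for a given pricing tier.

    cached_fraction applies only to the "cached" tier: the share of
    input tokens served from the prompt cache at the discounted rate.
    """
    p = PRICES[tier]
    if tier == "cached":
        full_rate = PRICES["per_token"]["input"]
        input_cost = (input_tokens * (1 - cached_fraction) * full_rate
                      + input_tokens * cached_fraction * p["input"]) / 1e6
    else:
        input_cost = input_tokens * p["input"] / 1e6
    return input_cost + output_tokens * p["output"] / 1e6

# A workload with a large repeated system prompt: 50M input, 5M output per month.
for tier, kwargs in [("per_token", {}),
                     ("cached", {"cached_fraction": 0.8}),
                     ("batch", {})]:
    print(f"{tier:10s} ${monthly_cost(tier, 50e6, 5e6, **kwargs):,.2f}")
```

Running this for a workload where 80% of input tokens hit the cache shows why the tier choice depends on workload shape: caching only pays off when long contexts actually repeat, and batch only wins when latency doesn't matter.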
What AI cannot do
- Optimize pricing tier choice without workload data
- Predict its own input and output token usage precisely
Related lessons
Creators · 40 min
Prompt Caching Comparison: Anthropic, OpenAI, Gemini
How prompt caching works across vendors and where it pays off.
Creators · 10 min
Frontier Cost Optimization: Caching, Compression, and Fallback
Frontier model bills can dwarf engineering payroll for high-volume products. Caching, prompt compression, and model fallback are the three big levers.
Creators · 11 min
AI context cache pricing across model families
Compare context caching pricing on Claude, Gemini, and others.
