Small Language Models on Device: Phi, Gemma, Llama 3.2 in Production
When a 3B-7B model on-device wins over an API call to a frontier model.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The premise
- 2. SLM
- 3. On-device
- 4. Phi
Concept cluster
Terms to connect while reading
Section 1
The premise
Small models run free, fast, and offline — but they're only enough for narrow, well-scoped tasks.
What AI does well here
- Run private text classification offline on user devices
- Provide instant autocomplete with no network round-trip
- Cut cost to zero for high-volume, low-stakes tasks
- Comply with strict data-residency requirements
What AI cannot do
- Compete with frontier models on open-ended reasoning
- Handle long context — most are capped at 8-32K tokens
- Stay current — they don't learn from new data without re-training
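The trade-offs above can be sketched as a simple routing decision: serve narrow, well-scoped requests on-device, and fall back to a frontier-model API for everything else. This is a minimal illustrative sketch; the task names, token threshold, and function signature are assumptions for demonstration, not part of the lesson.

```python
# Hypothetical routing sketch (names and thresholds are illustrative):
# decide whether a request fits an on-device SLM or needs a frontier API.

ON_DEVICE_TASKS = {"classification", "autocomplete", "tagging"}
SLM_CONTEXT_LIMIT = 8_192  # many small models cap out around 8-32K tokens

def route(task: str, prompt_tokens: int, needs_open_ended_reasoning: bool) -> str:
    """Return 'on-device' when the request fits an SLM's narrow scope,
    otherwise 'api' for a frontier model."""
    if needs_open_ended_reasoning:
        return "api"          # SLMs lose to frontier models on open-ended reasoning
    if prompt_tokens > SLM_CONTEXT_LIMIT:
        return "api"          # context too long for the small model
    if task in ON_DEVICE_TASKS:
        return "on-device"    # narrow, well-scoped: free, fast, offline, private
    return "api"
```

In production this kind of router is usually where the cost and privacy wins happen: the high-volume, low-stakes traffic never leaves the device, and only the hard cases pay for an API call.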
Related lessons
Keep going
Creators · 11 min
AI On-Device: Phi, Gemma, and When Tiny Models Make Sense
4B-parameter models run on your laptop and phone. They're not GPT-5 — but they're surprisingly useful.
Creators · 17 min
Local Model Family: Microsoft Phi
Phi models show why small language models matter: they are designed for efficient local and edge scenarios, not for winning every frontier benchmark.
Creators · 40 min
Local Model Family: Gemma
Gemma is Google DeepMind's open-model family, useful for local and single-accelerator experiments when students want polished small models.
