AI On-Device: Phi, Gemma, and When Tiny Models Make Sense
4B-parameter models run on your laptop and phone. They're not GPT-5 — but they're surprisingly useful.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The premise
2. On-device
3. Phi
4. Gemma
Section 1
The premise
Small models running locally trade peak quality for privacy, offline capability, and zero per-call cost.
What AI does well here
- Privacy-sensitive text processing
- Offline summarization and classification
- Local autocomplete and quick assistants
- Edge devices and mobile apps
What AI cannot do
- Match frontier models on hard reasoning
- Handle very long contexts comfortably
- Replace cloud models for ambiguous, complex prompts
- Stay current — they don't auto-update
Related lessons
Keep going
Creators · 11 min
Small Language Models on Device: Phi, Gemma, Llama 3.2 in Production
When a 3B-7B model on-device wins over an API call to a frontier model.
Creators · 40 min
Local Model Family: Gemma
Gemma is Google DeepMind's open-model family, useful for local and single-accelerator experiments when you want polished small models.
Creators · 11 min
AI On-Device Models: Phi, Gemma, and the Edge Tradeoff
What current on-device AI models can do — and where edge inference falls short.
