DeepSeek R1 reasoning open-weights
R1 was the open-weights reasoning shock of early 2025. A year later it is still the default for anyone who needs o-series reasoning without paying o-series prices.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Why R1 matters
2. DeepSeek R1
3. reasoning model
4. open weights
Concept cluster
Terms to connect while reading
Section 1
Why R1 matters
DeepSeek R1 showed that an open-weights team could ship o1-class reasoning on a shoestring. The weights are downloadable, the quality is genuine, and the pricing on DeepSeek's own API is roughly 1/20th of OpenAI o-series.
- Thinks in visible chain-of-thought before answering
- Strong on math, code, and logic benchmarks
- Downloadable weights for self-hosted reasoning
- Distilled smaller versions (R1-Distill) run on consumer GPUs
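When you self-host the downloaded weights (full R1 or an R1-Distill variant), the visible chain-of-thought arrives inline in the generated text, wrapped in `<think>` tags, rather than in a separate API field. A minimal sketch of splitting the two — `split_reasoning` is a hypothetical helper name, not part of any library:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a self-hosted R1 completion into (chain_of_thought, final_answer).

    Assumes the model emits its reasoning between <think>...</think> tags,
    followed by the final answer."""
    m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    # No think tags found: treat the whole output as the final answer.
    return "", text.strip()

raw = "<think>2 + 2: both are units digits, sum is 4.</think>The answer is 4."
cot, answer = split_reasoning(raw)
# cot is the reasoning; answer is "The answer is 4."
```

Keeping the split in one place makes it easy to log or hide the reasoning without touching the answer path.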
Compare the options
| Option | DeepSeek R1 | OpenAI high-effort reasoning | GPT-5.5 |
|---|---|---|---|
| Cost per M output | Very low | High | High |
| Latency | Slow (thinks) | Slow to moderate | Moderate |
| Open weights | Yes | No | No |
| Quality | Near-frontier on selected reasoning tasks | Frontier | Frontier |
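To make the cost row concrete for your own workload, a back-of-the-envelope estimator helps. The per-million-token rates below are placeholders for illustration, not quoted prices — check the providers' current pricing pages:

```python
def monthly_cost(output_tokens_per_day: int, usd_per_million_output: float,
                 days: int = 30) -> float:
    """Estimate monthly spend from daily output-token volume and a per-M rate."""
    return output_tokens_per_day / 1e6 * usd_per_million_output * days

# Placeholder rates only -- substitute real prices before deciding.
r1_monthly = monthly_cost(5_000_000, 2.0)    # 5M output tokens/day at $2/M
gpt_monthly = monthly_cost(5_000_000, 40.0)  # same volume at $40/M
# At these placeholder rates the ratio is exactly 20x.
```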
The API returns the chain-of-thought and the final answer as separate fields. A minimal call — DeepSeek's API is OpenAI-compatible, so the standard `openai` client works with a swapped base URL:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="...")

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": hard_problem}],
)
msg = resp.choices[0].message
# msg.reasoning_content holds the chain of thought; msg.content holds the final answer
```

When to still pay for high-effort GPT
Frontier competition math, novel scientific reasoning, and any benchmark where the last 3 points of accuracy matter. For everyday hard problems, R1 is enough.
Related lessons
Keep going
Builders · 28 min
Llama 4 Scout vs. Maverick
Meta's Llama 4 family splits into Scout (lean) and Maverick (flagship). Here is how to choose between them for self-hosted work.
Builders · 26 min
DeepSeek V3.5 coding
DeepSeek V3.5 is the open-weights model that keeps punching above its weight class on coding benchmarks at a fraction of the cost.
Builders · 26 min
Qwen 3 Max — Chinese-English multilingual
Alibaba's Qwen 3 Max is the leading open-weights model for high-quality Chinese work and does English surprisingly well.
