Understand DeepSeek and why China's AI models surprised the world.
DeepSeek is a Chinese AI lab that released models rivaling OpenAI's at a fraction of the cost, and it shocked the industry. Their R1 reasoning model is open-weight and very good at math and code.
Try DeepSeek's free chat. Give it a hard math problem and watch the reasoning trace. Notice how it 'thinks out loud' before answering.
In early 2025, China's DeepSeek released R1 — a reasoning model that performed nearly as well as OpenAI's o1 but cost a small fraction to train and run, AND the weights were open. Stocks dropped, Twitter melted down, and every lab had to rethink pricing. R1 proved frontier capability isn't only for $100B labs.
Try DeepSeek R1 (free) at chat.deepseek.com. Ask it a hard reasoning question. Compare to ChatGPT.
China has a thriving AI scene. DeepSeek shocked the world in 2025 with cheap-to-train reasoning. Qwen leads many open-weight benchmarks. Kimi has long-context expertise. GLM-4 is a strong all-rounder. They're often free to try and have generous API limits.
Try DeepSeek's chat (chat.deepseek.com) or Qwen's chat (chat.qwen.ai) for free. Compare to ChatGPT.
Price per token matters when you make many calls.
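To see why per-token pricing compounds, here is a minimal sketch of the arithmetic. The prices and call volumes below are made-up placeholders for illustration, not real rates from any provider.

```python
# Rough cost comparison: why price-per-token compounds over many calls.
# All prices here are ILLUSTRATIVE placeholders, not current rates.

def monthly_cost(calls, in_tokens, out_tokens, price_in, price_out):
    """Total dollars for `calls` API calls.

    price_in / price_out are dollars per 1M tokens.
    """
    per_call = (in_tokens * price_in + out_tokens * price_out) / 1_000_000
    return calls * per_call

# Hypothetical workload: 50,000 calls/month, 1,500 input + 500 output tokens each.
expensive = monthly_cost(50_000, 1_500, 500, price_in=15.0, price_out=60.0)
cheap = monthly_cost(50_000, 1_500, 500, price_in=0.5, price_out=2.0)

print(f"pricier model: ${expensive:,.2f}/mo")   # $2,625.00/mo
print(f"cheaper model: ${cheap:,.2f}/mo")       # $87.50/mo, 30x less
```

A single chat message makes the price difference look like pennies; at tens of thousands of calls, it is the difference between a rounding error and a real budget line.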
Open your favorite AI tool and try one of the examples above. Pick the one that matches what you are actually working on this week. Spend 10 minutes, no more. Notice what worked and what did not — that's the real lesson.
DeepSeek-V3 and R1 are open-weight Chinese models that kicked off an AI pricing war.
Try DeepSeek on a reasoning problem you've also tried on Claude. Compare.
Understanding "DeepSeek: shockingly cheap, surprisingly good" in practice: DeepSeek's models cost a fraction of OpenAI's and Anthropic's and rival them on many tasks, and knowing how to apply this gives you a concrete advantage in how you work and think.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-model-families-AI-and-deepseek-teen
What is DeepSeek?
What surprised the AI industry about DeepSeek's models?
What is DeepSeek R1 specialized in?
What are "thinking" tokens that DeepSeek displays?
Why is the lesson's note about "AI leadership isn't just American" important?
What was the "hard math problem" in the lesson used to demonstrate?
What makes DeepSeek's cost achievement significant?
What does the lesson suggest users do with DeepSeek's free chat?
Why was the release of DeepSeek described as a "shock" to the industry?
What does it mean that DeepSeek R1 is "open"?
What comparison did the lesson suggest making between DeepSeek and ChatGPT?
What was the main takeaway from trying DeepSeek's free chat?
What does the lesson say about the cost of building AI models?
Why might seeing DeepSeek's reasoning trace be useful?
What did the lesson mean by the "global AI race"?