DeepSeek
Updated May 2026
The Chinese lab that shocked Silicon Valley
DeepSeek is a Hangzhou-based lab funded by the quant hedge fund High-Flyer. In January 2025 they released R1 — an open-weight reasoning model that matched OpenAI's o1 at a fraction of the training cost — and single-handedly triggered a trillion-dollar market selloff. They publish papers, release weights, and have become proof that frontier AI is no longer a US-only game.
Variants: 4
Best at: truly cheap API pricing
Max context: 128K tokens
Pricing:
- Free: $0 via chat.deepseek.com
- V3.2 API: ~$0.27 in / $1.10 out per million tokens
- Self-host: $0 (weights on Hugging Face)
Variants
Sort the table by context window or cost to find the right variant. Click any version below for a battle card with ranks, pricing notes, and official links.
| Variant | Context | Price (in / out, per M tokens) | Released | Modalities |
|---|---|---|---|---|
| DeepSeek V3.2 (`deepseek-v3-2`) | 128K | $0.27 / $1.10 | 2025 | text, code |
| DeepSeek V3.2-Speciale (`deepseek-v3-2-speciale`) | 128K | varies | 2025 | text, code |
| DeepSeek R1 (`deepseek-r1`) | 128K | varies | 2025 | text |
| DeepSeek V3 (`deepseek-v3`) | 128K | $0.27 / $1.10 | 2024 | text |
Battle card
- Context rank: #1 within DeepSeek
- Capability rank: #1 (modalities + reasoning)
- Weights: open, self-hostable under license
Best fights to pick
- self-hosted coding agents
- multilingual reasoning
- long-document analysis
Rankings are Tendril directory ranks, computed from the model data shown here. Public benchmark leaderboards change often, so check the official docs and current benchmark pages before buying or deploying.
Learn
Lessons about this model
Structured lessons that cover DeepSeek directly or put it in context alongside its rivals.
Check yourself
Quizzes
Short, mixed-difficulty quiz sets on DeepSeek and its model family.
Open-Weight Families: Llama, Mistral, Qwen, DeepSeek, Gemma
7 questions
The open ecosystem that shook the industry.
Start quiz →
Hands-on
Try these prompts
Ready-made prompts that show DeepSeek at its best. Use them in your own AI workspace, then compare the output with what you learned in Tendril.
DeepSeek R1 on a reasoning puzzle
Creators · R1 was the surprise open-weights reasoning model — ask it to show its work.
A palindromic number reads the same forwards and backwards. What's the smallest palindromic number that is the sum of two different 3-digit palindromes? Show your reasoning.
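If you want to verify whatever answer the model gives, the puzzle is small enough to brute-force in a few lines of Python (this checker is ours, not part of the prompt):

```python
def is_pal(n: int) -> bool:
    """True if n reads the same forwards and backwards."""
    s = str(n)
    return s == s[::-1]

# All 3-digit palindromes: 101, 111, 121, ..., 999
pals = [n for n in range(100, 1000) if is_pal(n)]

# Smallest palindromic sum of two *different* 3-digit palindromes
best = min(
    a + b
    for i, a in enumerate(pals)
    for b in pals[i + 1:]
    if is_pal(a + b)
)
print(best)  # → 212 (101 + 111)
```

The two smallest 3-digit palindromes, 101 and 111, already sum to a palindrome, so no smaller answer is possible.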
DeepSeek V3.2 for coding on a budget
Builders · V3.2 is cheap per token and strong on code — compare against Claude Sonnet on a refactor.
Refactor this Python function for readability. Don't change behavior.
def p(d):
    r = []
    for x in d:
        if x.get('a') and x['a'] > 0:
            r.append({'n': x['n'], 'v': x['a']*2})
    return sorted(r, key=lambda z: -z['v'])

Cost math: DeepSeek vs. GPT-5.5
Creators · Ask DeepSeek about its own pricing and do the math on a realistic monthly workload.
I send about 5 million input tokens and 1 million output tokens per month through a coding assistant. Compute the monthly cost for DeepSeek V3.2, GPT-5.5, and Claude Sonnet 4.6 using April 2026 prices. Show the math.
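For the DeepSeek leg of that math, the V3.2 prices listed on this page ($0.27 in / $1.10 out per million tokens) make the arithmetic easy to check yourself. The rival price in this sketch is a placeholder, not a real quote — substitute current rates for whichever models you compare:

```python
def monthly_cost(m_in: float, m_out: float, price_in: float, price_out: float) -> float:
    """Dollar cost for m_in / m_out millions of tokens at per-million-token prices."""
    return m_in * price_in + m_out * price_out

# DeepSeek V3.2 prices from this page; 5M input + 1M output tokens per month.
deepseek = monthly_cost(5, 1, 0.27, 1.10)
print(f"DeepSeek V3.2: ${deepseek:.2f}/month")  # → $2.45/month

# Hypothetical rival at $3 in / $15 out per million tokens -- replace with real rates.
rival = monthly_cost(5, 1, 3.00, 15.00)
print(f"Rival: ${rival:.2f}/month")
```

At this workload the gap is dominated by input-token price, since input tokens outnumber output tokens five to one.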
Print & keep
Printable reference
One-page summaries and flowcharts — great for desks, classrooms, or study sessions.
Go deeper
Official resources
Straight from the lab — docs, API references, and the chat surfaces you can try today.
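DeepSeek's API is OpenAI-compatible, so a request is an ordinary chat-completions payload sent to their endpoint. The model names below (`deepseek-chat`, `deepseek-reasoner`) reflect DeepSeek's docs at the time of writing — verify them against the official API reference before relying on them:

```python
import json

def build_request(prompt: str, model: str = "deepseek-chat") -> dict:
    """Build the JSON body for POST https://api.deepseek.com/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

# deepseek-reasoner routes to the reasoning model (R1 lineage).
body = build_request("Show your reasoning step by step.", model="deepseek-reasoner")
print(json.dumps(body, indent=2))
```

Because the shape matches OpenAI's chat-completions format, existing OpenAI client libraries generally work by pointing their base URL at DeepSeek's endpoint.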
Strengths
- truly cheap API pricing
- open weights at frontier quality
- reasoning innovation outside US labs
Limits
- PRC censorship on political topics
- data-residency concerns for US enterprises
- less English-language polish
