Lesson 180 of 1570
Compute Thresholds: Regulating by FLOPs
Almost every AI regulation uses training compute as a trigger. 10^25 here, 10^26 there. Why compute, and why those numbers?
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. Why Compute
- 2. Compute threshold
- 3. FLOP
- 4. Frontier model
Section 1
Why Compute
Regulators want to catch frontier risk without freezing the whole field. They need a trigger that's measurable, hard to game, and correlated with capability. Training compute — floating-point operations used during model training — is the best proxy currently available.
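Training compute can be estimated before a run even starts. A common back-of-the-envelope rule is roughly 6 floating-point operations per parameter per training token. A minimal sketch, using that approximation with illustrative (not official) model sizes:

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens,
    the standard dense-transformer back-of-the-envelope estimate."""
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")  # 6.30e+24, just under the EU's 1e25 trigger
```

This is why regulators like the metric: unlike "capability," it reduces to arithmetic on two numbers a lab already knows.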
The main thresholds (as of ~2025)
Compare the options
| Regime | Threshold | Applies to |
|---|---|---|
| EU AI Act (systemic risk) | 10^25 FLOPs | General-purpose models |
| Biden EO 14110 (rescinded) | 10^26 FLOPs | Reporting to the US government |
| Biden EO (bio subset) | 10^23 FLOPs | Biologically-focused models |
| California SB 1047 (vetoed) | 10^26 FLOPs + $100M | Covered models |
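Because the triggers are plain numbers, checking a training run against them is trivial. A sketch using the FLOP thresholds from the table above (regime names abbreviated; the SB 1047 cost test is omitted since it depends on dollars, not FLOPs):

```python
# FLOP triggers from the table above (illustrative structure).
THRESHOLDS = {
    "EU AI Act (systemic risk)": 1e25,
    "Biden EO 14110 (rescinded)": 1e26,
    "Biden EO bio subset": 1e23,
}

def crossed(training_flops: float) -> list[str]:
    """Return the regimes whose FLOP trigger this run meets or exceeds."""
    return [name for name, limit in THRESHOLDS.items()
            if training_flops >= limit]

print(crossed(4e25))  # crosses the EU and bio triggers, not the 1e26 EO one
```

Note the bio threshold sits three orders of magnitude below the general one: a model far too small to be "frontier" overall can still trip a domain-specific trigger.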
Why this approach has critics
- Algorithmic efficiency delivers the same capability at lower compute over time, so fixed thresholds drift out of date
- Small specialized models can be dangerous without being compute-heavy
- Inference compute (test-time reasoning like o1) is not captured by training FLOPs
- Distillation can transfer capability from a big model to a small one
- Open-source releases create compute-independent diffusion
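The drift critique can be made concrete with arithmetic. Suppose, purely for illustration, that algorithmic progress halves the compute needed for a fixed capability every year (a made-up rate, chosen only to show the mechanism):

```python
# Illustrative only: assume the compute needed for a fixed capability
# halves every year. The halving rate is an assumption, not a measurement.
needed = [1e26 / 2**year for year in range(5)]
for year, flops in enumerate(needed):
    print(f"year {year}: {flops:.2e} FLOPs")
# By year 3 the same capability needs 1.25e25 FLOPs, near the EU trigger,
# and models below every threshold soon match today's covered models.
```

A static FLOP number therefore regulates a moving target: either regulators ratchet thresholds down over time, or the trigger quietly stops covering the capability it was written for.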
The big idea: compute is the only thing regulators can easily count. That makes it the default regulatory hook, for better and worse.
Related lessons
Keep going
Builders · 25 min
Logit Lens: Peeking at Predictions Mid-Forward-Pass
A transformer processes a token through many layers before outputting a prediction. The logit lens shows you what the model would predict if it stopped at each layer along the way.
Builders · 24 min
Federal Procurement and AI
The US government is the largest single buyer of software in the world. What it buys and what it refuses to buy shapes the whole industry. That includes AI.
Builders · 25 min
Japan's Soft-Law AI Framework
Japan chose light-touch, guideline-based AI governance built on existing laws. Understanding why illuminates a real alternative to comprehensive AI acts.
