The EU AI Act: The Global Floor, Whether You Like It or Not
The EU AI Act is the most sweeping AI law in the world. It will set the compliance floor for anyone who ships globally. Here is the architecture, the timeline, and what it gets right and wrong.
Lesson map
1. Why This Law Matters Outside Europe
2. The EU AI Act
3. Risk tiers
4. GPAI
Why This Law Matters Outside Europe
The EU has ~450 million consumers and a regulatory track record of projecting rules beyond its borders. GDPR took effect in May 2018 and became the de facto global privacy template: companies rarely build two separate data stacks, so the EU version wins by default. The same Brussels Effect is now unfolding for AI.
Whatever you think of the AI Act on the merits, if you deploy AI to EU users, you will comply. And because two product SKUs are expensive, you will probably comply everywhere.
The risk tier architecture
1. Unacceptable risk: banned outright. Social scoring by governments, manipulative subliminal techniques, untargeted facial scraping, emotion recognition at work and school, and most real-time biometric identification in public spaces.
2. High risk: heavily regulated. AI in hiring, credit, law enforcement, migration, education, critical infrastructure, and medical devices. Providers must implement risk management, logging, human oversight, and conformity assessment.
3. Limited risk: transparency obligations. Chatbots must tell you they are AI, deepfakes must be labeled, and AI-generated content must be machine-detectable.
4. Minimal risk: no obligations. Most AI (spam filters, recommender tweaks) falls here.
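The tiered structure is, in effect, a decision rule: check the prohibited list first, then the high-risk list, then the transparency triggers, and default to minimal. The sketch below is purely illustrative; the set contents and tier descriptions are simplified assumptions, not the Act's legal tests (real classification turns on Annex III and the prohibited-practices article, and requires legal analysis).

```python
# Toy triage of the AI Act's four risk tiers (illustrative only).
# Set contents are simplified examples drawn from the list above.
BANNED = {"social scoring", "subliminal manipulation", "untargeted face scraping"}
HIGH_RISK = {"hiring", "credit scoring", "law enforcement", "migration", "education"}
TRANSPARENCY = {"chatbot", "deepfake generator"}

def risk_tier(use_case: str) -> str:
    """Walk the tiers top-down: the most restrictive match wins."""
    if use_case in BANNED:
        return "unacceptable: prohibited outright"
    if use_case in HIGH_RISK:
        return "high: risk management, logging, human oversight, conformity assessment"
    if use_case in TRANSPARENCY:
        return "limited: disclosure and labeling obligations"
    return "minimal: no obligations"

print(risk_tier("hiring"))       # high tier
print(risk_tier("spam filter"))  # minimal tier (the default)
```

The order of the checks matters: a system could plausibly match more than one bucket, and the Act's most restrictive applicable category governs.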
General-purpose AI (GPAI) obligations
The Act added a separate track for GPAI models (what we call foundation models). Effective August 2025, GPAI providers must publish technical documentation, respect copyright opt-outs, and disclose a summary of training content. Models deemed to pose systemic risk (presumed when training compute exceeds 10^25 FLOPs) face additional obligations: model evaluations, risk mitigation, and serious incident reporting.
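To get a feel for where the 10^25 FLOP line falls, you can run a back-of-envelope estimate with the common ~6 × parameters × tokens rule of thumb for dense-transformer training compute. This is an approximation, not the Act's own measurement method, and the 70B-parameter / 15T-token model below is hypothetical:

```python
# Rough check against the Act's systemic-risk compute presumption.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act

def training_flops(params: float, tokens: float) -> float:
    """Estimate training compute via the ~6*N*D rule of thumb
    for dense transformers (an approximation, not a legal test)."""
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")                    # ~6.3e24
print(flops >= SYSTEMIC_RISK_THRESHOLD)  # False: just under the line
```

The example lands within a factor of two of the threshold, which is exactly the critics' point below: as training efficiency improves, raw FLOP counts become a noisier proxy for capability.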
The Code of Practice
Published July 2025 by the Commission, the GPAI Code of Practice gives providers a voluntary roadmap — sign it and you get a presumption of compliance. Most major labs signed (Anthropic, Google, Microsoft, OpenAI); a notable holdout was Meta, which publicly rejected the Code's copyright provisions.
Compare: US, EU, and China approaches
| Dimension | EU | US | China |
|---|---|---|---|
| Style | Comprehensive, risk-tiered | Sectoral + executive orders | Sectoral + national review |
| Foundation models | Covered (GPAI rules) | Voluntary commitments | Licensed, security review |
| High-risk lists | Enumerated in law | NIST voluntary framework | Algorithm registry |
| Penalty ceiling | Up to 7% of global annual turnover | Varies by agency | License suspension |
What the Act gets right
- Risk-based: low-risk AI is not bogged down in paperwork
- Clear bans: no ambiguity on social scoring or manipulative systems
- Transparency requirements that enable downstream auditing
- Extraterritorial scope: keeps providers from avoiding via geography
What critics say it gets wrong
- Compliance burden may concentrate market among well-resourced labs
- FLOPs thresholds for systemic risk are a moving target as efficiency improves
- Enumerated high-risk lists will lag novel deployments
- Definitions (emotion recognition, biometric categorization) are fuzzy in practice
- Member states implement enforcement unevenly
“The AI Act is not a final answer. It is the first serious attempt to write rules for systems that did not exist when we started writing.”
The big idea: the AI Act is simultaneously the most ambitious and most criticized AI regulation in the world. Whether you love it or hate it, you will likely build to its standards, and the arguments about it are the shape of AI governance for the rest of this decade.
