EU AI Act and Global Regulation: What Deployers Must Track
The EU AI Act is the world's first comprehensive AI regulation, and its effects reach well beyond Europe. Here's what deployers worldwide need to understand right now.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The EU AI Act in plain terms
- 2. EU AI Act
- 3. Risk tiers
- 4. High-risk AI
Concept cluster
Terms to connect while reading
Section 1
The EU AI Act in plain terms
The EU AI Act, adopted in 2024 and fully applicable from 2026, classifies AI systems into four risk tiers: unacceptable (banned outright), high-risk (allowed with strict requirements), limited risk (transparency obligations), and minimal risk (unregulated). The regulation applies to any AI system placed on the EU market or used by people in the EU — including systems built and operated entirely outside the EU.
Risk tier summary
Compare the options
| Tier | Examples | Key requirements |
|---|---|---|
| Unacceptable | Social scoring by governments, real-time biometric surveillance in public spaces | Banned — cannot deploy |
| High-risk | Hiring AI, credit scoring, education assessment, medical devices, critical infrastructure | Conformity assessment, human oversight, data governance, post-market monitoring |
| Limited risk | Chatbots, deepfake generators, emotion recognition | Transparency: disclose AI interaction and synthetic content |
| Minimal risk | Spam filters, AI in video games | No mandatory requirements |
General purpose AI (GPAI) provisions
Foundation models like GPT-4, Claude, and Gemini are classified as general-purpose AI (GPAI) under the Act. Models designated as posing systemic risk (presumed at training compute above 10^25 FLOPs, or designated by the Commission based on reach and capabilities) face additional requirements: adversarial testing, serious-incident reporting to EU authorities, energy consumption disclosure, and cybersecurity measures. Model providers bear primary responsibility; deployers building on top of them inherit reduced but real obligations.
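The compute-based presumption above can be sketched as a simple check. A minimal Python sketch, assuming the Act's 10^25-FLOP presumption threshold (which the Commission can adjust); the function and constant names are illustrative, not from any official tooling.

```python
# Sketch: the EU AI Act's systemic-risk presumption for GPAI models.
# The Act presumes systemic risk when cumulative training compute
# exceeds 1e25 floating-point operations; treat the constant as
# illustrative, since the threshold can be updated.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold under the Act

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a GPAI model is presumed to carry systemic risk."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(2e25))  # True: extra GPAI obligations apply
print(presumed_systemic_risk(5e23))  # False: baseline GPAI obligations only
```

Note that crossing the threshold only creates a presumption; designation can also happen below it, so a real compliance check cannot stop at this one number.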
Global snapshot beyond Europe
- United States: no federal AI act yet; sector-specific guidance from FTC, EEOC, CFPB. Executive Order on AI safety remains the primary federal framework. Several states (California, Colorado, Texas) have passed or are advancing AI regulation.
- United Kingdom: sector-based approach; regulators (FCA, CQC, ICO) apply existing rules to AI within their domains.
- China: separate regulations for generative AI (2023) and algorithmic recommendation. Mandatory security assessments for models released publicly.
- India: currently advisory; mandatory disclosure framework under development.
- Brazil: pending AI bill modeled partly on GDPR + EU AI Act.
The big idea: the EU AI Act is the global floor. Complying with it protects you in Europe and usually satisfies the emerging requirements in other jurisdictions. Start with a risk-tier classification for every AI feature you deploy.
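The recommended classification exercise can start as a plain inventory mapping each deployed feature to a tier and its headline obligations. A minimal Python sketch mirroring the risk-tier table above; the feature names and tier assignments are hypothetical examples, not legal advice.

```python
# Sketch: first-pass risk-tier triage for an AI feature inventory.
# Tiers and obligations mirror the summary table in this lesson;
# the example features and their assignments are hypothetical.
from enum import Enum

class Tier(Enum):
    UNACCEPTABLE = "banned, cannot deploy"
    HIGH = ("conformity assessment, human oversight, "
            "data governance, post-market monitoring")
    LIMITED = "transparency: disclose AI interaction and synthetic content"
    MINIMAL = "no mandatory requirements"

# Hypothetical deployer inventory.
inventory = {
    "resume_screening": Tier.HIGH,    # hiring AI is listed as high-risk
    "support_chatbot": Tier.LIMITED,  # users must know they are talking to AI
    "spam_filter": Tier.MINIMAL,
}

def obligations(feature: str) -> str:
    """Summarize the key requirements for a classified feature."""
    tier = inventory[feature]
    return f"{feature}: {tier.name} -> {tier.value}"

for feature in inventory:
    print(obligations(feature))
```

Keeping the inventory as data rather than prose makes it easy to re-run the triage whenever a feature changes or a tier definition is updated.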
Related lessons
Keep going
Adults & Professionals · 40 min
Model Cards and Transparency Reports: Reading the Fine Print
Model cards and transparency reports are how AI providers document what their systems can and can't do. Knowing how to read them — and what's missing — is a core deployer skill.
Adults & Professionals · 11 min
EU AI Act: Compliance for US Companies Doing Business in Europe
The EU AI Act applies to US companies serving European users. Compliance is complex and the penalties are significant.
Adults & Professionals · 11 min
Vendor AI Act Compliance Verification
AI Act compliance applies to vendors too. Verifying vendor compliance protects against downstream exposure.
