The EU AI Act is the world's first comprehensive AI regulation, and its effects reach well beyond Europe. Here's what deployers worldwide need to understand right now.
The EU AI Act entered into force in August 2024 and phases in through 2027: prohibitions on unacceptable-risk practices apply from February 2025, GPAI obligations from August 2025, and most remaining provisions from August 2026. The Act classifies AI systems into four risk tiers: unacceptable (banned outright), high-risk (allowed with strict requirements), limited risk (transparency obligations), and minimal risk (unregulated). It applies to any AI system placed on the EU market or used by people in the EU, including systems built and operated entirely outside the EU.
| Tier | Examples | Key requirements |
|---|---|---|
| Unacceptable | Social scoring by governments, real-time remote biometric identification in publicly accessible spaces (narrow law-enforcement exceptions apply) | Banned — cannot deploy |
| High-risk | Hiring AI, credit scoring, education assessment, medical devices, critical infrastructure | Conformity assessment, human oversight, data governance, post-market monitoring |
| Limited risk | Chatbots, deepfake generators, emotion recognition (outside the banned workplace and education uses) | Transparency: disclose AI interaction and synthetic content |
| Minimal risk | Spam filters, AI in video games | No mandatory requirements |
Foundation models such as GPT-4, Claude, and Gemini fall under the Act's general-purpose AI (GPAI) rules. Models deemed to pose systemic risk, presumed when cumulative training compute exceeds 10^25 FLOPs or when designated by the Commission based on capabilities and reach, face additional requirements: adversarial testing, incident reporting to EU authorities, energy-consumption disclosure, and cybersecurity measures. Model providers bear primary responsibility; deployers building on top of them inherit reduced but real obligations.
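The Act's compute-based presumption is concrete enough to express in a few lines. This is an illustrative sketch, not a legal test: the 10^25 FLOP threshold comes from Article 51 of the Act and may be adjusted by the Commission, and designation can also happen on other grounds.

```python
# Sketch of the EU AI Act's systemic-risk presumption for GPAI models.
# Article 51 presumes systemic risk when cumulative training compute
# exceeds 10**25 floating-point operations; the Commission may also
# designate models on other grounds, which this sketch ignores.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model meets the compute-based presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

if __name__ == "__main__":
    # A frontier-scale training run vs. a smaller model (illustrative figures).
    print(presumed_systemic_risk(5e25))  # True
    print(presumed_systemic_risk(1e23))  # False
```

In practice this check only tells a provider which obligation set applies; the adversarial testing, incident reporting, and disclosure duties above then attach to the model itself.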
The big idea: the EU AI Act is the global floor. Complying with it protects you in Europe and usually satisfies the emerging requirements in other jurisdictions. Start with a risk-tier classification for every AI feature you deploy.
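The classification step recommended above can start as a simple inventory lookup. This is a minimal sketch with hypothetical use-case keys; the tier assignments mirror the example table in this lesson, and any real assessment must follow the Act's Annex III categories with legal review.

```python
# Minimal sketch of a risk-tier triage over an AI feature inventory.
# Use-case names are illustrative; tiers mirror the lesson's table.
# Unknown features are flagged rather than silently defaulted.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "hiring_screening": "high",
    "credit_scoring": "high",
    "education_assessment": "high",
    "chatbot": "limited",
    "deepfake_generation": "limited",
    "spam_filter": "minimal",
    "game_npc_ai": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, or flag it for review."""
    return RISK_TIERS.get(use_case, "unclassified: needs legal review")

if __name__ == "__main__":
    inventory = ["chatbot", "hiring_screening", "game_npc_ai", "gait_analysis"]
    for feature in inventory:
        print(f"{feature}: {classify(feature)}")
```

The design choice worth copying is the fallback: anything not explicitly classified gets routed to review instead of being treated as minimal risk.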
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-ethics-safety-eu-ai-act-global-regulation-adults
A company based in Canada develops an AI-powered hiring tool that it sells exclusively to companies in Brazil and Australia. Under what circumstances would this company become subject to the EU AI Act?
Which of the following AI systems would be classified as 'limited risk' under the EU AI Act?
According to the regulatory timeline, when do the prohibitions on unacceptable risk AI practices take effect?
Under the EU AI Act, which entity bears primary responsibility for compliance with GPAI (general purpose AI) model requirements such as adversarial testing and incident reporting?
Which of the following is explicitly banned under the EU AI Act's unacceptable risk category?
What specific obligations does the EU AI Act impose on providers of very large foundation models that are deemed to have 'systemic risk'?
A fintech startup in Germany wants to deploy an AI system that analyzes transaction patterns to detect fraud. What risk tier and key requirements would apply?
How does the current US federal approach to AI regulation compare to the EU AI Act?
What approach has the United Kingdom taken to AI regulation?
China has implemented specific regulations for AI systems. Which statement accurately reflects China's regulatory approach?
A company is developing an AI-powered resume screening tool for hiring. Under the EU AI Act, what would be the primary compliance requirement before deploying this system in the EU market?
Which of the following best describes the current state of AI regulation in Brazil?
An AI-powered video game uses machine learning for non-player character behavior. How would this be classified under the EU AI Act?
What is the deadline for full implementation of the EU AI Act across all provisions?
Which of the following is a transparency obligation under the EU AI Act for limited risk systems?