Know-Your-Customer Rules for AI Compute
If you sell cloud GPUs, the US government may soon require you to verify who your customers are. Know-your-customer rules from finance are being ported into AI infrastructure.
Learning path
The main moves in order
1. The Proposed Rules
2. KYC
3. IaaS
4. Export controls
Section 1
The Proposed Rules
In January 2024, the US Department of Commerce released a notice of proposed rulemaking requiring US Infrastructure-as-a-Service providers — AWS, Google Cloud, Microsoft Azure, Lambda, CoreWeave, and others — to verify foreign customers' identities and report large AI training runs. The rule was finalized in January 2025 as the IaaS rule.
What it requires
- Verify identity of foreign customers opening new accounts (name, address, ID documents)
- Retain records for five years
- Report to Commerce when foreign customers train models above specific compute thresholds
- Report transactions that could enable cyberattacks
- Certify a Customer Identification Program (CIP) annually
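The obligations above can be sketched as a per-account data model. This is a hypothetical illustration of what a provider's KYC record might contain — the rule does not prescribe a schema, and every field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION = timedelta(days=5 * 365)  # records retained for five years

@dataclass
class ForeignCustomerRecord:
    """Hypothetical KYC record an IaaS provider might retain (illustrative only)."""
    name: str
    address: str
    id_document: str                  # e.g. a passport or registration number
    opened: date                      # account-opening date starts the retention clock
    training_reports: list = field(default_factory=list)  # filings for above-threshold runs

    def retention_expires(self) -> date:
        # Date after which the five-year retention obligation lapses
        return self.opened + RETENTION

rec = ForeignCustomerRecord(
    name="Acme Labs",
    address="12 Rue Exemple, Paris",
    id_document="P1234567",
    opened=date(2025, 3, 1),
)
print(rec.retention_expires())
```

The point of keeping a structured record like this is that the annual CIP certification and the five-year retention requirement both become simple queries over the same data.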
Why this is contested
1. Compliance costs fall hardest on small cloud providers
2. The definition of a 'training run' is technical and can be circumvented by splitting work into chunked jobs
3. Enforcement relies on providers knowing what customers are doing with the compute
4. Privacy and civil-liberties concerns from international customers
5. Coordinated action with the EU, UK, and Singapore is needed to avoid regulatory arbitrage
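The chunked-jobs loophole in point 2 is easy to see in code: if the reporting check is applied per job rather than per model, splitting one large run into smaller jobs keeps every piece under the threshold. A toy sketch — the threshold value, model name, and FLOP figures are all made up for illustration:

```python
# Toy illustration of the chunked-job loophole: per-job checks miss a
# large run split into pieces; aggregating by model catches it.
THRESHOLD_FLOPS = 1e26  # hypothetical reporting threshold

jobs = [
    {"model": "frontier-1", "flops": 4e25},
    {"model": "frontier-1", "flops": 4e25},
    {"model": "frontier-1", "flops": 4e25},  # 1.2e26 FLOPs in total
]

# Naive rule: flag any single job at or above the threshold
per_job_flagged = [j for j in jobs if j["flops"] >= THRESHOLD_FLOPS]

# Aggregated rule: sum compute per model, then compare to the threshold
totals: dict[str, float] = {}
for j in jobs:
    totals[j["model"]] = totals.get(j["model"], 0.0) + j["flops"]
aggregated_flagged = [m for m, f in totals.items() if f >= THRESHOLD_FLOPS]

print(per_job_flagged)     # no job crosses the line on its own
print(aggregated_flagged)  # the aggregate total does
```

The catch, of course, is that aggregation requires the provider to know which jobs belong to the same training effort — which circles back to point 3.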
“If you can buy unlimited compute anonymously, no frontier model regulation is enforceable.”
The big idea: governments are treating compute like a strategic resource on par with uranium or semiconductors. Expect the rules to keep tightening and keep changing.