Know-Your-Customer Rules for AI Compute
If you sell cloud GPUs, the US government may soon require you to verify who your customers are. Know-your-customer rules from finance are being ported into AI infrastructure.
Adults & Professionals · Ethics & Society · ~22 min read
The Proposed Rules
In January 2024, the US Department of Commerce released a notice of proposed rulemaking that would require US Infrastructure-as-a-Service (IaaS) providers — AWS, Google Cloud, Microsoft Azure, Lambda, CoreWeave, and others — to verify foreign customers' identities and report large AI training runs. The rule was finalized in January 2025 as the IaaS rule.
What it requires
- Verify identity of foreign customers opening new accounts (name, address, ID documents)
- Retain records for five years
- Report to Commerce when foreign customers train models above specific compute thresholds
- Report transactions that could enable cyberattacks
- Certify their Customer Identification Program (CIP) annually
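For a provider building compliance tooling, the identity-verification and five-year retention requirements above might be modeled as follows. This is a hypothetical sketch; the field names and structure are assumptions for illustration, not taken from the rule text.

```python
from dataclasses import dataclass
from datetime import date

# How long records must be retained after account opening, per the rule.
RETENTION_YEARS = 5

@dataclass
class CustomerRecord:
    """Hypothetical KYC record a provider might retain for a foreign customer."""
    name: str
    address: str
    id_document: str  # e.g. a passport or national ID number
    opened: date      # date the account was opened

    def retain_until(self) -> date:
        # Keep the record for five years from account opening.
        return self.opened.replace(year=self.opened.year + RETENTION_YEARS)

record = CustomerRecord("Example Corp", "123 Example St", "P1234567",
                        date(2025, 3, 1))
print(record.retain_until())  # 2030-03-01
```

In practice a provider would also need audit logging and secure storage; this only shows the shape of the data the rule asks providers to collect and keep.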
Why this is contested
1. Compliance cost falls hardest on small cloud providers
2. The definition of a 'training run' is technical and can be circumvented with chunked jobs
3. Enforcement relies on providers knowing what customers are doing with the compute
4. Privacy and civil liberties concerns from international customers
5. Coordinated action is needed with the EU, UK, and Singapore to avoid arbitrage
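The chunked-jobs loophole in point 2 can be shown with a toy calculation: split one large training run into checkpoint-resumed jobs that each sit under a per-job reporting threshold. The threshold value and job sizes here are illustrative assumptions, not figures from the rule.

```python
# Toy illustration of the chunked-job problem: no single job crosses a
# per-job reporting threshold, but the combined run does.
THRESHOLD_FLOP = 1e26  # illustrative threshold, not the rule's actual figure

# One large run split into four checkpoint-resumed chunks (FLOPs each).
chunks = [4e25, 3e25, 3e25, 2e25]

per_job_reportable = [c >= THRESHOLD_FLOP for c in chunks]
total = sum(chunks)

print(per_job_reportable)       # [False, False, False, False]
print(total >= THRESHOLD_FLOP)  # True
```

This is why a per-job threshold alone is easy to game: effective enforcement would have to aggregate compute per customer over time, which in turn depends on the provider knowing which jobs belong to the same run (point 3).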
“If you can buy unlimited compute anonymously, no frontier model regulation is enforceable.”
The big idea: governments are treating compute like a strategic resource on par with uranium or semiconductors. Expect the rules to keep tightening and keep changing.