SB 1047: California's AI Safety Bill
In 2024, California's legislature passed the first US state bill targeting frontier AI safety. Governor Newsom vetoed it. The fight reshaped the AI policy landscape.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. What the Bill Proposed
2. SB 1047
3. Frontier AI
4. AI regulation
Section 1
What the Bill Proposed
California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, introduced by State Senator Scott Wiener in February 2024, would have applied to 'covered models': models trained with more than 10^26 FLOPs of compute or at a cost of over $100 million. It required safety testing, a full-shutdown capability, and legal liability for catastrophic harms where developers had not taken reasonable precautions.
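The 'covered model' definition above is a simple disjunction of two thresholds. A minimal sketch of that test, assuming illustrative function and variable names (not from any real compliance tool):

```python
# SB 1047's "covered model" thresholds, as described in the bill summary above.
FLOP_THRESHOLD = 1e26          # training compute threshold (FLOPs)
COST_THRESHOLD = 100_000_000   # training cost threshold (USD)

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would have met the 'covered model' definition.

    A model is covered if EITHER threshold is exceeded, not both.
    Function name and signature are illustrative assumptions.
    """
    return training_flops > FLOP_THRESHOLD or training_cost_usd > COST_THRESHOLD

# A model trained with 3e25 FLOPs for $40M would not have been covered;
# exceeding either threshold alone is enough to be covered.
print(is_covered_model(3e25, 40_000_000))    # → False
print(is_covered_model(2e26, 150_000_000))   # → True
```

Note that the either-or structure is what Newsom's veto message objected to: spend and compute are proxies for capability, and a cheap but capable model would have escaped coverage entirely.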
Key provisions
- Safety and security protocols submitted to the state Attorney General
- A capability to fully shut down a covered model
- Third-party auditing of safety practices
- Whistleblower protections for lab employees
- Tort liability for serious harms where developers acted unreasonably
- A Board of Frontier Models within the California Government Operations Agency
The veto and aftermath
The bill passed both chambers of the California Legislature in August 2024. On September 29, 2024, Governor Gavin Newsom vetoed it. His veto message argued that the bill's size threshold was a poor proxy for risk and that narrower, more technical regulation was preferable. He commissioned a working group, which released its recommendations in 2025.
1. SB 53 (Wiener, 2025): a narrower follow-up focusing on transparency and whistleblower protections — signed into law
2. Frontier labs published or expanded their Responsible Scaling Policies in 2024–2025
3. The federal AI Action Plan and executive orders filled some of the gap
4. Other states (Colorado, New York) moved on different AI regulations
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security.”
— Governor Newsom's veto message, September 2024
The big idea: SB 1047 did not become law, but it established the vocabulary and battle lines for every frontier AI regulation that follows. Read the bill if you want to understand current policy debates.
Related lessons
Keep going
Creators · 45 min
Constitutional AI: A Deep Dive on Anthropic's Approach
What a constitution actually contains, how the training loop works, where the research is now, and the honest trade-offs.
Creators · 40 min
Data Poisoning: Attacking AI Through Its Training Set
The attacker does not need access to the model. They only need to put a few carefully chosen examples into its training data. Here is how that works and why it is unsolved.
Creators · 36 min
Know-Your-Customer Rules for AI Compute
If you sell cloud GPUs, the US government may soon require you to verify who your customers are. Know-your-customer rules from finance are being ported into AI infrastructure.
