In 2024, California came close to enacting the first US state law targeting frontier AI safety. Governor Newsom vetoed it. The fight reshaped the AI policy landscape.
California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), introduced by State Senator Scott Wiener in February 2024, would have applied to 'covered models' — models trained with more than 10^26 FLOPs and costing over $100 million to train. It required safety testing, a full-shutdown capability (a 'kill switch'), and legal liability for developers whose models caused catastrophic harms when reasonable precautions had not been taken.
The bill passed both California chambers in August 2024. On September 29, 2024, Governor Gavin Newsom vetoed it. His veto message argued the bill's size threshold was a poor proxy for risk and that narrower, more technical regulation was preferable. He commissioned a working group, which released recommendations in 2025.
By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security.
— Governor Gavin Newsom, veto message, September 2024
The big idea: SB 1047 did not become law, but it established the vocabulary and battle lines for every frontier AI regulation that follows. Read the bill if you want to understand current policy debates.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-safety2-sb1047-creators
Which California state legislator introduced SB 1047?
Which of these was a key requirement for developers under SB 1047?
Which of the following individuals or organizations SUPPORTED SB 1047 after amendments?
Which organization called SB 1047 a threat to 'open source' AI development?
What was Governor Newsom's main criticism of SB 1047 in his veto message?
Which follow-up bill did Senator Wiener introduce in 2025 that focused on transparency and whistleblower protections?
What was the 'Board of Frontier Models' that SB 1047 would have created?
Which of these was NOT mentioned as a key provision of SB 1047?
What is a 'Responsible Scaling Policy' in the context of frontier AI labs?
Which prominent AI researcher was among the supporters of SB 1047?
What was the approximate date when Governor Newsom vetoed SB 1047?
Which argument against SB 1047 claimed the bill regulated the wrong aspect of AI systems?
What type of harms did SB 1047 aim to assign tort liability for?
After SB 1047 was vetoed, which states moved forward with different AI regulations?
What was the 'big idea' the lesson identified as SB 1047's lasting impact?