The big international AI summits produce non-binding declarations. Even so, they shape the rules. Here is what each one did.
International AI governance has so far happened mostly at summits, not in treaties. The summits produce declarations, which are promises without enforcement. They still matter because they set the agenda and pull labs into public commitments.
Held in November 2023 at Bletchley Park, the WWII codebreaking site, this was the first global AI safety summit. 28 countries plus the EU signed the Bletchley Declaration, including the UK, US, China, France, Germany, India, Japan, and Saudi Arabia. The big story was not the text (which was general) but the fact that the US and China signed the same document about AI safety.
The follow-up summit in Seoul (May 2024) added teeth. 16 frontier AI companies signed the Frontier AI Safety Commitments: publish a safety framework describing when they would not deploy a model, explain capability thresholds, and report to governments. Signatories include Anthropic, OpenAI, Google, Microsoft, Meta, Amazon, Mistral, xAI, Samsung, and Zhipu.ai.
The Paris AI Action Summit (February 2025), hosted by France, shifted the tone from safety to action: France pushed an innovation and competitiveness framing, partly in reaction to the new US administration's hands-off posture. 61 countries signed a Statement on Inclusive and Sustainable AI; the US and UK did not sign. The headline was the soft split into two camps: safety-first versus innovation-first.
| Summit | Focus | Key output |
|---|---|---|
| Bletchley 2023 | Safety risks | First US/China co-signed AI declaration |
| Seoul 2024 | Lab commitments | Frontier AI Safety Commitments, 16 labs |
| Paris 2025 | Innovation vs. safety rebalance | 61-country declaration, US/UK did not sign |
> We learn more in a weekend at Bletchley than in six months of working papers. The room changes the math.
>
> — A participant in the Bletchley Summit
The big idea: international AI governance in the mid-2020s is a pattern of soft commitments by a small circle of wealthy countries. It is a starting point, not a finish line.