Bletchley, Seoul, Paris: How Countries Talk About AI
The big international AI summits produce non-binding declarations. Even so, they shape the rules. Here is what each one did.
Lesson map
What this lesson covers
Learning path
The main moves in order
- A Series of Handshakes
- Bletchley Declaration
- Seoul Declaration
- Paris AI Action Summit
Section 1
A Series of Handshakes
International AI governance has so far happened mostly at summits, not through treaties. The summits produce declarations: promises with no enforcement mechanism. They still matter because they set the agenda and pull labs into public commitments they can be held to later.
Bletchley, November 2023
Held at Bletchley Park, the WWII codebreaking site, this was the first global AI safety summit. 28 countries plus the EU signed the Bletchley Declaration, including the UK, US, China, France, Germany, India, Japan, and Saudi Arabia. The big story was not the text (which was general) but the fact that the US and China signed the same document about AI safety.
Seoul, May 2024
The follow-up summit made the commitments more concrete. Sixteen frontier AI companies signed the Frontier AI Safety Commitments, pledging to publish a safety framework describing when they would not deploy a model, to define the capability thresholds behind those decisions, and to report to governments. Signatories include Anthropic, OpenAI, Google, Microsoft, Meta, Amazon, Mistral, xAI, Samsung, and Zhipu.ai.
Paris, February 2025
Hosted by France. The tone shifted from safety to action: France pushed an innovation-and-competitiveness framing, partly in reaction to the new US administration's hands-off posture. 61 countries signed a Statement on Inclusive and Sustainable AI; the US and UK declined to sign. That soft split into two camps was the headline.
Compare the options
| Summit | Focus | Key output |
|---|---|---|
| Bletchley 2023 | Safety risks | First US/China co-signed AI declaration |
| Seoul 2024 | Lab commitments | Frontier AI Safety Commitments, 16 labs |
| Paris 2025 | Innovation vs. safety rebalance | 61-country declaration; US/UK declined to sign |
“We learn more in a weekend at Bletchley than in six months of working papers. The room changes the math.”
The big idea: international AI governance in the mid-2020s is a pattern of soft commitments, declarations and voluntary pledges rather than binding treaties. It is a starting point, not a finish line.
Related lessons
Builders · 30 min
When AI Decides Something That Matters
AI is now involved in hiring, loans, medical care, and criminal sentencing. Here are the documented cases and the frameworks being built in response.
Builders · 25 min
Red-Teaming: People Paid to Break AI
Red-teamers try to make models misbehave before bad actors do. Here is how the job works, who does it, and what they look for.
Builders · 25 min
Provenance: How the Internet Plans to Label AI Content
C2PA, SynthID, and Content Credentials are the quiet standards deciding what is real online. Here is what they do and where the gaps are.
