Dual-Use Research Disclosure: When Publishing AI Capabilities Creates Risk
Publishing AI research or releasing models creates benefits and risks simultaneously. The norms for when to disclose, delay, or withhold are evolving — deployers need a framework.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. What dual-use means in AI
2. The disclosure spectrum
3. Deployer obligations beyond research labs
4. Red lines: capabilities that should not be released
Concept cluster
Terms to connect while reading
dual-use · responsible disclosure · staged release
Section 1
What dual-use means in AI
Dual-use research produces knowledge or tools that have both beneficial and harmful applications. In AI, this applies to capability research (models that can generate convincing synthetic media, summarize technical literature at expert level, or assist with complex planning) as well as to security research (attack techniques, jailbreaks, adversarial examples). Publishing either can simultaneously advance the field and give bad actors an edge.
Section 2
The disclosure spectrum
1. Full open release: publish everything (code, weights, paper, data). Maximizes scientific progress and peer verification. Appropriate when the capability is already widely available and the marginal uplift from publication is low.
2. Staged release: release the paper and high-level findings; hold back model weights or fine-tuning details. Allows scientific scrutiny without immediate democratization of the capability.
3. Coordinated disclosure: notify affected parties (other labs, governments, security vendors) before public release, giving them time to prepare defenses.
4. Redacted publication: publish the paper with specific harmful details removed. Common for vulnerability research.
5. No release: when the harm potential is sufficiently extreme and irreversible, withhold entirely. Rare but not unprecedented: some dual-use biology research has been suppressed. (See the sketch after this list.)
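To make the spectrum concrete, here is a minimal sketch in Python of how a deployer might encode these tiers plus a first-pass selection heuristic. The enum, the thresholds, and the suggest_tier function are illustrative assumptions, not an established standard:

```python
from enum import Enum, auto


class ReleaseTier(Enum):
    """The five points on the disclosure spectrum above."""
    FULL_OPEN = auto()    # code, weights, paper, data
    STAGED = auto()       # paper now; weights and fine-tuning details held back
    COORDINATED = auto()  # notify affected parties before going public
    REDACTED = auto()     # publish with specific harmful details removed
    NO_RELEASE = auto()   # withhold entirely


def suggest_tier(marginal_uplift: float,
                 harm_irreversible: bool,
                 defenders_need_lead_time: bool) -> ReleaseTier:
    """Toy heuristic mapping a benefit-harm assessment onto a tier.

    marginal_uplift: rough 0-1 estimate of how much publication helps
    bad actors beyond what is already publicly available.
    """
    if harm_irreversible and marginal_uplift > 0.8:
        return ReleaseTier.NO_RELEASE
    if defenders_need_lead_time:
        return ReleaseTier.COORDINATED
    if marginal_uplift > 0.5:
        return ReleaseTier.STAGED
    if marginal_uplift > 0.2:
        return ReleaseTier.REDACTED
    return ReleaseTier.FULL_OPEN


print(suggest_tier(0.6, False, True))  # ReleaseTier.COORDINATED
```

The value of writing something like this down is not the numbers; it is forcing the inputs (marginal uplift, irreversibility, defender lead time) to be stated explicitly before release pressure builds.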
Section 3
Deployer obligations beyond research labs
Dual-use considerations don't apply only to academic publications. Deployers must ask: if a user discovers a way to use our product to cause harm, what are our obligations? Do we document the abuse case, post a mitigation guide, notify the model provider? Most deployers have no formal process for this. Building one before you need it is the move.
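As a starting point for such a process, here is a hedged sketch of what the record-keeping and triage could look like. The DualUseFinding fields, the severity labels, and the triage rules are assumptions for illustration; a real process would reflect your contracts, provider terms, and jurisdiction:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DualUseFinding:
    """Internal record for a harmful-use discovery in a deployed product."""
    summary: str
    reporter: str
    severity: str                    # assumed scale: "low" | "medium" | "high" | "critical"
    reproducible: bool
    discovered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    notified_provider: bool = False  # has the upstream model provider been told?
    mitigation_shipped: bool = False


def triage(finding: DualUseFinding) -> list[str]:
    """Return ordered follow-up actions for a finding (illustrative policy only)."""
    actions = ["log the finding in an internal registry"]
    if finding.reproducible:
        actions.append("add an abuse-detection rule or regression test")
    if finding.severity in ("high", "critical"):
        actions.append("notify the model provider before any public write-up")
    if finding.severity == "critical":
        actions.append("restrict the affected feature until a mitigation ships")
    actions.append("pick a disclosure tier from the spectrum above")
    return actions


finding = DualUseFinding(
    summary="prompt chain coaxes the product into drafting scam scripts",
    reporter="support escalation", severity="high", reproducible=True)
for step in triage(finding):
    print("-", step)
```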
Section 4
Red lines: capabilities that should not be released
The AI safety community broadly agrees on certain red lines: AI systems that provide meaningful uplift for weapons of mass destruction, that meaningfully undermine oversight of powerful AI systems, or that enable mass-scale manipulation with no defensive dual use. These are not just research norms — they are increasingly being encoded into usage policies and, in some jurisdictions, law.
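Since these red lines are increasingly encoded into usage policies, one loose illustration (the category names here are invented, not taken from any real provider's policy) is a hard gate in a release checklist:

```python
# Red-line categories as they might appear in a usage policy (names invented
# for illustration), plus a hard gate that blocks release outright.
RED_LINES = {
    "wmd_uplift",             # meaningful uplift for weapons of mass destruction
    "oversight_undermining",  # meaningfully undermines oversight of powerful AI
    "mass_manipulation",      # mass-scale manipulation with no defensive dual use
}


def passes_red_line_review(assessed_risks: set[str]) -> bool:
    """Unlike the weighed tiers above, a red line is absolute: any hit blocks release."""
    return RED_LINES.isdisjoint(assessed_risks)


assert passes_red_line_review({"spam_uplift"})    # not a red line: proceed to tier choice
assert not passes_red_line_review({"wmd_uplift"}) # red line: no release, full stop
```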
Key terms in this lesson: dual-use · responsible disclosure · staged release
The big idea: disclosure decisions require an explicit benefit-harm calculus, not a default of publish-everything or share-nothing. Build the calculus before the capability ships, not after.