Bio Risk and AI: A Measured Look
Could AI help someone build a bioweapon? It's a serious question with a boring, important answer. Here is what the evidence shows without the scare quotes.
Lesson map
The main moves, in order:
1. The Actual Concern
2. Biosecurity
3. Uplift studies
4. Dual-use
Section 1
The Actual Concern
The worry is not that ChatGPT tells a student to mix bleach and ammonia. The worry is uplift: that a non-expert adversary could use an advanced AI to bridge gaps that currently keep them from causing mass-casualty harm — protocol design, troubleshooting, reagent sourcing, lab technique.
What the evaluations actually measure
- RAND (2023): LLMs with jailbreak access provided some planning assistance but did not enable attack pathways beyond what the open literature already offers
- OpenAI GPT-4 preparedness framework (2023-2024): 'mild uplift' on bio tasks for a PhD in biology; larger uplift for novices on narrow tasks
- Anthropic's Claude 3.5 Sonnet system card (2024): measured uplift below thresholds for restricted release in biology specifically
- UK and US AISI bio evaluations: structured red-teaming with subject-matter experts and bio-graduate-level testers
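In studies like these, "uplift" is typically operationalized as the gap in task-success rates between AI-assisted testers and an internet-only control group. A minimal sketch of that arithmetic, assuming a simple two-group design (the `uplift` helper and all numbers below are illustrative, not drawn from any published study):

```python
import math

def uplift(successes_ai: int, n_ai: int,
           successes_ctrl: int, n_ctrl: int,
           z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and ~95% CI for uplift, measured as the
    difference in task-success rates between an AI-assisted group
    and an internet-only control group (normal approximation)."""
    p_ai = successes_ai / n_ai
    p_ctrl = successes_ctrl / n_ctrl
    diff = p_ai - p_ctrl
    se = math.sqrt(p_ai * (1 - p_ai) / n_ai
                   + p_ctrl * (1 - p_ctrl) / n_ctrl)
    return diff, diff - z * se, diff + z * se

# Made-up example: 12/40 AI-assisted testers vs 8/40 controls
# complete a benchmark task.
diff, low, high = uplift(12, 40, 8, 40)
print(f"uplift = {diff:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

With these toy numbers the interval straddles zero, which is exactly the ambiguity the real evaluations wrestle with: a small positive point estimate whose significance depends on sample size and task selection.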
What current models cannot do (as of ~2025)
1. Design a novel pathogen from scratch without experimental validation
2. Guide synthesis without access to restricted precursors and equipment
3. Replace the tacit knowledge that wet-lab work requires
4. Bypass the biosecurity gatekeeping at DNA synthesis companies (IGSC member screening)
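The screening gate in the last item works roughly like this: before fulfilling an order, synthesis providers compare the ordered sequence against curated hazard databases. Real screening (e.g. under the IGSC Harmonized Screening Protocol) uses homology search over curated databases plus customer vetting; this toy exact k-mer match, with made-up names and sequences, illustrates only the gating idea:

```python
def screens_out(order_seq: str, hazard_seqs: list[str], k: int = 20) -> bool:
    """Flag an order if any length-k window of it appears verbatim
    in any hazard reference sequence (toy stand-in for the homology
    search real providers run)."""
    order_seq = order_seq.upper()
    # All length-k windows of the ordered sequence.
    windows = {order_seq[i:i + k] for i in range(len(order_seq) - k + 1)}
    for ref in hazard_seqs:
        ref = ref.upper()
        for i in range(len(ref) - k + 1):
            if ref[i:i + k] in windows:
                return True  # shared k-mer -> flag for human review
    return False

# Toy demo with made-up sequences (not real pathogen data):
hazard_db = ["ATCGATCGATCGATCGATCGATCGATCGATCGATCGATCG"]
print(screens_out("TTTTATCGATCGATCGATCGATCGGGGG", hazard_db))  # True
```

In practice a hit does not block the order outright; it routes it to human review, which is why the gate is a speed bump for novices rather than an absolute barrier.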
“The honest answer is that current LLMs provide real but modest uplift for a determined adversary. The honest concern is that 'current' is a word with a short half-life.”
The big idea: bio risk from AI is neither hysteria nor hypothetical. It's a measurable, instrumentable concern where the numbers so far have been modest, the trajectory matters, and the countermeasures are real work already underway.
