Could AI help someone build a bioweapon? It's a serious question with a boring, important answer. Here is what the evidence shows, without the scare quotes.
The worry is not that ChatGPT tells a student to mix bleach and ammonia. The worry is uplift: that a non-expert adversary could use an advanced AI to bridge gaps that currently keep them from causing mass-casualty harm — protocol design, troubleshooting, reagent sourcing, lab technique.
The honest answer is that current LLMs provide real but modest uplift for a determined adversary. The honest concern is that 'current' is a word with a short half-life.
— Helena Fu / US CAISI framing, paraphrased from public remarks
The big idea: bio risk from AI is neither hysteria nor hypothetical. It's a measurable, instrumentable concern where the numbers so far have been modest, the trajectory matters, and the countermeasures are real work already underway.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-safety2-bio-risk-ai-creators
In the context of AI biosecurity, what does the term 'uplift' refer to?
According to the 2023 RAND evaluation, what did jailbroken LLMs provide regarding bioweapon development?
What does 'dual-use' mean when applied to biological research?
Which of the following is identified in the material as something current large language models CANNOT do?
What is the role of the International Gene Synthesis Consortium (IGSC) in biosecurity?
What did OpenAI's GPT-4 preparedness framework find about 'uplift' in biological tasks?
Why is uplift for 'non-expert adversaries' considered the primary policy concern?
What did Anthropic's Claude 3.5 Sonnet system card measure regarding biology-related tasks?
What concern is raised about biology-specific AI models like AlphaFold3, ESM-3, and RFDiffusion?
The lesson states that 'current' is a word with a short half-life. What does this suggest?
Which of the following is mentioned as an existing countermeasure already underway?
What is the 'big idea' presented in the lesson about AI and bio risk?
What do structured red-teaming evaluations by UK and US AI Safety Institutes involve?
What type of knowledge does the lesson say large language models cannot replace for wet-lab biological work?
Why might DNA synthesis screening requirements be tightened?