Researchers at serious labs and universities publicly worry about AI going very wrong. Here is what they mean, what they disagree about, and how to read the headlines.
You have seen the headlines: AI could end humanity. You have also seen the opposite: AI doom is sci-fi nonsense. Neither slogan is how working researchers talk. They talk about a spectrum of severity and a set of specific, concrete pathways.
OpenAI's Preparedness Framework and Anthropic's Responsible Scaling Policy both enumerate similar categories of high-severity risk. The categories, in rough agreement:
| Claim | Honest version | Dishonest version |
|---|---|---|
| AI could help with bioweapons | Measurable uplift on novice tasks; weapons still require wet-lab capability not in the model | AI will design pandemics by Tuesday |
| AI could do long autonomous projects | Task horizons growing exponentially, hours-scale today | AI is about to be CEO |
| AI could destabilize elections | Persuasion and personalized disinfo are cheaper | AI is why your candidate lost |
> I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
>
> — Geoffrey Hinton, 2023 (on leaving Google)
The big idea: catastrophic risk is a real research agenda with real evidence. The honest version is smaller than the panicky version and bigger than the dismissive version. Read the papers, not the tweets.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-safety-catastrophic-risk-intro-builders
What is the core idea behind "Catastrophic Risk, Without the Panic"?
Which term best describes a foundational idea in "Catastrophic Risk, Without the Panic"?
A learner studying Catastrophic Risk, Without the Panic would need to understand which concept?
Which of these is directly relevant to Catastrophic Risk, Without the Panic?
Which of the following is a key point about Catastrophic Risk, Without the Panic?
Which of these does NOT belong in a discussion of Catastrophic Risk, Without the Panic?
Which statement is accurate regarding Catastrophic Risk, Without the Panic?
Which of these does NOT belong in a discussion of Catastrophic Risk, Without the Panic?
What is the key insight about "What the 2023 statement actually said" in the context of Catastrophic Risk, Without the Panic?
What is the key insight about "Two takes you should distrust equally" in the context of Catastrophic Risk, Without the Panic?
What is the recommended tip in the "Key insight" section of Catastrophic Risk, Without the Panic?
Which statement accurately describes an aspect of Catastrophic Risk, Without the Panic?
What does working with Catastrophic Risk, Without the Panic typically involve?
Which of the following is true about Catastrophic Risk, Without the Panic?
Which best describes the scope of "Catastrophic Risk, Without the Panic"?