Insurers price risk. As AI begins causing real losses, they are being forced to price AI risk too, and the resulting contracts are quietly becoming a major governance force.
Insurance underwriters ask three questions: what can go wrong, how often, and how badly? Until recently they had no loss data for AI. By 2024–2025, as AI-caused losses multiplied (hallucinated legal citations, defamatory outputs, agentic misbehavior, discriminatory hiring tools), the industry began writing AI-specific policies.
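The underwriter's three questions reduce to a standard expected-loss calculation: frequency times severity, plus a loading for expenses and uncertainty. A minimal sketch (the function names, loading factor, and the frequency/severity figures are illustrative assumptions, not from this lesson):

```python
# Expected annual loss = claim frequency * average claim severity.
# Premium = expected loss plus a loading for expenses, profit, and
# parameter uncertainty (large for AI, where loss data is scarce).

def expected_loss(annual_frequency: float, avg_severity: float) -> float:
    """'What can go wrong, how often, how badly' - as one number."""
    return annual_frequency * avg_severity

def premium(annual_frequency: float, avg_severity: float,
            loading: float = 0.3) -> float:
    """Pure premium scaled up by an illustrative 30% loading."""
    return expected_loss(annual_frequency, avg_severity) * (1.0 + loading)

# Hypothetical AI-liability scenario: 0.05 claims/year, $200k per claim.
print(premium(0.05, 200_000))  # 13000.0
```

With no historical AI loss data, both inputs are guesses, which is why insurers push loadings up or demand risk controls before quoting at all.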
You will see more AI governance from underwriters than from regulators for the next several years.
— Industry analyst quote, reported widely in 2024-2025
The big idea: insurance is slow, boring, and powerful. When insurers decide what is insurable, they partly decide what is deployable. Watch this space for governance that is quieter than regulation but more durable.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-safety2-ai-insurance-creators
1. What is the core question insurance underwriters ask when pricing risk for any policy?
2. Which of the following is an example of 'affirmative AI coverage'?
3. What must a company typically provide to obtain AI insurance coverage?
4. Why is systemic risk particularly difficult to insure in AI?
5. What is 'moral hazard' in the context of AI insurance?
6. What does Munich Re's aiSure product provide?
7. Why is 'cascading agent losses' particularly challenging for insurers?
8. What is the primary concern about 'emergent capability' as an insurable risk?
9. What is required to prevent moral hazard in well-designed AI insurance?
10. Why is copyright exposure difficult for insurers to price?
11. What typically happens to older E&O and cyber policies regarding AI losses?
12. What is an 'AI performance guarantee' in insurance terms?
13. What does the lesson identify as the 'big idea' about insurance's role in AI?
14. What type of risk are Lloyd's of London experiments with frontier-AI risk syndicates attempting to address?
15. Why do insurers generally exclude intentional harm and nation-state attacks from AI policies?