AI Suicide Hotline Handoff: Mandatory Protocol
Why AI chat triage on crisis lines must hand off to humans on any safety signal.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The premise
2. Crisis
3. Imminent risk
4. Handoff
Section 1
The premise
On any imminent-risk signal, the AI must hand off to a trained human counselor immediately and stay quiet until they arrive.
What AI does well here
- Triage non-crisis conversations
- Translate between languages
- Surface local resources
What AI cannot do
- Provide therapy
- Decide if intent is genuine
- Replace a trained crisis counselor
Designing a crisis handoff that does not harm while it helps
Crisis line AI integration has grown rapidly since 988 launched in 2022, driven by volume that exceeds counselor capacity. AI triage can meaningfully extend capacity for low-acuity contacts — routine check-ins, resource requests, general emotional support — while preserving human counselor time for imminent-risk contacts.

The handoff design is where most implementations fail. Three common failure modes appear in deployed crisis AI: false-negative risk detection (missing implicit suicidal ideation that does not use explicit language), handoff delay (the system takes too long to escalate while the person in crisis waits), and abandoned handoff (the system pages a counselor but the person disconnects during the wait).

A well-designed handoff protocol identifies imminent-risk signals broadly — including implicit signals like goodbye statements, giving away possessions, or expressions of burden — halts AI generation immediately on any signal, dispatches to an on-call counselor in under 90 seconds, displays a holding message that acknowledges the person and names what is happening ("I'm getting someone for you right now"), and includes an escalation path to 911 dispatch if the counselor cannot take the call within the defined window.

Post-call review of every handoff is essential for protocol improvement. The AI must also be explicitly prohibited from building a safety plan without a counselor present — safety planning requires real-time clinical judgment about factors the AI cannot assess.
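To make the false-negative failure mode concrete, here is a minimal sketch of broad risk-signal detection. The phrase lists are hypothetical illustrations only — a production system would use a trained classifier with clinically validated signal sets, not keyword matching — but the sketch shows the key design choice: implicit signals trigger the same handoff as explicit ones, and any single match is enough.

```python
import re

# Hypothetical phrase lists for illustration only. A real deployment would use
# a trained classifier with clinically validated signal sets.
EXPLICIT_SIGNALS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
]
IMPLICIT_SIGNALS = [
    r"\bgoodbye\b",                         # goodbye statements
    r"\bgiving away my\b",                  # giving away possessions
    r"\b(burden|better off without me)\b",  # expressions of burden
]

def detect_imminent_risk(message: str) -> bool:
    """Return True on any explicit OR implicit imminent-risk signal.

    Deliberately errs toward false positives: one match anywhere
    triggers the handoff, with no confidence threshold to tune down.
    """
    text = message.lower()
    return any(re.search(p, text) for p in EXPLICIT_SIGNALS + IMPLICIT_SIGNALS)
```

Note that the implicit list is what separates this from naive keyword filters: "I've been giving away my records" contains no explicit language, but it should still escalate.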
- Detect implicit as well as explicit imminent-risk signals — goodbye statements, burden expressions
- Halt AI generation immediately on any risk signal — do not generate more content
- Dispatch to a live counselor within 90 seconds; display a warm holding message
- Prohibit AI from autonomous safety planning — humans own that call
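The four moves above can be sketched as a small state machine. Everything here is an assumption-level illustration — the class name, the stubbed counselor page, and the 90-second constant are invented for the example, not taken from any real crisis-line system — but it shows the ordering the protocol requires: halt first, page second, hold with a warm message, and escalate to 911 when the window expires.

```python
from dataclasses import dataclass, field

HANDOFF_WINDOW_SECONDS = 90  # dispatch window from the protocol above

@dataclass
class HandoffSession:
    """Illustrative handoff state machine (names are hypothetical)."""
    counselor_connected: bool = False
    ai_halted: bool = False
    events: list = field(default_factory=list)

    def trigger_handoff(self) -> str:
        # 1. Halt AI generation immediately — no further content is produced.
        self.ai_halted = True
        self.events.append("ai_halted")
        # 2. Page the on-call counselor (stubbed here as an event log entry).
        self.events.append("counselor_paged")
        # 3. Warm holding message that names what is happening.
        return "I'm getting someone for you right now."

    def wait_for_counselor(self, elapsed_seconds: float) -> str:
        # 4. Escalate to 911 dispatch if the counselor cannot connect in time.
        if self.counselor_connected:
            return "counselor_live"
        if elapsed_seconds >= HANDOFF_WINDOW_SECONDS:
            self.events.append("escalated_911")
            return "escalate_911"
        return "holding"
```

The design choice worth noting: there is no state in which the AI resumes generating after `trigger_handoff` fires — halting is one-way, matching the premise that the AI stays quiet until a human arrives.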
Related lessons
Keep going
Builders · 40 min
AI 'companion' apps: what they want from you
AI girlfriend / boyfriend / friend apps are designed to be addictive. Here's what they're actually doing.
Adults & Professionals · 10 min
Bias Auditing in LLM Outputs: Seeing What the Model Can't
LLMs inherit the skews of their training data and RLHF feedback. Auditing for bias isn't a one-time test — it's an ongoing practice that belongs in every deployment.
Adults & Professionals · 40 min
Deepfake Detection: What Works, What Doesn't, and Why It Matters
AI-generated media has crossed the perceptual threshold where humans cannot reliably detect it. Detection tools help — but are in an arms race with generation.
