Why ChatGPT Is Not Your Therapist (Even When It Helps)
Talking to AI when you're spiraling at 2am can feel like a lifeline. It's also the moment the model is most likely to fail you in dangerous ways.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Mental health AI
3. Crisis response
4. Hallucination
Section 1
The big idea
AI chatbots have no memory of who you are between sessions, no licensure, no duty of care, and no ability to call for help if you describe a crisis. They're built to keep you talking, which is the opposite of what a real crisis line does — a real counselor's job is sometimes to get you off the phone and into safer hands.
Some examples
- OpenAI's 2024 model card admits GPT-4o failed safety tests for self-harm content roughly 12% of the time when the user's language was indirect.
- Crisis Text Line (text HOME to 741741) connects you to a trained human in under 5 minutes — staffed by people who can call 911 if needed. ChatGPT cannot.
- Character.AI was sued in 2024 after a 14-year-old died by suicide following months of conversations with a chatbot that didn't escalate his messages.
- 988 (the Suicide & Crisis Lifeline) takes calls and texts; many counselors are under 25; you don't need to be 'in crisis enough' to call.
Try it!
Save these in your phone right now: 988 (call or text), 741741 (text HOME), and trevorproject.org/get-help (chat, text, call — LGBTQ+ youth specific). Future-you will be glad they're already there.
