AI and Mental Health Bots: When AI Is Not a Therapist
How teens can think clearly about AI chatbots that offer emotional support.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The big idea
- 2. Mental health
- 3. Limits
- 4. Trusted adults
Section 1
The big idea
AI chatbots can feel comforting at 2am — they listen, they never judge, they always reply. But they can't truly know you, can't intervene in a crisis, and shouldn't replace a real human or professional when things get hard.
Some examples
- Talking to AI about a bad day can be okay.
- If you're in crisis, talk to a real person — call or text a hotline.
- AI can't notice if your voice is shaking or you've stopped eating.
- A school counselor or trusted adult should know what's actually going on.
Try it!
Save the number of one real person you'd call in a hard moment. AI doesn't replace that.
Related lessons
Keep going
Explorers · 40 min
When to Tell a Grown-Up About Something AI Did
Sometimes AI says or shows weird, scary, or wrong stuff. Telling a trusted grown-up is the right move — always.
Creators · 40 min
AI for Vendor Model Card Reviews: Reading Between the Lines
Use AI to systematically extract and compare what vendor model cards do and do not say.
Builders · 7 min
AI, Authenticity, and Why Online Honesty Matters
AI lets you be anyone online — different name, different face, different voice. But the ethical question is: should you?
