Telling a Parent You've Been Talking to a Chatbot About Hard Stuff
Lots of teens use AI as their first stop for anxiety, depression, or relationship pain. Telling a parent you've been doing this is hard. Doing it well matters.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Disclosure
3. Mental health
4. Therapist referral
Section 1
The big idea
Many teens use ChatGPT, Character.AI, or Pi to vent about mental health because it's available at 2am, doesn't judge, and doesn't tell parents. None of that is actual care. Telling a parent you've been doing this — and asking for real help — is the move that opens the door to a therapist, not the move that gets you grounded.
Some examples
- The framing that works: 'I've been talking to ChatGPT about [specific thing] because I didn't know how to start the conversation with you. Can we talk about getting me a therapist?'
- Many parents respond to that by looking for a therapist, not by panicking. The 'I have a problem and need help' framing reads as maturity.
- If you're not ready to name the specific thing, even 'I want to start seeing a therapist' is a complete sentence. You don't have to justify it.
- If your insurance is through a parent and you want privacy, ask about a sliding-scale community therapist — many take cash, no parent involvement.
Try it!
If any of this resonates: write down one sentence that names the thing. Say it out loud to yourself once. That sentence is the hardest part. After that, the conversation has somewhere to go.
Related lessons
Keep going
Builders · 8 min
Bark, Aura, and the Tradeoff Between Trust and Safety
Parental monitoring software now uses AI to flag 'concerning' messages. The benefits are real. The costs, to trust and especially to LGBTQ+ kids, are also real.
Builders · 22 min
Explaining AI to Parents Who Think It's Just ChatGPT
Most parents have a five-year-out-of-date picture of AI. Updating them helps them parent better and trust you more.
Builders · 40 min
When AI Is the Wrong Helper for the Real Stuff
There are some conversations AI can't replace — even though it's tempting to ask the bot first.
