Lesson 26 of 1570
Kids, AI, and the Rights That Should Matter
Children use AI more than any other group, yet have the least legal protection. Here is what current laws cover, what they miss, and what is being debated.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The Age Gap in AI Policy
- 2. COPPA
- 3. Child safety
- 4. Age verification
Section 1
The Age Gap in AI Policy
AI products are designed by adults, tested mostly by adults, and governed by laws written when you could not do homework on a phone. The group that uses chatbots most — kids and teens — is the group whose interests are least represented in how these systems get built.
The laws that exist
- COPPA (US, 1998): protects data of kids under 13 — predates modern AI by 25 years
- GDPR-K (EU): stronger protections for under-16s, varies by country
- California AADC (2022): age-appropriate design code for online services
- UK Children's Code (2021): 15 standards for online services used by kids
- UN Convention on the Rights of the Child General Comment 25 (2021): applies children's rights to the digital environment
The gaps these laws leave
COPPA regulates data collection but not AI outputs, so it does not directly address a chatbot giving a 10-year-old bad advice. Most laws also depend on age verification, which in practice means a checkbox that says "I am 13." Almost no service does real age verification, and the ones that try rely on biometrics that raise privacy problems of their own.
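The checkbox model described above can be made concrete with a short sketch. This is an illustrative, hypothetical function (not any real platform's code) showing why self-attested age gates provide no protection: the service simply trusts whatever the user claims.

```python
def self_attested_age_gate(claimed_age: int, minimum_age: int = 13) -> bool:
    """Return True if the user *claims* to meet the age requirement.

    This is the checkbox model: no document check, no biometric,
    no parental confirmation -- just an unverified assertion.
    """
    return claimed_age >= minimum_age

# A 10-year-old who types "13" passes the gate; the system has no
# way to detect that the claim is false.
assert self_attested_age_gate(claimed_age=13) is True
assert self_attested_age_gate(claimed_age=10) is False
```

The gate's only input is the claim itself, which is exactly the gap regulators are debating: stronger verification exists, but every known alternative (ID upload, face-scan age estimation) trades away privacy to get it.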
What researchers say kids actually need
- 1. Age-appropriate defaults, not age-appropriate toggles
- 2. Data minimization: collect as little as possible, delete it soon
- 3. Clear, kid-readable explanations of what the AI can and cannot do
- 4. No dark patterns that push prolonged engagement
- 5. Meaningful refusal: the AI should refuse sensitive topics firmly and offer real help, not just disclaim
- 6. Human oversight paths that do not require a credit card
Compare: adult vs. kid-appropriate defaults
Compare the options
| Feature | Default for adults | Default for kids (ideal) |
|---|---|---|
| Data retention | 30 days or opt-out | Minimum viable, session-only |
| Training on chats | Opt-out | Opt-in with parental approval |
| Persistent memory | On | Off |
| Mature content | Configurable | Off, unbypassable |
| Engagement design | Optimized | Plain |
| Crisis handling | Resource links | Resource links + human path |
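The table above amounts to two default profiles selected by age. A minimal sketch of that idea follows; the field names and the under-18 cutoff are hypothetical illustrations, not any vendor's actual settings API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatDefaults:
    """Illustrative settings profile; field names are hypothetical."""
    data_retention_days: int   # 0 = session-only
    train_on_chats: bool       # adults: on by default with opt-out
    persistent_memory: bool
    mature_content: bool       # adults: configurable; kids: off, unbypassable
    engagement_optimized: bool
    crisis_human_path: bool    # resource links plus a route to a human

ADULT_DEFAULTS = ChatDefaults(
    data_retention_days=30, train_on_chats=True, persistent_memory=True,
    mature_content=True, engagement_optimized=True, crisis_human_path=False,
)

KID_DEFAULTS = ChatDefaults(
    data_retention_days=0, train_on_chats=False, persistent_memory=False,
    mature_content=False, engagement_optimized=False, crisis_human_path=True,
)

def defaults_for_age(age: int) -> ChatDefaults:
    # "Age-appropriate defaults, not toggles": the safe profile is what
    # a minor gets automatically, not an option buried in settings.
    return KID_DEFAULTS if age < 18 else ADULT_DEFAULTS
```

Note the direction of the design choice: the kid profile is the starting state, and loosening it (say, opting into training data) would require verified parental approval rather than a child flipping a switch.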
The hidden question: who decides?
A real tension runs through this space. Parents want control; teens want privacy; both demands are legitimate. Kids fleeing abusive homes need private channels for information, while parents of younger kids need visibility. No single policy covers both cases well, and designers have mostly been avoiding the hard choice.
“Children have rights online, not as a courtesy, but as a matter of law and dignity.”
The big idea: kids use AI the most and shape its rules the least. Closing that gap is partly law, partly design, and partly youth showing up in the rooms where decisions get made.
Related lessons
Keep going
Builders · 30 min
Misinformation at Industrial Scale
Before AI, lies took time to make. Now they take seconds and come in infinite variations. Here is how the information ecosystem is changing.
Builders · 24 min
Federal Procurement and AI
The US government is the largest single buyer of software in the world. What it buys and what it refuses to buy shapes the whole industry. That includes AI.
Builders · 25 min
Japan's Soft-Law AI Framework
Japan chose light-touch, guideline-based AI governance built on existing laws. Understanding why illuminates a real alternative to comprehensive AI acts.
