Children use AI more than any other group and have fewer legal protections. Here is what current laws cover, what they miss, and what is being debated.
AI products are designed by adults, tested mostly by adults, and governed by laws written when you could not do homework on a phone. The group that uses chatbots most — kids and teens — is the group whose interests are least represented in how these systems get built.
The Children's Online Privacy Protection Act (COPPA) regulates data collection but not AI outputs, so it does not directly address a chatbot giving a 10-year-old bad advice. Most laws depend on age verification, which in practice means a checkbox that says "I am 13." Almost no service does real age verification, and the ones that try rely on biometrics that raise privacy problems of their own.
| Feature | Default for adults | Default for kids (ideal) |
|---|---|---|
| Data retention | 30 days or opt-out | Minimum viable, session-only |
| Training on chats | Opt-out | Opt-in with parental approval |
| Persistent memory | On | Off |
| Mature content | Configurable | Off, unbypassable |
| Engagement design | Optimized | Plain |
| Crisis handling | Resource links | Resource links + human path |
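The table above reads like a product configuration, and it can be sketched as one. The following is a minimal sketch, not any real product's settings API: the `ChatDefaults` model, the field names, and the age-18 cutoff are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ChatDefaults:
    """Hypothetical privacy/safety defaults for a chat product."""
    data_retention_days: Optional[int]  # None = session-only retention
    train_on_chats: bool                # whether chats feed model training by default
    persistent_memory: bool
    mature_content: bool
    engagement_optimized: bool
    crisis_human_path: bool             # offer a human escalation path, not just links


# Adult column: retention with opt-out, training opt-out, memory on.
ADULT_DEFAULTS = ChatDefaults(
    data_retention_days=30,
    train_on_chats=True,
    persistent_memory=True,
    mature_content=True,        # configurable by the user
    engagement_optimized=True,
    crisis_human_path=False,    # resource links only
)

# Ideal-for-kids column: session-only data, training opt-in only,
# memory off, mature content off and unbypassable, plain design.
MINOR_DEFAULTS = ChatDefaults(
    data_retention_days=None,
    train_on_chats=False,       # opt-in requires parental approval
    persistent_memory=False,
    mature_content=False,
    engagement_optimized=False,
    crisis_human_path=True,     # resource links plus a human path
)


def defaults_for(age: int) -> ChatDefaults:
    """Pick the safer default set for users under 18 (assumed cutoff)."""
    return MINOR_DEFAULTS if age < 18 else ADULT_DEFAULTS
```

The design choice the table implies is that the kid-safe column is the *default*, not an opt-in mode: anything that loosens it (training on chats, mature content) should require an explicit, verified action rather than a checkbox.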
A real tension runs through this space. Parents want control. Teens want privacy. Both wants are legitimate. Kids fleeing abusive homes need private information channels. Parents of younger kids need visibility. No single policy covers both cases well, and designers are mostly avoiding the hard choice.
> "Children have rights online, not as a courtesy, but as a matter of law and dignity."
>
> — UN Committee on the Rights of the Child, General Comment 25
The big idea: kids use AI the most and shape its rules the least. Closing that gap is partly law, partly design, and partly youth showing up in the rooms where decisions get made.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-ethics-kids-rights-builders