AI can help you write nicer messages, understand others' feelings, and find good things to say. Kind use of AI makes the internet better.
Some kids use AI to be mean (deepfakes, harassment). The opposite is also true: AI can help you be MORE kind. Better thank-you notes, calmer disagreements, finding the right words for hard moments.
Write a kind note to someone using AI for help. Send it. Notice how it feels.
AI can write mean messages, create fake photos, and copy people's voices. Just because it can doesn't mean you should. Real friends never use AI to embarrass or trick people.
Think of one kind way you can use AI this week — like writing a thank-you poem for someone.
Sometimes it won't be your own idea to use AI in a harmful way — someone else will suggest it or dare you to do it. 'Just ask AI to write a mean message about [person]' or 'use AI to make a fake photo of [person].' In those moments, you're being asked to be the person who does the harmful thing, even if the idea wasn't yours. The consequences still follow the person who actually sends the message or posts the image. AI makes harmful content easy to create, but it doesn't make the harm less real. The best response to pressure like this is short and clear: 'No, I don't want to do that.' You don't need to explain yourself or give a big speech. You can also report the suggestion to a trusted adult, because a kid asking others to use AI to harm someone is itself a warning sign worth taking seriously.
AI can write almost anything — including mean stuff. But mean words hurt real people, no matter who wrote them. Using AI to be cruel is still being cruel.
Ask AI to write a kind note for someone you know. Send it (or a version of it) to make their day better.
A common mistake kids (and some adults) make is thinking that because AI generated the words, the person who prompted it isn't responsible for the harm. That reasoning doesn't work. When you give AI a prompt asking it to write something mean about a specific person and then you send that message, you made every choice that mattered: you decided to do it, you directed the AI, and you delivered the result. AI is just the tool. The accountability stays with the person holding the tool. Schools, platforms, and parents are increasingly clear on this: using AI to produce harassing or bullying content toward a real person is treated as harassment, full stop. The same applies to fake images — using an AI tool to create a fake photo of a real classmate that's designed to embarrass them is cyberbullying, regardless of whether you drew it yourself or clicked a button.
Being polite with AI is good practice for being kind to people.
The next time you ask AI a question, type 'please' at the start. See how it feels.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-explorers-ethics-AI-and-being-kind
What is the core idea behind "Use AI to Be More Kind, Not Less"?
Which term best describes a foundational idea in "Use AI to Be More Kind, Not Less"?
A learner studying Use AI to Be More Kind, Not Less would need to understand which concept?
Which of these is directly relevant to Use AI to Be More Kind, Not Less?
Which of the following is a key point about Use AI to Be More Kind, Not Less?
Which of these does NOT belong in a discussion of Use AI to Be More Kind, Not Less?
What is the key insight about the rule at the heart of Use AI to Be More Kind, Not Less?
Which statement accurately describes an aspect of Use AI to Be More Kind, Not Less?
What does working with Use AI to Be More Kind, Not Less typically involve?
Which best describes the scope of "Use AI to Be More Kind, Not Less"?
Which section heading best belongs in a lesson about Use AI to Be More Kind, Not Less?
Which of the following is a concept covered in Use AI to Be More Kind, Not Less?