When you use AI to do something, ask: who wins and who loses? It's a simple test that catches a lot.
Most AI choices have winners and losers. Asking 'who wins, who loses?' helps you spot when an AI use is unfair, even if you did not mean it to be.
Pick one AI use you are considering. Run the test. Notice if it changes your decision.
Some AI uses feel fair at first but become unfair when you look more carefully. Using AI to help you brainstorm a science project idea seems totally fine: no one loses. But what if everyone in your class uses AI to brainstorm, and some kids have better AI tools than others? Now the kids with the best tools might have an unfair advantage. That is a sneakier kind of unfairness. The fairness test works best when you also ask: what happens if EVERYONE does this? Would it still be fair? A philosopher named John Rawls called this idea the 'veil of ignorance': imagining you don't know which person you'd be in the situation. If you would be okay being anyone in the scenario, it's probably fair. If you'd only be okay if you were the winner, that's worth thinking harder about.
AI should treat everyone the same way. But sometimes AI gives worse answers about certain groups of people, certain languages, or certain skin tones. Adults are working to fix this. You can speak up if you see it.
Ask AI to draw 'a doctor', 'a teacher', and 'a scientist'. Are the people different? Talk about what you notice.
AI learns from data that humans created — and human data is full of patterns, assumptions, and stereotypes built up over a long time. When an AI is trained on decades of job postings where certain roles were mostly filled by men or certain roles were mostly filled by women, it starts to assume those patterns are correct. When a facial recognition AI is trained mostly on lighter-skinned faces, it becomes less accurate at recognizing darker-skinned faces. These aren't intentional choices — they're the AI absorbing the biases already present in the data. What makes this important is that AI is increasingly used in systems that make real decisions: who gets a loan, who gets flagged by a security camera, which neighborhoods get more resources. When those systems have bias baked in, the unfairness gets amplified at scale. You can't fix AI bias on your own, but noticing it is the first step. If you see AI giving answers that seem unfair to a group of people, that's worth naming out loud.
AI learns from stuff people wrote, and that stuff is not always fair. So AI might describe boys and girls differently, or talk about some jobs as if they are only 'for' certain people. That's called bias, and it's not okay.
Ask AI to describe 'a scientist' and 'a kindergarten teacher.' Did anything feel unfair or stereotyped? Ask AI to redo it more fairly.
Asking AI to help you get better at chess between games is great practice. Asking AI to tell you the right move during a real match against a friend is cheating — they don't have a robot whispering to them.
If you play games with friends, agree out loud whether AI help is allowed before you start. Stick to the deal.
Maybe your family pays for fancy AI apps. Maybe a classmate's family can't. That's not because they're not smart — it's because AI costs money or fast internet. Be kind, share, and never brag.
Think of one way you could share what you learned with AI without giving someone your account or password. Maybe a printed cheat sheet?
Winning a contest with secret AI help is not really winning.
Think of one skill you want to grow on your own — drawing, writing, or counting fast. Practice it for 5 minutes today, no AI.
AI sometimes makes unfair guesses about people, and that's not okay.
Notice one AI image or answer this week that seems unfair. Tell a grown-up about it.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-explorers-ethics-AI-and-fairness-test
What is the first question to ask in the fairness test for AI use?
Using AI to help you write a thank-you note for a teacher — who wins and who loses?
Using AI to create a fake photo making a classmate look silly — who wins and who loses?
What is the 'what if everyone did this?' question designed to find?
Using AI to fake positive product reviews on a competitor's website — how does the fairness test evaluate this?
The 'veil of ignorance' idea means:
Using AI to help plan a charity fundraiser — who wins and who loses?
You use AI to write a history essay for a classmate in exchange for money. The fairness test shows:
Your school has students with and without AI tool access. Which statement best reflects a fairness concern?
The fairness test asks you to pause and reconsider when:
Using AI to help you brainstorm ideas for a group project — is this fair?
Which of the following is the LEAST fair use of AI?
After running the fairness test, you realize an AI use is unfair. What should you do?
An AI use helps you a lot and causes no harm to any specific person. It still might be worth thinking harder about fairness if:
The fairness test is most useful as: