The Fairness Test for AI: Who Wins, Who Loses
When you use AI to do something, ask: who wins and who loses? It's a simple test that catches a lot.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The big idea
- 2. AI Should Be Fair to Everyone
- 3. AI and Being Fair: How AI Can Be Unfair Without Knowing
- 4. AI and fairness in games: when is using AI cheating?
- 5. AI and the kids who don't have it
- 6. AI and being fair when you win with AI help
- 7. AI and knowing AI can be wrong about people
Section 1
The big idea
Most AI choices have winners and losers. Asking "who wins, who loses?" helps you spot when AI use is unfair, even if you did not mean it to be.
Some examples
- AI homework helper: you win, nobody loses. ✓
- AI to make a deepfake of a classmate: you win, they lose. ✗
- AI for a fundraiser: you win, charity wins. ✓
- AI to fake reviews on a competitor: you win, they lose unfairly. ✗
Try it!
Pick one AI use you are considering. Run the test. Notice if it changes your decision.
When the fairness test gets tricky
Some AI uses feel fair at first but become unfair when you look more carefully. Using AI to help you brainstorm a science project idea seems totally fine: no one loses. But what if everyone in your class uses AI to brainstorm, and some kids have better AI tools than others? Now the kids with the best tools might have an unfair advantage. That is a sneakier kind of unfairness.

The fairness test works best when you also ask: what happens if EVERYONE does this? Would it still be fair? The philosopher John Rawls called this kind of thinking the "veil of ignorance": imagine you don't know which person you'd be in the situation. If you would be okay being anyone in the scenario, it's probably fair. If you'd only be okay if you were the winner, that's worth thinking harder about.
- Ask 'who wins, who loses?' as the first step
- Ask 'what if everyone did this?' as the second step
- Notice when fairness depends on having better tools or more access
- Talk to a trusted adult if you're unsure whether something is fair
Section 2
AI Should Be Fair to Everyone
Section 3
The big idea
AI should treat everyone the same way. But sometimes AI gives worse answers about certain groups, languages, or skin colors. Adults work to fix this. You can speak up if you see it.
Some examples
- AI might miss faces with some skin tones.
- AI might know less about smaller languages.
- AI may show only one kind of doctor or scientist.
- Tell a grown-up if AI seems unfair.
Try it!
Ask AI to draw 'a doctor', 'a teacher', and 'a scientist'. Are the people different? Talk about what you notice.
How AI bias gets into systems and what it looks like
AI learns from data that humans created, and human data is full of patterns, assumptions, and stereotypes built up over a long time. When an AI is trained on decades of job postings where certain roles were mostly filled by men or certain roles were mostly filled by women, it starts to assume those patterns are correct. When a facial recognition AI is trained mostly on lighter-skinned faces, it becomes less accurate at recognizing darker-skinned faces. These aren't intentional choices; they're the AI absorbing the biases already present in the data.

What makes this important is that AI is increasingly used in systems that make real decisions: who gets a loan, who gets flagged by a security camera, which neighborhoods get more resources. When those systems have bias baked in, the unfairness gets amplified at scale. You can't fix AI bias on your own, but noticing it is the first step. If you see AI giving answers that seem unfair to a group of people, that's worth naming out loud.
- AI bias comes from the training data, not from the AI having bad intentions
- Facial recognition and language AI both have documented bias issues across skin tone and language groups
- Bias in AI compounds unfairness when it's used in decisions about loans, hiring, or policing
- Noticing bias and naming it is a real and meaningful action
Section 4
AI and Being Fair: How AI Can Be Unfair Without Knowing
Section 5
The big idea
AI learns from stuff people wrote, and that stuff is not always fair. So AI might describe boys and girls differently, or some jobs as 'for' certain people. That's called bias, and it's not okay.
Some examples
- AI might draw doctors as one gender and nurses as another.
- AI might use 'old-fashioned' ideas about who does what.
- Notice when AI's answer feels unfair.
- Ask AI to try again with a fairer view.
Try it!
Ask AI to describe 'a scientist' and 'a kindergarten teacher.' Did anything feel unfair or stereotyped? Ask AI to redo it more fairly.
Section 6
AI and fairness in games: when is using AI cheating?
Section 7
The big idea
Asking AI to help you get better at chess between games is great practice. Asking AI to tell you the right move during a real match against a friend is cheating — they don't have a robot whispering to them.
Some examples
- AI study sessions before a tournament: fair
- Using AI hints during a real game: unfair
- AI explaining a board game's rules: fair
- AI playing FOR you in an online tournament: unfair
Try it!
If you play games with friends, agree out loud whether AI help is allowed before you start. Stick to the deal.
Section 8
AI and the kids who don't have it
Section 9
The big idea
Maybe your family pays for fancy AI apps. Maybe a classmate's family can't. That's not because they're not smart — it's because AI costs money or fast internet. Be kind, share, and never brag.
Some examples
- Don't tease classmates without AI access
- Share what AI helped you learn (not just the answer)
- Suggest free AI options to a friend
- Notice when school AI helps level the playing field
Try it!
Think of one way you could share what you learned with AI without giving someone your account or password. Maybe a printed cheat sheet?
Section 10
AI and being fair when you win with AI help
Section 11
The big idea
Winning a contest with secret AI help is not really winning.
Some examples
- A drawing contest is for the artists, not the apps
- If the rules say 'no AI', then no AI
- Sharing AI prompts after a contest is good sportsmanship
- Real practice grows your skill; AI shortcuts do not
Try it!
Think of one skill you want to grow on your own — drawing, writing, or counting fast. Practice it for 5 minutes today, no AI.
Section 12
AI and knowing AI can be wrong about people
Section 13
The big idea
AI sometimes makes unfair guesses about people, and a guess is not the same as knowing someone.
Some examples
- AI might guess jobs by how someone looks
- AI can mix up names from different cultures
- Real people are way more than guesses
- Tell a grown-up if AI says something unfair
Try it!
Notice one AI image or answer this week that seems unfair. Tell a grown-up about it.
End-of-lesson quiz
Check what stuck
15 questions.
Related lessons
Keep going
Explorers · 40 min
AI Is Sometimes Unfair
AI learned from things humans wrote and pictures humans made.
Explorers · 40 min
Use AI to Be More Kind, Not Less
AI can help you write nicer messages, understand others' feelings, and find good things to say. Kind use of AI makes the internet better.
Explorers · 40 min
Stay Curious About People (Not Just AI)
AI is interesting. People are way more interesting. Stay curious about real people in your life.
