How AI can sometimes be unfair — and what to do.
AI learns from huge piles of info from people. If that info is unfair, AI can be unfair too. Knowing this helps you spot it.
Ask an AI to draw 'a doctor'. Look at who shows up. Talk about it with a grown-up.
If your class voted on a pizza topping but only asked five of the thirty students, that vote wouldn't be fair — it wouldn't represent everyone. AI can have the same problem. It learns from huge piles of writing and images, but if those piles mostly included one kind of person or one kind of story, the AI might act like that's 'normal' and leave others out. This is called bias.

You might notice bias when AI only shows doctors as men, or when voice recognition works better for some accents than others, or when AI makes art that ignores whole groups of people.

Bias in AI can feel small, but it adds up. When some people are left out or shown unfairly over and over, it can affect how they see themselves and how others see them. Noticing unfairness in AI — and saying so — is something YOU can do, even as a kid.
AI is not always right about people. It might call your friend's drawing 'bad' or your sister's voice 'weird'. Don't believe it — your real opinion of the people you know matters way more than what an AI says.
Think of one nice thing about a friend that AI couldn't possibly know. Tell them!
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-explorers-ethics-safety-AI-and-being-fair
What does the word 'bias' mean when talking about AI?
An AI always guesses certain jobs for boys and different jobs for girls based on their names. What is this an example of?
A student asks an AI to draw 'a doctor' and all the pictures show people with light skin. What might this tell us?
What makes an AI 'unfair'?
Can AI be unfair even if the people who made it did not WANT it to be unfair?
What is the main idea of this lesson about AI and fairness?
What is 'representation' in the context of AI fairness?
How can unfair AI affect real people?
If you notice something AI does seems unfair, what should you do?
A voice recognition AI works perfectly with some kids' accents but poorly with others'. This is an example of:
Why does the lesson say AI might 'leave out languages'?
Imagine AI is used to help a school decide which students get extra tutoring. If the AI has bias, what might happen?
Being a 'fairness fighter' when it comes to AI means:
True or false: If AI seems fair to you and your friends, it must be fair to everyone.
Which scenario is the BEST example of spotting AI bias?