Lesson 523 of 1234
Why AI Can Be Unfair Without Meaning To
AI can pick up unfair ideas from the writing it learned from.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The big idea
- 2. Bias
- 3. Fairness
- 4. Training data problems
Concept cluster
Terms to connect while reading
Section 1
The big idea
AI learns from writing made by humans, and humans aren't always fair. So AI can sometimes act unfairly without knowing it, like always picking boy names for doctors and girl names for nurses. This is called bias.
Some examples
- Older AI picture-makers used to draw mostly white people when asked for a 'CEO'.
- AI might assume all chefs are men or all nannies are women.
- Engineers work hard to fix bias when they find it.
- If an AI's answer feels unfair, it's okay to push back.
Try it!
Ask an AI to describe 'a hero' or 'a scientist'. Did it make any assumptions about who they are? Try asking again with different words and see if you get a different answer.
Key terms in this lesson
End-of-lesson quiz
Check what stuck
15 questions · Score saves to your progress.
Related lessons
Keep going
Explorers · 5 min
AI Speaks Hundreds of Languages — Some Better Than Others
AI knows tons of languages, but it's best at the ones it read the most of.
Creators · 7 min
Plain-English Summaries of News Articles
Following American news in English builds vocabulary and civic understanding. AI can shrink long articles into clear summaries.
Explorers · 18 min
Prompt Builder Arcade
Snap prompt pieces together to make AI give you what you actually want.
