Bias in AI
Dataset origins and why outputs can be unfair.
AI learns from human-made data. Human-made data is full of human biases — things we got wrong, things that were unfair. When the AI learns from that data, it learns the biases too.
Famous examples
- An early resume-screening AI at a big tech company learned to rate resumes with women’s names lower — because it had been trained on historical hires that skewed male.
- Early image-generation models overwhelmingly drew “a doctor” as a white man and “a nurse” as a white woman. They had learned those associations from photo captions online.
- Speech-recognition models have historically been worse at understanding Black American English — because the training audio didn’t represent it.
Three kinds of bias
- Representation bias. The training data doesn’t include everyone equally.
- Measurement bias. The labels humans added are themselves unfair.
- Aggregation bias. The AI learns one rule for “everyone” when different groups need different rules.
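Representation bias, in particular, is easy to spot if you just count. Here is a minimal sketch using a made-up toy dataset (the records and group labels are invented for illustration, not from any real system):

```python
from collections import Counter

# Toy training records as (item, group) pairs. Entirely invented for illustration.
records = [
    ("resume A", "men"), ("resume B", "men"), ("resume C", "men"),
    ("resume D", "men"), ("resume E", "women"),
]

# Count how often each group appears in the training data.
counts = Counter(group for _, group in records)
total = len(records)

for group, n in counts.items():
    print(f"{group}: {n}/{total} ({n / total:.0%})")
# men: 4/5 (80%)
# women: 1/5 (20%)
```

A model trained on this data sees four examples of one group for every one of the other, so whatever it learns will be shaped mostly by the majority group.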
What companies do about it
Major AI companies now test their models for bias before release. They hire red-teamers — people whose job is to try to get the AI to behave unfairly — and fix what’s found. They’re not perfect, but the problem is visible now in a way it wasn’t five years ago.
What you can do
When you use an AI to make decisions about real people, stop and ask: what’s the bias hiding in its training data? Would this answer change if the person it’s about were a different gender, race, age, or language? If you don’t know, don’t trust the answer.
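That “would the answer change?” question can be turned into a simple counterfactual test: change only one detail about the person and compare the outputs. This sketch uses a hypothetical `score_candidate` function as a stand-in for whatever model you are actually using:

```python
# Counterfactual check: swap a single demographic detail and compare outputs.
# `score_candidate` is a hypothetical placeholder, not a real API. A fair scorer
# should ignore the name entirely, as this toy version does.

def score_candidate(name: str, years_experience: int) -> float:
    # Placeholder logic: score depends only on experience, capped at 1.0.
    return min(1.0, years_experience / 10)

original = score_candidate("James", 7)
swapped = score_candidate("Maria", 7)  # same résumé, different name

# If changing only the name changes the score, that's a red flag.
if abs(original - swapped) > 1e-6:
    print("bias suspected")
else:
    print("no difference detected")
```

In a real audit you would run many such swapped pairs (names, ages, dialects) and look at the differences in aggregate, since a single pair can pass by luck.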