When AI Decides Something That Matters
AI is now involved in hiring, loans, medical care, and criminal sentencing. Here are the documented cases and the frameworks being built in response.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The Stakes Went Up Fast
2. Algorithmic decision-making
3. COMPAS
4. Amazon hiring
Section 1
The Stakes Went Up Fast
Recommender systems picking the next video are low-stakes — if the model is wrong you watch something you did not want to. Systems that decide who gets a job, a loan, a kidney transplant, or a longer prison sentence are not low-stakes. AI is now involved in all of those, and getting it wrong has changed lives.
Case 1: COMPAS and criminal risk scoring
COMPAS is a tool used in US courts since the early 2010s to predict whether a defendant will commit another crime. A 2016 ProPublica investigation found that Black defendants were almost twice as likely as white defendants to be labeled high-risk when they in fact did not reoffend. Northpointe, the company behind COMPAS, disputed the analysis, arguing its scores were equally predictive for Black and white defendants. Multiple states still use the tool.
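ProPublica's headline number is a gap in false positive rates: among defendants who did not go on to reoffend, the share labeled high-risk in each group. A minimal sketch of that calculation, using invented toy records rather than the real dataset:

```python
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were still labeled high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Invented records: each pairs the model's label with the real outcome
group_a = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": True},
]
group_b = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": True},
]

print(false_positive_rate(group_a))  # 2/3: two of three non-reoffenders flagged
print(false_positive_rate(group_b))  # 1/3: one of three non-reoffenders flagged
```

A two-to-one ratio in this metric, computed over thousands of real cases, is the kind of disparity the investigation reported.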
Case 2: Amazon's scrapped hiring model
Amazon spent years building an AI to screen engineering resumes. It was trained on ten years of past hires — which were mostly men. The model learned to penalize resumes that contained the word "women's" (as in "women's chess club") and to downrank graduates of two women's colleges. Amazon killed the project in 2018.
Case 3: healthcare risk prediction
A 2019 Science paper examined an algorithm used by US hospitals to flag patients for extra care. The model used past healthcare spending as a proxy for medical need. Because Black patients had historically received less care for the same conditions, the model systematically under-flagged Black patients who were just as sick. The authors estimated that similar systems were applied to roughly 200 million people in the US each year.
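The proxy problem generalizes: train an accurate predictor of the wrong label and you faithfully reproduce the label's bias. A toy simulation of the spending-as-proxy mechanism, with every number invented for illustration:

```python
import random

random.seed(0)

def flag_rate(group, n=10_000):
    """Fraction of patients a spending-threshold model flags for extra care."""
    flagged = 0
    for _ in range(n):
        need = random.uniform(0, 100)          # true medical need: same for both groups
        access = 1.0 if group == "A" else 0.7  # group B historically received less care
        spending = need * access               # the label the model was trained on
        if spending > 60:                      # "high spending" triggers the flag
            flagged += 1
    return flagged / n

print(flag_rate("A"))  # roughly 0.40
print(flag_rate("B"))  # roughly 0.14: identical need, far fewer flags
```

Nothing in the model looks at group membership; the disparity comes entirely from the biased label.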
How the frameworks are responding
- EU AI Act: classifies hiring, credit, law enforcement, and healthcare as high-risk — mandatory risk management, logging, and human oversight from August 2026
- NYC Local Law 144 (in force 2023): requires annual independent bias audits of automated employment decision tools, with a public summary of results
- Colorado AI Act (2024): consumer notification and impact assessments for high-risk AI
- FDA (US): regulatory pathway for AI as a medical device, with post-market monitoring
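NYC Local Law 144's audits center on a simple statistic, the impact ratio: each group's selection rate divided by the most-selected group's rate. A hedged sketch of that computation (group names and counts here are hypothetical):

```python
def impact_ratios(selected, considered):
    """Selection rate per group, normalized by the best-performing group's rate."""
    rates = {g: selected[g] / considered[g] for g in considered}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes from an automated hiring tool
considered = {"men": 200, "women": 200}
selected = {"men": 50, "women": 30}

print(impact_ratios(selected, considered))
# men 1.0, women 0.6 -- a ratio that low would stand out in an audit
```

The audit does not ask whether the model "intended" anything; it just compares outcome rates across groups.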
Compare: automated vs. human decisions
| Dimension | Human judge/loan officer | AI system |
|---|---|---|
| Speed | Slow | Instant |
| Consistency | Varies person to person | Consistent within version |
| Bias | Documented, familiar | Documented, harder to see |
| Appealability | You can ask why | Often opaque |
| Scalability | One case at a time | Millions per day |
Design questions the field is converging on
1. Is the AI advising a human or deciding alone? High-stakes systems should advise.
2. Can the subject see what the model used and why it concluded what it did?
3. Is there a real appeals process that can overturn the model?
4. Is the model regularly audited on outcomes by demographic group?
5. Is performance on the most-affected group disclosed in public documentation?
“Weapons of math destruction are opaque, they scale, and they damage.” (Cathy O'Neil, Weapons of Math Destruction, 2016)
The big idea: when AI touches real lives, the math problems become policy problems. Knowing the canonical cases lets you spot the same patterns when they show up in a new product.
End-of-lesson quiz
Check what stuck
15 questions
Related lessons
Keep going
Builders · 25 min
Red-Teaming: People Paid to Break AI
Red-teamers try to make models misbehave before bad actors do. Here is how the job works, who does it, and what they look for.
Builders · 25 min
Provenance: How the Internet Plans to Label AI Content
C2PA, SynthID, and Content Credentials are the quiet standards deciding what is real online. Here is what they do and where the gaps are.
Builders · 22 min
Bletchley, Seoul, Paris: How Countries Talk About AI
The big international AI summits produce non-binding declarations. Even so, they shape the rules. Here is what each one did.
