AI is now involved in hiring, loans, medical care, and criminal sentencing. Here are the documented cases and the frameworks being built in response.
Recommender systems that pick your next video are low-stakes: if the model is wrong, you watch something you did not want to watch. Systems that decide who gets a job, a loan, a kidney transplant, or a longer prison sentence are not. AI is now involved in all of those decisions, and getting it wrong has changed lives.
COMPAS is a tool used in US courts since the early 2010s to predict whether a defendant will commit another crime. A 2016 ProPublica investigation found that Black defendants were almost twice as likely as white defendants to be labeled high-risk when they in fact did not reoffend. The company disputed the analysis. Multiple states still use the tool.
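The disparity ProPublica measured is a gap in false positive rates: among people who never went on to reoffend, how many in each group were labeled high-risk anyway. A minimal sketch of that kind of audit, on invented data (the group names, records, and resulting rates below are all hypothetical), might look like this:

```python
# Compare false positive rates across groups. A "false positive" here is
# someone labeled high-risk who did not in fact reoffend.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- made up.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", True,  True),  ("B", False, True),
]

false_positives = defaultdict(int)  # labeled high-risk, did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
```

On this toy data the two groups end up with false positive rates of 0.67 and 0.33: the same kind of two-to-one gap ProPublica reported, visible only when errors are broken out by group rather than averaged overall.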
Amazon spent years building an AI to screen engineering resumes. It was trained on ten years of resumes submitted to the company, most of them from men. The model learned to penalize resumes containing the word "women's" (as in "women's chess club") and to downrank graduates of two all-women's colleges. Amazon killed the project in 2018.
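One way this kind of skew can surface before deployment is to audit a screener's learned weights for group-identifying tokens. A sketch under obvious assumptions (the resumes, labels, and simple bag-of-words model below are invented for illustration; Amazon's actual system was more complex):

```python
# Train a toy bag-of-words screener on biased historical outcomes, then
# inspect the learned weights. All resumes and labels here are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, java developer",
    "java developer, hackathon winner",
    "captain of womens chess club, java developer",
    "womens college graduate, java developer",
]
hired = [1, 1, 0, 0]  # historical decisions, not ground truth of skill

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# A negative weight on a group-identifying token is a red flag: the model
# has encoded the historical bias, not a job-relevant signal.
for token, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{token:12s} {weight:+.2f}")
```

Here the token "womens" gets a negative weight purely because it co-occurs with the rejected resumes, which is exactly the failure mode Amazon hit at scale.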
A 2019 Science paper examined an algorithm used by US hospitals to flag patients for extra care. The model used past healthcare spending as a proxy for medical need. Because Black patients had historically received less care for the same conditions, the model systematically under-flagged Black patients. Similar systems were estimated to be applied to roughly 200 million people in the US each year.
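The failure is mechanical and easy to reproduce. A toy sketch (all distributions and numbers below are invented): give two groups identical underlying need, give one group historically lower spending for the same need, then flag the top spenders:

```python
# Proxy-label demo: identical need, unequal historical spending, and a
# flagging rule that only sees spending. Every number here is made up.
import random

random.seed(0)

def patient(group):
    need = random.gauss(50, 10)                   # true medical need (unobserved)
    spend_factor = 1.0 if group == "A" else 0.6   # historical under-treatment
    spending = need * spend_factor + random.gauss(0, 5)
    return group, need, spending

patients = [patient("A") for _ in range(1000)] + [patient("B") for _ in range(1000)]

# The "algorithm": flag the top 20% of spenders for extra care.
threshold = sorted(p[2] for p in patients)[-400]
flagged = [p for p in patients if p[2] >= threshold]

for g in ("A", "B"):
    share = sum(1 for p in flagged if p[0] == g) / len(flagged)
    avg_need = sum(p[1] for p in patients if p[0] == g) / 1000
    print(f"group {g}: avg need {avg_need:.1f}, share of flags {share:.0%}")
```

Both groups have the same average need, yet nearly every flag goes to group A. The optimizer did its job; the proxy label was the problem.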
| Dimension | Human judge/loan officer | AI system |
|---|---|---|
| Speed | Slow | Instant |
| Consistency | Varies person to person | Consistent within version |
| Bias | Documented, familiar | Documented, harder to see |
| Appealability | You can ask why | Often opaque |
| Scalability | One case at a time | Millions per day |
> Weapons of math destruction are opaque, they scale, and they damage.
>
> — Cathy O'Neil
The big idea: when AI touches real lives, the math problems become policy problems. Knowing the canonical cases lets you spot the same patterns when they show up in a new product.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-ethics-high-stakes-decisions-builders