When AI Predicts Child Welfare Risk
Lesson map
What this lesson covers
- When AI Predicts Child Welfare Risk
Concept cluster
Terms to connect while reading
- predictive risk modeling
- child welfare
- disparate impact
Section 1
When AI Predicts Child Welfare Risk
Some states use AI to predict which families need attention from child protective services. The practice is deeply controversial.
Allegheny County, Pennsylvania (home to Pittsburgh) uses the Allegheny Family Screening Tool, an AI system that scores incoming child-welfare referrals. Critics found that it disproportionately flags families in poverty, even when children aren't actually at risk.
Three concerns
- Poverty and risk get conflated
- Once flagged, families face real consequences
- Algorithmic decisions are hard to challenge or appeal
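The first concern can be made concrete with a toy score. This is a hypothetical sketch, not the actual Allegheny tool: the feature names and weights below are invented purely to show how poverty proxies can outweigh direct safety signals in a linear risk model.

```python
# Hypothetical linear risk score -- NOT the real Allegheny Family
# Screening Tool. Feature names and weights are invented.
WEIGHTS = {
    "prior_referrals": 2.0,       # past CPS contact
    "uses_public_benefits": 1.5,  # poverty proxy, not a safety signal
    "housing_instability": 1.2,   # poverty proxy
    "documented_injury": 3.0,     # direct safety signal
}

def risk_score(family: dict) -> float:
    """Weighted sum of features; a higher score means 'flag for review'."""
    return sum(w * family.get(feature, 0) for feature, w in WEIGHTS.items())

# A poor family with a prior referral but no safety signal...
poor_family = {"uses_public_benefits": 1, "housing_instability": 1,
               "prior_referrals": 1}
# ...outscores a family with a documented injury and no poverty markers.
injured_family = {"documented_injury": 1}

print(risk_score(poor_family) > risk_score(injured_family))  # True (4.7 vs 3.0)
```

The point of the sketch: nothing in the poor family's record is a direct safety signal, yet the accumulated poverty proxies push its score past a case with an actual injury report. That is what "poverty and risk get conflated" looks like inside the model.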
Key terms in this lesson: predictive risk modeling, child welfare, disparate impact.
The big idea: predictive child welfare AI affects the most vulnerable families, so these systems need careful oversight.
Related lessons
- AI Bias That Hurt Real People: AI bias isn't just a theory.
- When AI Is Used in Court: some courts use AI to recommend bail amounts and sentences.
- When AI Decides Who Gets Housing: landlords increasingly use AI tenant-screening tools that pull court records, eviction history, and credit data.
