AI and Foster Care Risk Scoring: Allegheny's Lessons Generalized
Predictive child-welfare scores embed historical bias; mandate appeal rights and a human final call before deployment.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The premise
- 2. Child welfare
- 3. Predictive scoring
- 4. Racial bias
Section 1
The premise
The Allegheny Family Screening Tool taught the field hard lessons about racial disparities in child-welfare AI. Yet newer tools are still under-tested for bias, and the agencies deploying them still over-trust the score.
What AI does well here
- Aggregate referral history into a single workload signal (a minimal sketch follows this list)
- Help screeners triage incoming hotline calls
- Track outcomes for retrospective audit
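To make "a single workload signal" concrete, here is a minimal Python sketch of referral-history aggregation. Every field name, weight, and decay rule is an invented illustration; it is not the Allegheny tool's actual feature set or model.

```python
# Hypothetical sketch: collapsing a family's referral history into one
# triage signal. Fields, weights, and the decay rule are invented for
# illustration and are NOT the Allegheny tool's actual model.
from dataclasses import dataclass

@dataclass
class Referral:
    substantiated: bool  # did the investigation substantiate the report?
    days_ago: int        # how long ago the hotline call came in

def workload_signal(history: list[Referral]) -> float:
    """Aggregate referrals into a single score for screener triage.

    Recent and substantiated referrals count more. The output ranks
    screener workload; it does not measure a family's actual risk.
    """
    score = 0.0
    for r in history:
        recency = max(0.0, 1.0 - r.days_ago / 365.0)  # linear decay over a year
        weight = 2.0 if r.substantiated else 1.0      # invented weighting
        score += weight * recency
    return score

if __name__ == "__main__":
    history = [Referral(substantiated=False, days_ago=30),
               Referral(substantiated=True, days_ago=200)]
    print(f"workload signal: {workload_signal(history):.2f}")
```

Note what the sketch makes visible: the signal counts referrals, not harm. In a system where some communities are reported far more often, the inputs themselves carry the bias forward.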
What AI cannot do
- Distinguish poverty signals from neglect signals
- Correct for over-reporting of Black and Indigenous families
- Operate ethically without independent demographic audits (see the audit sketch after this list)
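The last point is checkable in code. Below is a minimal sketch of one demographic audit: comparing screen-in rates across groups and reporting a disparity ratio. The group labels and example data are invented; a real audit would be run by an independent party on production decision logs.

```python
# Hypothetical sketch of a demographic audit: compare how often the
# score leads to a screen-in across groups. Group labels and example
# data are invented; a real audit runs independently on production logs.
from collections import defaultdict

def screen_in_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (demographic_group, was_screened_in) pairs."""
    total = defaultdict(int)
    screened = defaultdict(int)
    for group, screened_in in decisions:
        total[group] += 1
        screened[group] += int(screened_in)
    return {g: screened[g] / total[g] for g in total}

def disparity_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate over highest group rate; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    decisions = [("group_a", True), ("group_a", False), ("group_a", False),
                 ("group_b", True), ("group_b", True), ("group_b", False)]
    rates = screen_in_rates(decisions)
    print(rates)  # per-group screen-in rates
    print(f"disparity ratio: {disparity_ratio(rates):.2f}")
```

Even parity in this ratio would not prove fairness: if Black and Indigenous families are over-reported at the hotline, equal screen-in rates on skewed referrals still compound the skew. That is why the audit must be independent and ongoing, not a one-time check.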
Related lessons
Keep going
Adults & Professionals · 11 min
AI Automated-Decision Explanation Letters: Why Was I Denied?
AI can draft automated-decision explanation letters, but the underlying decision logic and appeal process must remain under human governance.
Adults & Professionals · 9 min
AI and Content Moderation Appeals: Drafting Defensible Responses
AI helps creators draft moderation appeals that cite policy precisely instead of pleading.
Adults & Professionals · 10 min
Bias Auditing in LLM Outputs: Seeing What the Model Can't
LLMs inherit the skews of their training data and RLHF feedback. Auditing for bias isn't a one-time test — it's an ongoing practice that belongs in every deployment.
