AI Recommendation Systems: When Engagement Optimization Harms Users
Recommendation AI optimized for engagement can promote harmful content. Designing systems that resist this requires deliberate trade-offs.
Lesson map
The main moves, in order:
1. The premise
2. Recommendation AI
3. Engagement optimization
4. Harm reduction
Section 1: The premise
Pure engagement optimization in recommendation AI predictably promotes harmful content; better systems require explicit harm-reduction trade-offs.
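To see why the premise holds, consider a minimal sketch with entirely hypothetical items and scores: a ranker whose only objective is predicted engagement will put harmful content on top whenever that content engages well, because nothing in the objective distinguishes harm from value.

```python
# Hypothetical catalog: each item has a predicted-engagement score and a
# harm flag the ranker never looks at.
items = [
    {"id": "cooking-tips",   "engagement": 0.41, "harmful": False},
    {"id": "extremist-rant", "engagement": 0.87, "harmful": True},
    {"id": "local-news",     "engagement": 0.55, "harmful": False},
]

# Engagement-only objective: sort by predicted engagement, nothing else.
feed = sorted(items, key=lambda it: it["engagement"], reverse=True)

# The harmful item takes the top slot simply because it engages best.
print([it["id"] for it in feed])  # → ['extremist-rant', 'local-news', 'cooking-tips']
```

The point is not that the ranker is malicious; it is that "maximize engagement" is silent about harm, so harm rides along for free.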
What AI does well here
- Define harm categories explicitly (self-harm content, eating disorders, extremism) and de-promote them
- Trade off engagement for user wellbeing on identified harm vectors
- Maintain human review of edge-case content rather than pure-AI moderation
- Surface metrics tracking both engagement AND user wellbeing
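The four moves above can be sketched together in one re-ranking pass. Everything here is a hypothetical illustration, not a production design: the harm categories, the penalty weights, and the review band are all assumed values, and a real system would calibrate them empirically.

```python
# Explicit harm categories with de-promotion weights (assumed values).
HARM_PENALTY = {"self_harm": 1.0, "eating_disorder": 1.0, "extremism": 1.0}
# Harm-classifier scores in this band are too uncertain for pure-AI
# moderation and are routed to human review instead.
REVIEW_BAND = (0.4, 0.7)

def rank(items, review_queue):
    """Score = engagement minus harm penalty; ambiguous items go to humans."""
    scored = []
    for it in items:
        penalty = 0.0
        for category, p_harm in it["harm_scores"].items():
            if REVIEW_BAND[0] <= p_harm <= REVIEW_BAND[1]:
                review_queue.append((it["id"], category))  # human judgment
            elif p_harm > REVIEW_BAND[1]:
                penalty += HARM_PENALTY[category] * p_harm  # de-promote
        scored.append((it["engagement"] - penalty, it))
    return [it for _, it in sorted(scored, key=lambda s: s[0], reverse=True)]

# Hypothetical items with per-category harm-classifier scores.
items = [
    {"id": "a", "engagement": 0.9, "harm_scores": {"extremism": 0.8}},
    {"id": "b", "engagement": 0.5, "harm_scores": {"extremism": 0.1}},
    {"id": "c", "engagement": 0.6, "harm_scores": {"self_harm": 0.5}},
]
queue = []
feed = rank(items, queue)
# Dual metrics: engagement of the feed AND what got routed to humans.
print([it["id"] for it in feed], queue)
```

Note the deliberate trade-off: item "a" has the highest predicted engagement but is de-promoted below everything else, and item "c" sits in the uncertainty band, so it is queued for human review rather than silently handled by the classifier.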
What AI cannot do
- Eliminate all harm without sacrificing some engagement
- Substitute pure-AI moderation for human judgment on novel cases
- Make recommendation systems value-neutral (every ranking objective encodes values)
Related lessons
Adults & Professionals · 10 min
Bias Auditing in LLM Outputs: Seeing What the Model Can't
LLMs inherit the skews of their training data and RLHF feedback. Auditing for bias isn't a one-time test — it's an ongoing practice that belongs in every deployment.
Adults & Professionals · 40 min
Deepfake Detection: What Works, What Doesn't, and Why It Matters
AI-generated media has crossed the perceptual threshold where humans cannot reliably detect it. Detection tools help — but are in an arms race with generation.
Adults & Professionals · 11 min
Prompt Injection Defense: Protecting AI Systems From Malicious Inputs
Prompt injection is the SQL injection of the AI era — and it's already being exploited in production systems. Defending against it requires multiple layers, not a single fix.
