Lesson 356 of 1570
Reporting Bad AI Behavior
When AI says or does something harmful, you can report it.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Reporting Bad AI Behavior
2. Report
3. Flag
4. Transparency report
Concept cluster
Terms to connect while reading
Section 1
Reporting Bad AI Behavior
When AI says or does something harmful, you can report it. Most major AI tools have a flag button — and humans review the reports.
Major AI companies, including OpenAI, Anthropic, and Google, publish transparency reports describing the reports they receive and how they respond.
Three things to always report
- AI giving unsafe medical or legal advice
- AI showing bias toward a group of people
- AI being used to harass or scam people
Key terms in this lesson
Report · flag · transparency report
The big idea: Reporting bad AI output is one of the most effective things an everyday user can do to make these systems safer.
Related lessons
Keep going
Builders · 18 min
Spotting AI-Generated Faces
AI now makes photorealistic faces of people who don't exist.
Builders · 18 min
Content Watermarks (C2PA)
C2PA is an industry standard that adds an invisible 'this is real' or 'this was AI-made' label to images and videos.
Builders · 18 min
When Someone Clones a Voice
AI now needs only 3 seconds of audio to clone a voice.
