Lesson 1597 of 2116
Jailbreak Categories: Mapping the Adversarial Surface
Jailbreak attacks fall into recognizable families — role-play, encoding, persona, multi-turn pressure. A category map drives durable defense.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1The premise
- 2jailbreak
- 3role-play attack
- 4encoding attack
Concept cluster
Terms to connect while reading
Section 1
The premise
AI can map jailbreak categories and defensive postures, but your specific safety policy must define what counts as a successful attack.
What AI does well here
- Generate per-category jailbreak example sets for red-team use.
- Draft defensive-posture summaries by category.
What AI cannot do
- Define what content your platform considers harmful.
- Substitute for ongoing red-team practice.
Key terms in this lesson
End-of-lesson quiz
Check what stuck
15 questions · Score saves to your progress.
Tutor
Curious about “Jailbreak Categories: Mapping the Adversarial Surface”?
Ask anything about this lesson. I’ll answer using just what you’re reading — short, friendly, grounded.
Progress saved locally in this browser. Sign in to sync across devices.
Related lessons
Keep going
Creators · 32 min
Jailbreak Mechanisms and Defenses: How Adversaries Bypass AI Safety
Jailbreaks exploit prompt-format, role, and capability gaps; understand the mechanism categories to evaluate vendor defenses critically.
Creators · 9 min
AI for Resume English (Immigrant Career Edition)
American resumes look different from many other countries. AI can format your work history in the U.S. style and translate foreign job titles.
Creators · 8 min
When AI Gives Bad Advice About Rural Life
AI can be confidently wrong about country life — winterizing, livestock, well water, septic, you name it. Knowing where models break is part of using them well.
