The growing field of keeping AI from harming users — and the paths in.
7 min · Reviewed 2026
The big idea
Every major AI lab has a Trust and Safety (T&S) team larger than most startups, and they're hiring people who understand both technology and human harm. You don't need a CS PhD: many roles want sociologists, lawyers, linguists, and people with lived experience of online harm, whose insight surfaces harms that catalog-based teams miss. As AI products grow, so does the risk they carry, which is why T&S is one of the fastest-growing AI specialties.
Some examples
AI red teamer: get paid to break models, trying to make them say or do harmful things so weaknesses get found before users do.
Policy specialist: write the rules that guide what models will and won't do.
Child safety researcher: focus specifically on protecting kids from emerging AI threats.
Crisis response: handle real-time incidents when AI causes harm in the wild.
Try it!
Pair what makes you angry about the internet with one technical skill: that motivation-plus-skill recipe is how many people find their way into T&S work. Then read one AI lab's safety blog this week, and notice which job titles keep appearing in the team bios.
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-ai-trust-and-safety-careers-teens-final2-teen
Which is true about T&S team size at major AI labs?
Always one person
Often larger than most startups
Always zero
Banned in most labs
What kinds of backgrounds are sought beyond CS?
Only PhDs in physics
Only ex-athletes
Sociologists, lawyers, linguists, and people with lived experience of online harm
Only musicians
What does an AI red teamer do?
Paints models red
Sells subscriptions
Coaches sports
Tries to make models say or do harmful things to find weaknesses
What does a policy specialist do at an AI lab?
Writes the rules guiding what models will and won't do
Negotiates trade deals
Files patents
Designs logos
What does a child safety researcher focus on?
Marketing
Protecting kids from emerging AI threats specifically
Logistics
Office furniture
What does crisis response handle?
Birthday parties
Office maintenance
Real-time incidents when AI causes harm in the wild
Travel booking
What's a useful starter step for a curious teen?
Apply to be CEO
Buy stocks
Memorize the model weights
Read one AI lab's safety blog and notice recurring job titles
What's the formula 'pair what makes you angry about the internet with one technical skill'?
A motivation-plus-skill recipe for finding T&S work
A legal requirement
A hiring guarantee
A salary formula
Why is T&S 'one of the fastest-growing AI specialties'?
T&S is shrinking
Risk grows as products grow; teams must too
Risk decreased
Teams are staying flat
Which is NOT a typical T&S role mentioned?
Red teamer
Policy specialist
Race car driver
Crisis responder
What kind of skill makes lived experience of online harm valuable?
Coding speed
Typing speed
Drawing speed
Insight into harms that catalog-based teams miss
What is the cardinal value of a T&S role?
Keeping AI from harming users
Maximizing profit
Deleting features
Hiding bugs
Why don't you need a CS PhD?
CS is illegal
The role mix includes humanistic and policy expertise
PhDs are banned
Only PhDs help
Which mindset best fits a teen interested in T&S?
Pick a flashy title and wait
Ignore harms
Care about a real harm + acquire one technical skill + apply
Avoid technology
What's a clear sign T&S is hiring at a lab?
Random job listings
No signs anywhere
Only word-of-mouth
Recurring T&S job titles in team bios on the lab's safety blog