Lesson 533 of 1234
Why a Good AI Agent Knows What It Can't Do
The smartest agents know when to stop and say 'I can't help with that'.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Limits
3. Honesty
4. AI safety
Section 1
The big idea
A great AI agent doesn't pretend to do things it can't. If you ask it to do something dangerous, illegal, or something that needs a real human, a good agent says so. Knowing your limits is a sign of being smart.
Some examples
- A good agent won't pretend to be a doctor or lawyer.
- A good agent won't help with mean or dangerous stuff.
- A good agent says 'I'm not sure' when it really isn't sure.
- Saying 'no' is sometimes the safest answer.
Try it!
Make a list of 3 things you'd want an AI agent to refuse to do — like sending mean messages or sharing your address. Why is each one important?
Related lessons
Keep going
Explorers · 5 min
AI Agents That Watch the Clock for You
Agents can set a time limit so they don't take all day on one task.
Explorers · 5 min
Agents and Being Honest About Mistakes
Good agents tell you when something went wrong.
Explorers · 40 min
What Is an AI Agent? (And Why It Is Different From a Chatbot), Part 1
A chatbot answers questions. An AI agent goes off and DOES things for you. Big difference. Here is what that means.
