Giving an AI Agent Shell Access Without Letting It Wreck Your Machine
Sandbox, allowlist, and confirm — three guardrails that make shell access safe enough to use.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Shell
3. Sandbox
4. Allowlist
Section 1
The big idea
Giving an agent shell access is powerful and terrifying. Three rules turn "never" into "sometimes": run it in a sandbox, allowlist the commands it can use, and require human confirmation for anything destructive.
Some examples
- Claude Code runs in a Docker sandbox so even an `rm -rf` only nukes the container.
- Cursor's agent mode requires you to click Approve before it runs anything outside an allowlist.
- An agent's shell tool is wrapped to reject any command containing `rm`, `mv`, or `sudo`.
- ChatGPT in code interpreter mode runs in a fresh container that resets between sessions.
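The patterns above can be combined in a single wrapper around the agent's shell tool. Here is a minimal sketch; the names `ALLOWED`, `DENY_PATTERNS`, and `run_agent_command` are illustrative, not from any real agent framework.

```python
import re
import shlex
import subprocess

# Guardrail 1: allowlist -- only these binaries run without a human in the loop.
ALLOWED = {"ls", "cat", "grep", "echo", "git"}

# Guardrail 2: deny patterns -- obviously destructive commands are rejected outright.
DENY_PATTERNS = re.compile(r"\b(rm|mv|sudo)\b")

def run_agent_command(command: str, confirm=lambda cmd: False) -> str:
    """Run a shell command on the agent's behalf, with guardrails applied."""
    if DENY_PATTERNS.search(command):
        raise PermissionError(f"blocked destructive command: {command!r}")
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        # Outside the allowlist: require an explicit human yes/no.
        if not confirm(command):
            raise PermissionError(f"not allowlisted and not confirmed: {command!r}")
    # Guardrail 3 (sandbox) would go here: for example, prefixing argv with
    # ["docker", "run", "--rm", "sandbox-image"] so the command only ever
    # touches a throwaway container, as in the Claude Code example above.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout
```

An allowlisted command like `echo hello` runs immediately; `rm -rf /` is rejected before it ever reaches a shell; anything in between (say, `curl`) is paused for a human decision.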
Try it!
If you've given an agent shell access, audit it: is there a sandbox, an allowlist, and a confirm step? Add what's missing.
Related lessons
Keep going
Creators · 21 min
Tool Registries and Permissioned Toolsets
Teach students how an agent safely discovers tools, validates calls, and limits what any session may do.
Explorers · 40 min
AI Agents Should Have a Permission List
Tell AI what it can and can't touch — like rules on a babysitter's note.
Creators · 11 min
AI and agent tool allowlist design
Design the tool allowlist for a coding agent so it can do the job without scope creep.
