Giving an agent the right tools (and only those)
Agents are only as safe as the tools they can call — pick the smallest set that works.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Least privilege
3. Tool calls
Section 1
The big idea
When you give an agent tools (web search, file edit, send email), each tool is a way it can mess up. Less is more.
Some examples
- Start with read-only tools (search, fetch) before write tools (send, delete).
- Never give a hobby agent tools that cost money without a hard limit.
- Log every tool call so you can audit later.
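The three habits above can be sketched as a tiny tool registry. This is a minimal sketch, not a real agent framework: the tool names and the `call_tool` helper are illustrative assumptions, and `search` is a stub standing in for a real backend.

```python
import datetime

AUDIT_LOG = []  # every tool call lands here so it can be audited later

def log_call(tool_name, args, result):
    """Record a tool call with a timestamp and a preview of the result."""
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool_name,
        "args": args,
        "result_preview": str(result)[:80],
    })

def search(query):
    """Read-only stub: stands in for a real search backend."""
    return f"results for {query!r}"

# The agent can only reach tools in this registry -- start read-only.
ALLOWED_TOOLS = {"search": search}

def call_tool(name, **kwargs):
    """Dispatch a tool call, enforcing the allowlist and logging it."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not in the allowed set")
    result = ALLOWED_TOOLS[name](**kwargs)
    log_call(name, kwargs, result)
    return result

call_tool("search", query="agent safety")   # allowed, and logged
# call_tool("delete_file", path="...")      # would raise PermissionError
```

Adding a write tool later means adding one registry entry, which makes the expanding blast radius an explicit, reviewable change rather than a default.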
Try it!
Build an agent that summarizes your inbox. Give it READ access only — no reply, no delete.
Understanding "Giving an agent the right tools (and only those)" in practice: AI agents don't just answer questions — they can do things, like looking things up, writing files, or talking to apps. Agents are only as safe as the tools they can call — pick the smallest set that works — and knowing how to apply this gives you a concrete advantage.
- Design clear agent goals before adding tools
- Define permissions and scope before deploying any agent
- Build in human-approval checkpoints for high-stakes actions
- Understand when to use an agent vs. a simple chat prompt
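A human-approval checkpoint (the third bullet above) can be sketched like this. The action names and risk tiers are illustrative assumptions, not a standard API; the approver function is injected so the checkpoint can be exercised without a live prompt.

```python
HIGH_STAKES = {"send_email", "delete_file"}  # actions that need human sign-off

def execute(action, payload, approver=input):
    """Run low-stakes actions directly; pause high-stakes ones for a human."""
    if action in HIGH_STAKES:
        answer = approver(f"Agent wants to {action} with {payload!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human declined"
    return f"executed {action}"

print(execute("search", "agent safety"))                           # runs directly
print(execute("delete_file", "inbox.db", approver=lambda _: "n"))  # blocked
```

Note the default is to block: anything other than an explicit "y" stops the action, which is the safe failure mode for a checkpoint.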
1. Design an agent spec: goal, tools, permissions, stop condition
2. Run a simple web-search agent in a sandbox environment
3. Instrument an existing workflow to identify where an agent could save time
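Step 1 above, the agent spec, can be sketched as a small frozen record. The field names and the example values are illustrative, keyed to the inbox-summarizer exercise from earlier in the lesson.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    goal: str               # what the agent is for
    tools: tuple            # smallest set that works
    permissions: tuple      # e.g. read-only scopes
    stop_condition: str     # when the agent must halt

inbox_summarizer = AgentSpec(
    goal="Summarize today's unread email",
    tools=("read_inbox",),              # no reply, no delete
    permissions=("mail:read",),
    stop_condition="summary produced, or 20 tool calls reached",
)
```

Freezing the dataclass means the spec can't drift at runtime: widening the agent's tools or permissions requires writing a new spec, which keeps the decision visible.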
Related lessons
Keep going
Creators · 10 min
Agent Tool Permission Design: Least Privilege for Autonomous Systems
An agent with broad tool access has a broad blast radius when it goes wrong. Designing tool permissions following least-privilege principles is the single most important agent safety control.
Creators · 40 min
Agent-Specific Prompt Injection Defenses: Why Standard LLM Defenses Aren't Enough
Prompt injection in agents is more dangerous than in chatbots — because agents take actions. The defenses must account for indirect injection from tool outputs, web content, and user-uploaded files.
Creators · 11 min
Scoping Blast Radius When You Give Agents Write Access
Decide what an agent is allowed to break, then enforce it with scoped credentials and dry-run modes.
