Tool Use at the API Level: The Primitive
Underneath every agent framework is the same primitive — the model returns a structured tool call, you execute it, you feed the result back. Master this loop and every framework looks familiar.
Lesson map
What this lesson covers, in order:
1. The primitive under every agent
2. Tool use
3. Function calling
4. Tool schemas
Section 1
The primitive under every agent
All three major API providers — Anthropic, OpenAI, Google — implement tool use the same way. You send a request that includes tool definitions. The model either replies with text OR returns a 'tool_use' block with a name and arguments. Your code runs the tool and sends the result back in a 'tool_result' block. Repeat until the model returns a final text answer.
The canonical Claude example
The agent loop, from scratch, in ~25 lines. Everything else is sugar on top.
```python
import anthropic

client = anthropic.Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["C", "F"]}
            },
            "required": ["city"]
        }
    }
]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

while True:
    resp = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    if resp.stop_reason == "end_turn":
        print(resp.content[0].text)
        break
    if resp.stop_reason == "tool_use":
        tool_use = next(b for b in resp.content if b.type == "tool_use")
        result = run_tool(tool_use.name, tool_use.input)  # your code
        messages.append({"role": "assistant", "content": resp.content})
        messages.append({
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": str(result)
            }]
        })
```

The three provider shapes
Compare the options
| Aspect | Anthropic | OpenAI | Google |
|---|---|---|---|
| Field names | tools + tool_use + tool_result | tools + tool_calls + role: "tool" messages | tools + functionCall + functionResponse |
| Schema style | JSON Schema under input_schema | JSON Schema under parameters | JSON Schema under parameters |
| Parallel calls | Yes (multiple tool_use blocks) | Yes (tool_calls array) | Yes (functionCalls array) |
| Forced tool | tool_choice: {type: "tool", name: X} | tool_choice: {type: "function", function: {name: X}} | function_calling_config: {mode: "ANY"} |
| Streaming tool calls | Yes, with delta events | Yes, with argument deltas | Yes |
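To make the forced-tool row concrete, here is a sketch of the request fragment each provider expects. The field names follow each provider's documented shape, but treat the exact values (and the hypothetical `get_weather` tool name) as assumptions to verify against current docs.

```python
# Sketch: forcing a specific tool, per provider. Tool name is hypothetical.

# Anthropic: tool_choice names the tool directly.
anthropic_request = {
    "tool_choice": {"type": "tool", "name": "get_weather"},
}

# OpenAI: tool_choice wraps the name in a function object.
openai_request = {
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}

# Google: mode "ANY" forces some function call; allowed_function_names
# narrows it to one.
google_request = {
    "tool_config": {
        "function_calling_config": {
            "mode": "ANY",
            "allowed_function_names": ["get_weather"],
        }
    },
}
```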
Writing good tool schemas
- Description is a prompt — models decide based on it. Be specific about when to use the tool and when NOT to use it.
- Use enums to constrain string params — eliminates spelling mistakes.
- Require only fields the tool truly needs. Optional fields are fine.
- Name things like you'd name a function: send_email, query_db, list_files.
- Return errors as structured JSON, not paragraphs. 'error_code' + 'message' + 'retriable: bool'.
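Putting the checklist together, here is a sketch of a tool definition plus a structured error payload. The `send_email` name, the fields, and the error helper are illustrative, not a provider requirement.

```python
import json

# A tool definition following the checklist: function-style name, a
# description that says when NOT to use it, an enum-constrained param,
# and only the truly required field marked required.
send_email_tool = {
    "name": "send_email",
    "description": (
        "Send an email to a single recipient. Use ONLY when the user "
        "explicitly asks to send mail; do NOT use for drafting or summarizing."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "Recipient address"},
            "priority": {"type": "string", "enum": ["low", "normal", "high"]},
        },
        "required": ["to"],
    },
}

# Errors go back as structured JSON so the model can branch on them,
# e.g. retrying only when retriable is true.
def tool_error(code: str, message: str, retriable: bool) -> str:
    return json.dumps(
        {"error_code": code, "message": message, "retriable": retriable}
    )
```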
Parallel tool calls
All three providers can emit multiple tool calls in a single turn when the calls are independent. If the model needs to look up weather in Tokyo, London, and Paris, it returns three tool_use blocks at once. You execute them concurrently and return all three tool_result blocks in the next user message, which is roughly 3x faster than running them one after another.
Run parallel tool calls in parallel. Don't serialize by accident.
```python
# Parallel execution pattern
import asyncio

async def handle_tools(tool_uses):
    # Fan out all independent tool calls at once instead of awaiting
    # them one by one.
    results = await asyncio.gather(*[
        run_tool_async(t.name, t.input)  # your async tool runner
        for t in tool_uses
    ])
    return [
        {"type": "tool_result", "tool_use_id": t.id, "content": str(r)}
        for t, r in zip(tool_uses, results)
    ]
```

Extended / adaptive thinking + tools
Claude's extended thinking (Opus 4.7, Opus 4.6, Sonnet 4.6) enables interleaved thinking between tool calls. The model can reason about a tool result before deciding the next action. Anthropic recommends adaptive thinking (default 'high' effort) for any agentic workload. Pass thinking: {type: 'enabled', budget_tokens: 8000} to turn it on.
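In request form, that is one extra kwarg on the same messages.create call used earlier. A minimal sketch, assuming the budget and model name are illustrative rather than recommendations; note that max_tokens must exceed the thinking budget:

```python
# Sketch: request kwargs enabling extended thinking alongside tools.
request_kwargs = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 16000,  # must be larger than budget_tokens
    "thinking": {"type": "enabled", "budget_tokens": 8000},
    "tools": [],  # your tool definitions from above
    "messages": [{"role": "user", "content": "Plan, then call tools."}],
}
# resp = client.messages.create(**request_kwargs)
```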
Related lessons
Keep going
Creators · 48 min
Claude Code CLI as an Agent Platform
Claude Code isn't just a coding assistant — it's a general agent runtime with MCP, subagents, hooks, and skills. Treat it that way and you get a free, powerful platform.
Creators · 75 min
Capstone: Build and Ship a Real Agent
Everything comes together. Design, code, test, secure, and ship a production-quality agent with open-source code you can fork today.
Creators · 48 min
Computer Use API: Letting AI Click Through GUIs
Computer Use lets Claude see your screen and use it — mouse, keyboard, apps. The capability is real, the gotchas are real. A hands-on look at what works in 2026.
