Lesson 48 of 2116
Multi-Agent Orchestration: Planner + Executor + Verifier
One smart agent is fine. Two agents checking each other's work is better. Master the canonical orchestration patterns: planner/executor, judge/worker, debate, and swarm.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Why multi-agent
2. Orchestration
3. Planner-executor
4. Verifier
Section 1
Why multi-agent
A single agent trying to do everything in one context window hits limits: context bloat, role confusion, and weak self-critique. Splitting the work across specialized agents with narrow roles is the cheapest way to add reliability. The patterns below are well attested in production at Anthropic and OpenAI and in the research literature.
Pattern 1 — Planner / Executor / Verifier
Compare the options
| Agent | Role | Model tier |
|---|---|---|
| Planner | Breaks the goal into ordered steps. | Smartest model (Opus 4.7, GPT-5). |
| Executor | Runs each step. Uses tools. | Mid tier (Sonnet 4.6, GPT-5-mini). |
| Verifier | Checks result against original goal. | Smart + strict (Opus 4.7 at low temp). |
Planner writes the plan. Executor runs it. Verifier checks. Replan on failure.
```python
# Simplified planner/executor/verifier loop
goal = "Migrate all CSV files in /data to parquet, preserving schemas."
plan = planner(goal)  # returns ordered steps

for step in plan.steps:
    result = executor(step, tools=TOOLS)  # has MCP + shell + file tools
    ok, notes = verifier(step, result, goal)
    if not ok:
        fix = planner(f"Step failed: {notes}. Replan from here.")
        plan.splice(step, fix)
    log(step, result, ok)

final_ok, summary = verifier("final", plan.history, goal)
```
Pattern 2 — Judge / Worker (competitive)
Spawn N workers to attempt the same task with different prompts or temperatures. A judge scores their outputs and returns the best. Used in AlphaCode, Anthropic's research tooling, and most SWE-bench top submissions. More compute, better results.
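The judge/worker loop above can be sketched in a few lines. The `worker`, `judge`, and `best_of_n` names below are hypothetical stubs, not a real library API: in practice each stub would be an LLM call, with workers sampling at different temperatures and the judge scoring candidates against a rubric prompt.

```python
def worker(task: str, temperature: float) -> str:
    # Stand-in for an LLM call that samples a completion at `temperature`.
    return f"candidate answer to {task!r} (temp={temperature:.1f})"

def judge(task: str, candidates: list[str]) -> str:
    # Stand-in for an LLM judge that scores each candidate.
    # Here we pick the longest string as a placeholder scoring rule.
    return max(candidates, key=len)

def best_of_n(task: str, n: int = 4) -> str:
    # Fan out: same task, different temperatures, then keep the winner.
    temps = [0.2 + 0.2 * i for i in range(n)]
    candidates = [worker(task, t) for t in temps]
    return judge(task, candidates)
```

The shape is the important part: generation and selection are separate roles, so you can scale `n` (more compute) without touching the judge.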
Pattern 3 — Debate
Two agents argue opposite sides of a question. A third agent reads the debate and picks a winner. Effective for subjective tasks (editorial decisions, design tradeoffs) where a single pass lacks rigor. OpenAI's 'debate' research and Anthropic's CAI pipeline both use variants.
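A minimal sketch of that structure, assuming stub functions in place of real LLM calls (the `debater` and `moderator` names and the fixed round count are illustrative, not any framework's API):

```python
def debater(side: str, question: str, transcript: list[str]) -> str:
    # Stand-in for an LLM prompted to argue `side`, given the debate so far.
    return f"[{side}] argument on {question!r} (turn {len(transcript) + 1})"

def moderator(question: str, transcript: list[str]) -> str:
    # Stand-in for a third LLM that reads the full transcript and picks a winner.
    return "pro"  # placeholder verdict

def debate(question: str, rounds: int = 2) -> tuple[str, list[str]]:
    transcript: list[str] = []
    for _ in range(rounds):
        # Alternate turns; each debater sees everything said so far.
        transcript.append(debater("pro", question, transcript))
        transcript.append(debater("con", question, transcript))
    return moderator(question, transcript), transcript
```

The key design choice is that the moderator only reads; it never argues, so it stays uncommitted to either side.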
Pattern 4 — Swarm (parallel specialists)
A coordinator sends the same input to specialist agents (e.g., 'legal reviewer', 'UX reviewer', 'accessibility reviewer') and merges their feedback. Better than one generalist because each specialist can have a narrower, sharper system prompt and different MCP toolset. CrewAI and Microsoft Agent Framework lean into this pattern.
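Because the specialists are independent, the coordinator can fan out in parallel and merge results. A minimal sketch, where the specialist roles and the `review` stub are hypothetical stand-ins for per-specialist LLM calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Each specialist gets its own narrow system prompt (and, in a real
# system, its own MCP toolset).
SPECIALISTS = {
    "legal": "Flag licensing and compliance issues only.",
    "ux": "Flag confusing flows and unclear copy only.",
    "accessibility": "Flag accessibility problems only.",
}

def review(role: str, system_prompt: str, doc: str) -> dict:
    # Stand-in for one LLM call scoped by `system_prompt`.
    return {"role": role, "feedback": f"{role} reviewed {len(doc)} chars"}

def swarm_review(doc: str) -> list[dict]:
    # Specialists don't depend on each other, so run them concurrently
    # and merge the feedback afterwards.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(review, role, prompt, doc)
                   for role, prompt in SPECIALISTS.items()]
        return [f.result() for f in futures]
```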
LangGraph state skeleton
Planner/executor/verifier as an explicit state machine. Checkpointers let you pause, rewind, and resume.
```python
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict, List

class State(TypedDict):
    goal: str
    plan: List[str]
    current_step: int
    results: List[dict]
    verdict: str

graph = StateGraph(State)
graph.add_node("plan", plan_fn)
graph.add_node("execute", execute_fn)
graph.add_node("verify", verify_fn)
graph.add_node("replan", replan_fn)
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", "verify")
graph.add_edge("replan", "execute")
graph.add_conditional_edges(
    "verify",
    # Check the verdict first: a failed step must route to replan even
    # when steps remain; only a passing step advances or ends the run.
    lambda s: "replan" if s["verdict"] == "fail"
        else "execute" if s["current_step"] < len(s["plan"])
        else END,
)

app = graph.compile(checkpointer=MemorySaver())  # durable state
```
Coordination pitfalls
- Context duplication — N agents each get the full history → N× cost. Use summaries.
- Role leakage — executor starts planning, verifier starts executing. Tighten system prompts.
- Infinite replans — cap replan attempts (e.g., 3) before escalating to human.
- Verifier sycophancy — a verifier trained by the same lab often over-approves. Mix providers.
- Serial bottlenecks — if only the planner can proceed, you lose the parallelism you paid for.
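The replan cap is the easiest pitfall to fix mechanically. A minimal sketch, assuming pluggable `executor`, `verifier`, and `replanner` callables (hypothetical names, not a framework API):

```python
def run_with_replan_cap(steps, executor, verifier, replanner, max_replans=3):
    """Execute steps in order, replanning failed steps at most `max_replans`
    times across the whole run before escalating to a human."""
    replans = 0
    results = []
    for step in steps:
        while True:
            result = executor(step)
            if verifier(step, result):
                break  # step passed verification; move on
            replans += 1
            if replans > max_replans:
                # Don't loop forever: hand off with context instead.
                raise RuntimeError("replan budget exhausted; escalate to a human")
            step = replanner(step)  # ask the planner for a revised step
        results.append(result)
    return results
```

The budget is global rather than per-step here; a per-step budget is the other common choice, and which you want depends on how independent the steps are.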
Next lesson: how to actually build the planner/executor/verifier in LangGraph.