Building with LangGraph
LangGraph became the production favorite in 2026 for good reasons — explicit state, checkpointing, first-class MCP. Build a real agent end-to-end and learn why.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Why LangGraph won the enterprise seat
2. LangGraph
3. State graph
4. Checkpointer
Section 1
Why LangGraph won the enterprise seat
LangGraph models agents as explicit state machines. Every node mutates typed state. Every edge is conditional. Every run is checkpointed. This maps directly to what enterprises need: audit trails, pausable workflows, rollback, human approval gates. By early 2026, LangGraph surpassed CrewAI in GitHub stars for exactly these reasons.
The core objects
- StateGraph — the graph itself. Nodes, edges, state type.
- State — a TypedDict describing everything the graph carries.
- Checkpointer — persists state between node runs (memory, SQLite, Postgres).
- Command — controls flow; can jump to a node, update state, interrupt.
- Interrupt — pauses the graph pending human input.
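To make the mental model concrete before the full agent: the pattern underneath these objects is typed state, nodes that return partial updates, and a snapshot after every step. Here is that pattern sketched in plain Python with no LangGraph dependency at all — `DemoState` and `run_graph` are illustrative names, not LangGraph API:

```python
from typing import Callable, TypedDict

class DemoState(TypedDict):
    question: str
    draft: str

def plan(state: DemoState) -> dict:
    # A node returns a partial update, not a whole new state.
    return {"draft": f"outline for: {state['question']}"}

def run_graph(nodes: list[Callable[[DemoState], dict]],
              state: DemoState) -> tuple[DemoState, list[DemoState]]:
    checkpoints = []  # what a Checkpointer persists between node runs
    for node in nodes:
        state = {**state, **node(state)}  # merge the partial update into typed state
        checkpoints.append(dict(state))   # snapshot after every node
    return state, checkpoints

final, saved = run_graph([plan], {"question": "why graphs?", "draft": ""})
```

Because each node only returns a diff, the runtime — not the node — owns merging and persistence. That division is what makes audit trails and resumption cheap.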
A real agent: researcher that cites sources
A working LangGraph research agent using Claude + MCP (Brave search). ~50 lines of actual logic.
```python
from typing import TypedDict, Annotated
import operator

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.client import MultiServerMCPClient

class State(TypedDict):
    question: str
    plan: list[str]
    findings: Annotated[list[dict], operator.add]  # reducer: appended, not replaced
    draft: str
    verdict: str

model = ChatAnthropic(model="claude-opus-4-7", temperature=0)

async def plan_node(state: State):
    # Break the question into sub-questions; one line per sub-question.
    resp = await model.ainvoke(
        f"Break this research question into 3-5 sub-questions:\n{state['question']}"
    )
    return {"plan": resp.content.split("\n")}

async def research_node(state: State):
    # Connect to the Brave search MCP server over stdio.
    mcp = MultiServerMCPClient({
        "brave": {
            "command": "npx",
            "args": ["-y", "@mcp/brave-search"],
            "transport": "stdio",
        },
    })
    tools = await mcp.get_tools()
    agent = create_react_agent(model, tools)
    findings = []
    for sub in state["plan"]:
        r = await agent.ainvoke({"messages": [("user", sub)]})
        findings.append({"sub": sub, "answer": r["messages"][-1].content})
    return {"findings": findings}

async def synthesize_node(state: State):
    # Write the cited draft from the accumulated findings.
    context = "\n\n".join(f.get("answer", "") for f in state["findings"])
    resp = await model.ainvoke(
        f"Write a cited answer to: {state['question']}\n\nNotes:\n{context}"
    )
    return {"draft": resp.content}

async def verify_node(state: State):
    # Self-check: does the draft stay faithful to the research notes?
    resp = await model.ainvoke(
        f"Rate this answer's faithfulness to the notes (pass/fail + reason):\n"
        f"Q: {state['question']}\nA: {state['draft']}"
    )
    return {"verdict": resp.content}

graph = StateGraph(State)
graph.add_node("plan", plan_node)
graph.add_node("research", research_node)
graph.add_node("synthesize", synthesize_node)
graph.add_node("verify", verify_node)
graph.set_entry_point("plan")
graph.add_edge("plan", "research")
graph.add_edge("research", "synthesize")
graph.add_edge("synthesize", "verify")
graph.add_edge("verify", END)
app = graph.compile(checkpointer=MemorySaver())
```
Human-in-the-loop with interrupt
interrupt() pauses the graph. The checkpointer stores state. Resume with Command(resume=...).
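Before the LangGraph version, the pause/resume mechanic itself is worth seeing in isolation. A plain Python generator captures it: yielding hands control (and a payload) back to the runtime, and sending a value back in resumes the node exactly where it stopped. This is a conceptual sketch, not LangGraph's implementation:

```python
def approve_node(draft: str):
    # yield plays the role of interrupt(): pause, surface a payload
    # to the human, and wait to be resumed with their response.
    human = yield {"draft": draft, "ask": "Approve to publish?"}
    return {"draft": human["edited_draft"]}

gen = approve_node("v1 of the answer")
payload = next(gen)  # runs until the node pauses; payload is what the human sees

# Later — possibly a different request entirely — resume with the human's input:
try:
    gen.send({"edited_draft": "v2, approved"})
except StopIteration as done:
    result = done.value  # the node's final state update
```

LangGraph's real trick is that, unlike a generator, the paused state lives in the checkpointer, so the resume can happen in a different process, hours later.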
```python
from langgraph.types import interrupt, Command

async def approve_node(state: State):
    # interrupt() pauses here; the checkpointer persists state until resume.
    response = interrupt({
        "draft": state["draft"],
        "ask": "Approve to publish? Edit below if needed.",
    })
    return {"draft": response["edited_draft"]}

# In the client:
config = {"configurable": {"thread_id": "run-42"}}
async for event in app.astream({"question": "..."}, config):
    ...  # stream until the graph hits the interrupt

# Inspect state, get human input, then resume:
await app.ainvoke(
    Command(resume={"edited_draft": "<human's edits>"}),
    config,
)
```
Streaming modes
Compare the options
| Mode | Emits | Use for |
|---|---|---|
| values | Full state after each node. | Audit UIs, full snapshots. |
| updates | Partial state diffs. | Efficient live feeds. |
| messages | LLM tokens as they stream. | ChatGPT-style UIs. |
| debug | Internal events. | Debugging, tracing. |
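The difference between `values` and `updates` is just full snapshots versus diffs. A toy emitter (plain Python, illustrating the concept — not LangGraph internals) makes it visible:

```python
def run_stream(nodes, state, stream_mode="values"):
    # Each node returns a partial update; emit either the merged
    # snapshot ("values") or just the diff ("updates"), mirroring
    # what LangGraph's stream modes carry.
    for name, node in nodes:
        update = node(state)
        state = {**state, **update}
        if stream_mode == "values":
            yield dict(state)        # full state after each node
        elif stream_mode == "updates":
            yield {name: update}     # partial diff, keyed by node name

nodes = [
    ("plan", lambda s: {"plan": ["q1", "q2"]}),
    ("draft", lambda s: {"draft": "answer"}),
]
snapshots = list(run_stream(nodes, {"question": "x"}, "values"))
diffs = list(run_stream(nodes, {"question": "x"}, "updates"))
```

`values` events grow with state size; `updates` stay small, which is why they suit live feeds over a websocket.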
Production checkpointers
- MemorySaver — in-process only. Great for tests and demos.
- SqliteSaver — single-machine durability. Good for dev.
- PostgresSaver — production multi-replica. Versioned state.
- Redis-based community savers — low-latency distributed.
Next, we look at Claude Code — a commercial agent platform that ships with its own idioms for tools, subagents, and MCP.
Related lessons
Keep going
Creators · 52 min
Production Agent Patterns: Queues, Retries, Idempotency
A prototype agent and a production agent have the same LLM. What's different is everything around it — durable state, retries, idempotency, observability. The real engineering.
Creators · 75 min
Capstone: Build and Ship a Real Agent
Everything comes together. Design, code, test, secure, and ship a production-quality agent with open-source code you can fork today.
Creators · 48 min
Computer Use API: Letting AI Click Through GUIs
Computer Use lets Claude see your screen and use it — mouse, keyboard, apps. The capability is real, the gotchas are real. A hands-on look at what works in 2026.
