Model Context Protocol (MCP) is the most important open standard in agent tooling. One protocol, 1,200+ servers, and your agent can plug into almost any system. Here's how it actually works.
Before MCP, every agent-to-tool connection was custom. Claude had one way to call a database. GPT had another. Your LangChain glue was a third. Multiply that by 1,000 tools and you had a combinatorial nightmare. Anthropic introduced MCP in late 2024 as "USB-C for AI tools": one protocol, any client, any server. By 2026 it's backed by Anthropic, OpenAI, and Google, and the GitHub MCP registry lists 1,200+ servers.
| Piece | Role |
|---|---|
| Host | The app the user interacts with (Claude Desktop, Claude Code, Cursor, OpenClaw). |
| Client | The host's MCP client instance, one per connected server. |
| Server | A separate process exposing tools/resources/prompts. |
| Transport | stdio (spawned subprocess) or HTTP/SSE (remote service). |
| Protocol | JSON-RPC 2.0. Typed message schemas. |
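The protocol row is concrete: every MCP message is JSON-RPC 2.0. A sketch of the exchange on the wire after the initial handshake, using the `tools/list` and `tools/call` methods from the MCP spec (the payloads here are illustrative, modeled on the `greet` tool defined later in this lesson):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

{"jsonrpc": "2.0", "id": 1, "result": {"tools": [{"name": "greet",
  "description": "Say hello to a person.",
  "inputSchema": {"type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"]}}]}}

{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
  "params": {"name": "greet", "arguments": {"name": "Ada"}}}

{"jsonrpc": "2.0", "id": 2, "result": {"content": [{"type": "text", "text": "Hello, Ada!"}]}}
```

Odd entries are client → server requests; even entries are the server's responses, matched by `id`. This is why any client can talk to any server: both sides only ever see these typed JSON-RPC messages, never each other's code.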
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..."
      }
    },
    "supabase": {
      "command": "npx",
      "args": [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--project-ref",
        "your-project-ref"
      ],
      "env": {
        "SUPABASE_ACCESS_TOKEN": "sbp_..."
      }
    },
    "notion": {
      "url": "https://mcp.notion.com/sse",
      "transport": "sse",
      "headers": { "Authorization": "Bearer ntn_..." }
    }
  }
}
```

Claude Desktop / Claude Code MCP config. Four real servers from the April 2026 registry. Drop this into `~/.claude/claude_desktop_config.json` and restart.

```typescript
// A tiny MCP server in TypeScript (@modelcontextprotocol/sdk)
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise one tool, with a JSON Schema describing its input.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "greet",
    description: "Say hello to a person.",
    inputSchema: {
      type: "object",
      properties: { name: { type: "string" } },
      required: ["name"],
    },
  }],
}));

// Handle calls to that tool.
server.setRequestHandler(CallToolRequestSchema, async (req) => {
  if (req.params.name === "greet") {
    const name = (req.params.arguments as any)?.name ?? "friend";
    return { content: [{ type: "text", text: `Hello, ${name}!` }] };
  }
  throw new Error("Unknown tool");
});

// Speak JSON-RPC over stdin/stdout so any host can spawn us.
const transport = new StdioServerTransport();
await server.connect(transport);
```

A complete, minimal MCP server. Registers one tool, speaks stdio. About 30 lines.

| Transport | Use case | Pros | Cons |
|---|---|---|---|
| stdio | Local-only, spawned by host. | Zero network; trivial auth. | One process per client. |
| SSE / HTTP | Remote or shared server. | Multi-tenant, scales. | Need auth, TLS, ops. |
| Streamable HTTP (2025+) | Remote servers; supersedes HTTP+SSE in the current spec. | One endpoint; streams only when needed. | Newer; fewer client libs. |
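The host side of the stdio row is just as short. A sketch of a client that spawns the minimal server above and calls its tool, assuming the TypeScript SDK's `Client`/`StdioClientTransport` API and that the server has been compiled to `./server.js` (a hypothetical path):

```typescript
// Sketch: an MCP client connecting over stdio (@modelcontextprotocol/sdk).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "demo-client", version: "0.1.0" });

// Spawns the server as a subprocess and wires its stdin/stdout to JSON-RPC.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./server.js"],
});
await client.connect(transport);

// Discover what the server offers, then invoke the "greet" tool.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "greet",
  arguments: { name: "Ada" },
});
console.log(result.content);

await client.close();
```

This is the whole point of the table above: swapping stdio for Streamable HTTP changes only the transport construction; the `listTools`/`callTool` surface stays identical.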
MCP is the one thing to learn cold. Once you have it, every agent gets cheaper, every tool is reusable, and switching models becomes a config change.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-agentic-mcp-deep-dive-creators
1. What is the core idea behind "MCP Deep Dive: The USB-C for AI Tools"?
2. Which term best describes a foundational idea in "MCP Deep Dive: The USB-C for AI Tools"?
3. A learner studying "MCP Deep Dive: The USB-C for AI Tools" would need to understand which concept?
4. Which of these is directly relevant to "MCP Deep Dive: The USB-C for AI Tools"?
5. Which of the following is a key point about "MCP Deep Dive: The USB-C for AI Tools"?
6. Which of these does NOT belong in a discussion of "MCP Deep Dive: The USB-C for AI Tools"?
7. Which statement is accurate regarding "MCP Deep Dive: The USB-C for AI Tools"?
8. What is the key insight about "MCP servers are code you run" in the context of "MCP Deep Dive: The USB-C for AI Tools"?
9. What is the key insight about "The deepest integration is LangGraph's" in the context of "MCP Deep Dive: The USB-C for AI Tools"?
10. What is the key warning about "Scope your agents tightly" in the context of "MCP Deep Dive: The USB-C for AI Tools"?
11. Which statement accurately describes an aspect of "MCP Deep Dive: The USB-C for AI Tools"?
12. What does working with "MCP Deep Dive: The USB-C for AI Tools" typically involve?
13. Which best describes the scope of "MCP Deep Dive: The USB-C for AI Tools"?
14. Which section heading best belongs in a lesson about "MCP Deep Dive: The USB-C for AI Tools"?