Standalone lesson.
Lesson 2111 of 2116
Agentic AI & Automation
Claude Code, Codex, browser agents, MCP — and a hands-on OpenClaw local-orchestration lab.
Everything you’ve used so far is a chat AI — you say something, it says something back. An agent is different: you give it a goal, and it does things to get there. It runs code, reads files, clicks buttons, looks things up, checks its own work, and keeps going.
The four things that make an agent
- A goal. “Find me flights to Cleveland under $200.”
- Tools. A web browser, a terminal, a calendar, a file system.
- A loop. Try something → check if it worked → try again.
- A way to stop. Done, or out of budget, or stuck.
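Those four pieces can be sketched in a few lines of Python. This is a toy, not any real framework's API: the goal is a number to reach, the only tool is `add_one`, and the names are all illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent: a goal, tools, a loop, and a way to stop."""
    goal: int                                  # 1. a goal: reach this number
    tools: dict = field(default_factory=dict)  # 2. tools the agent may call
    max_steps: int = 20                        # 4. a way to stop: step budget

    def run(self, state=0):
        log = []
        for _ in range(self.max_steps):        # 3. the loop
            if state >= self.goal:             # check if it worked
                return state, log              # stop: done
            state = self.tools["add_one"](state)  # try something
            log.append(state)
        return state, log                      # stop: out of budget

agent = Agent(goal=3, tools={"add_one": lambda n: n + 1})
final, log = agent.run()
# final == 3, log == [1, 2, 3]
```

Every real agent, however fancy, is some version of this loop with better tools and a smarter way of choosing the next action.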
Why agents are a big deal
A chat AI is a very smart friend you can ask for advice. An agent is a very smart assistant who actually does the thing. That’s a bigger change than it sounds.
Cloud agents vs. local agents
The best-known agents — Claude Code, OpenAI Codex, browser agents — run on the AI company’s servers. They’re fast and powerful, but your data leaves your computer.
Local agents run the whole model on your own laptop, using software like Ollama or LM Studio. Smaller, sometimes slower, but nothing you say leaves your computer. That’s a big deal for privacy.
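Here's what "local" looks like in practice. Ollama exposes a small HTTP API on your own machine (port 11434 by default), so talking to a local model is just a request to localhost. The `build_request` helper and the model name `qwen2.5` are this sketch's choices, not required names; you need Ollama running with a model pulled for `ask_local_model` to actually return anything.

```python
import json
import urllib.request

def build_request(prompt, model="qwen2.5"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt, model="qwen2.5", host="http://localhost:11434"):
    """Send a prompt to a locally running Ollama server and return its reply."""
    body = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# The request only ever goes to localhost — nothing in the prompt
# leaves your machine.
```

That last comment is the whole privacy argument in one line: the network hop never leaves your laptop.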
Case study — OpenClaw
Author disclosure
OpenClaw is built by the author of this portal. It’s one example among several. It’s shown here because it’s a real, working local-agent framework and the code is readable. Alternatives worth knowing: Ollama, Open Interpreter, STORM.
OpenClaw orchestrates several small local models — like Qwen and MiniMax — to handle different parts of a task. One model plans, another writes code, another checks the output. All of them run on your own computer with Ollama.
A simple OpenClaw session — plain English
- You: “Organize the files in my Downloads folder by type.”
- Planner model writes a 5-step plan.
- Coder model writes Python to move files into subfolders.
- Runner executes the script in a sandbox.
- Verifier model reads the result and confirms.
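The session above, sketched as code. The role names mirror OpenClaw's planner/coder/runner/verifier idea, but these functions are stand-ins with canned outputs — not OpenClaw's real API.

```python
# Stand-in roles: in OpenClaw each of these would be a small local model.
def planner(task):
    """Turn a goal into a short plan (here: a fixed 5-step plan)."""
    return ["list files", "group by extension", "make folders",
            "move files", "report what moved"]

def coder(plan):
    """Turn a plan into a script (here: just a placeholder string)."""
    return f"# python script implementing {len(plan)} steps"

def runner(script, sandbox=True):
    """Execute the script — but only inside a sandbox."""
    assert sandbox, "never run unreviewed agent code outside a sandbox"
    return "moved files into subfolders"   # pretend execution result

def verifier(result):
    """Check the outcome before declaring success."""
    return "moved" in result

plan = planner("Organize the files in my Downloads folder by type")
script = coder(plan)
result = runner(script)
assert verifier(result)   # only now does the agent say "done"
```

The design point: no single model does everything. Each role is small, checkable, and replaceable.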
The ethics of agents
Agents can do real damage if they’re wrong. They can delete files, send messages, spend money. Three rules that professional agent-builders follow:
- Sandbox first. Never let an agent touch production without a human in the loop.
- Scope the tools. Only give it the minimum tools it needs.
- Log everything. If something goes wrong, you need to know why.
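Rules two and three are easy to make concrete. A minimal sketch (tool names and the `ALLOWED_TOOLS` allowlist are invented for illustration): every tool call goes through one gate that enforces the allowlist and logs what happened.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

# Stub tools. delete_file exists, but it is deliberately NOT allowed.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "list_dir": lambda path: ["a.txt", "b.txt"],
    "delete_file": lambda path: None,
}

ALLOWED_TOOLS = {"read_file", "list_dir"}   # scope: minimum tools only

def call_tool(name, *args):
    """The one gate every tool call must pass through."""
    if name not in ALLOWED_TOOLS:           # rule 2: scope the tools
        log.warning("blocked tool call: %s%r", name, args)
        raise PermissionError(f"tool {name!r} not allowed")
    log.info("tool=%s args=%r", name, args) # rule 3: log everything
    return TOOLS[name](*args)
```

If something goes wrong, the log tells you exactly which tool was called with which arguments — and the dangerous tool was never reachable in the first place.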
Capstone prep
In your capstone, you’ll design (not necessarily build) an agent for a real problem in your life. Write the four things: goal, tools, loop, stop condition. Then predict three ways it could go wrong.
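If it helps to have a worksheet, the four things plus the three predicted failures fit in one small structure. This template is a suggestion, not a required format:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Capstone worksheet: the four things, plus predicted failure modes."""
    goal: str
    tools: list
    loop: str
    stop_condition: str
    failure_modes: list = field(default_factory=list)  # predict three

# Example filled in with the flight-search goal from earlier in the lesson.
spec = AgentSpec(
    goal="Find me flights to Cleveland under $200",
    tools=["web browser", "calendar"],
    loop="search -> check price and dates -> refine the search",
    stop_condition="a matching flight found, or 10 searches tried",
    failure_modes=["picks the wrong dates", "misreads the price",
                   "keeps searching forever"],
)
```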
Related lessons
Creators · 40 min
Ollama Context Windows: Set Them Deliberately
Ollama local coding workflows often fail because the effective context is too small or too large for the hardware.
Builders · 40 min
MCP — How Agents Connect to Tools
MCP (Model Context Protocol) is a standard way for agents to safely talk to tools.
Explorers · 40 min
When Many AI Agents Team Up Like a Sports Squad
Sometimes lots of small AI agents work together, each doing one thing well.
