Installing OpenClaw And Wiring It To A Local Model
Get OpenClaw running on your machine in under fifteen minutes, paired with a local LLM via Ollama. The shape of the install matters less than what you verify after.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. What you need before you start
2. Installation
3. Ollama pairing
4. System requirements
What you need before you start
- A reasonably modern laptop — Apple Silicon, a Linux box, or Windows with WSL2
- Roughly 16 GB of RAM if you want to run a small local model alongside the agent; 8 GB works if you point at a cloud LLM instead
- Python 3.11 or newer on your PATH
- A working terminal you are comfortable in
- About 10 GB of free disk space for the framework, a model file, and a few weeks of memory growth
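Before running any installer, a thirty-second pre-flight check against the list above saves a failed install later. A minimal sketch in Python; the 16 GB and 10 GB thresholds mirror this lesson's prerequisites, and the helper names are made up for illustration:

```python
# Pre-flight check for the prerequisites listed above.
# Thresholds (3.11, ~10 GB disk) come from this lesson; adjust to taste.
import shutil
import sys

def python_ok(min_version=(3, 11)):
    # True if the running interpreter meets the lesson's minimum
    return sys.version_info[:2] >= min_version

def free_disk_gb(path="."):
    # Free space on the filesystem holding `path`, in whole gigabytes
    return shutil.disk_usage(path).free // (1024 ** 3)

print("Python OK:", python_ok())
print("Free disk (GB):", free_disk_gb())
print("Enough disk:", free_disk_gb() >= 10)
```

If either check fails, fix it before installing anything; every later step in this lesson assumes both.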
The minimal install flow
This is the shape of a clean install. Exact commands shift over time, so check the docs page first rather than memorizing URLs.
# 1. Install Ollama (the local model server) — pick the line for your OS
# macOS: brew install ollama
# Linux: curl -fsSL https://ollama.com/install.sh | sh
# Windows: download the installer from ollama.com
# 2. Pull a small, capable local model so OpenClaw has something to talk to
ollama pull qwen3:8b
# 3. Install OpenClaw — single conceptual install script
curl -fsSL https://openclaw.dev/install.sh | sh
# 4. Verify both are alive
ollama --version
openclaw --version
Wiring OpenClaw to your local model
After install, OpenClaw needs to know which model to call. Pointing it at the local Ollama server keeps everything on your machine. Pointing it at a cloud provider gives you stronger reasoning at the cost of sending prompts off-box. Most builders pick local for daily small work and keep a cloud key on standby for hard tasks.
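Before touching OpenClaw's config, you can confirm the Ollama side on its own. A hedged sketch against Ollama's HTTP API (the `/api/tags` endpoint on its default port 11434 lists pulled models); the function name is made up for illustration:

```python
# Ask the local Ollama server which models are pulled.
# Assumes Ollama's default HTTP API on port 11434, endpoint /api/tags.
import json
import urllib.request

def list_local_models(base_url="http://localhost:11434"):
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        # Each entry carries a "name" like "qwen3:8b"
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return None  # server not running; start it with `ollama serve`

print("Ollama models:", list_local_models())
```

If this returns None, fix Ollama first, because no OpenClaw setting can route around a dead model server.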
Two providers configured: a local one for daily use, a cloud fallback for the cases where the small model struggles.
# Tell OpenClaw to use a local Ollama model by default
openclaw config set provider ollama
openclaw config set model qwen3:8b
openclaw config set base_url http://localhost:11434
# Optional: keep a cloud fallback for bigger jobs
openclaw config set fallback_provider anthropic
openclaw config set fallback_model claude-sonnet-latest
# (You set ANTHROPIC_API_KEY in the environment, never in config files.)
# Sanity check — should return a one-line response
openclaw ping
Verify before you trust
1. Run `openclaw ping` and confirm the model replies; it proves the config and the model server are both alive.
2. Run `openclaw doctor` (if shipped) to surface common issues like missing Python deps or wrong permissions.
3. Look at the install path and note where souls, skills, and heartbeats will live; the default lives under `~/.openclaw/` on most systems.
4. Tail the log once with `openclaw logs --tail` so you have seen real output before something goes wrong.
5. Send one test prompt that should obviously work, like `openclaw say 'hello'`. If that fails, do not move on to building souls.
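The test-prompt step can also be run against the model server directly, bypassing OpenClaw entirely, which tells you whether a failure lives in the model or in the wiring. A sketch using Ollama's `/api/generate` endpoint with streaming off; the helper name and prompt are illustrative:

```python
# Send one tiny prompt straight to the local Ollama server so a model
# problem can be told apart from an OpenClaw config problem.
# Assumes Ollama's /api/generate endpoint on the default port 11434.
import json
import urllib.request

def smoke_test(model="qwen3:8b", base_url="http://localhost:11434"):
    payload = json.dumps({
        "model": model,
        "prompt": "Say hello in one word.",
        "stream": False,  # ask for a single JSON reply, not a stream
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp).get("response")
    except OSError:
        return None  # server down or model missing; fix Ollama first

print(smoke_test() or "Ollama did not answer; run `ollama serve` first")
```

If this replies but `openclaw say 'hello'` does not, the problem is in OpenClaw's config rather than the model.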
What to skip on day one
- Do not pull a 70B model just because you can — 8B is plenty for the hello-world phase
- Do not configure a vector database — the default file-backed store is fine for the first week
- Do not enable heartbeats yet — they are the next lesson, not the install lesson
- Do not commit the config file before sanitizing it — environment variables can leak into JSON if you copy-paste carelessly
The big idea: install once, wire to a model, and verify with a ping before you do anything else. The framework is uninteresting until the wiring works.
