Beyond The Basics: Federation, Custom Runtimes, Contributing Back
Once you trust the runtime, the next moves are scaling out (multiple machines), swapping the brain (different LLM provider), and giving back (clean upstream contributions). Each step compounds the value of the rest.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Where to go after one box
2. Federation
3. Claworc orchestration
4. Custom LLM provider
Section 1
Where to go after one box
You've deployed OpenClaw, instrumented it, and locked it down. The natural growth paths from here are sideways, not up: federate across machines so each soul lives where its data lives, swap the LLM provider so each soul uses the right brain for its work, and contribute the patches your real-world use produces back to the project so other people benefit and your fork doesn't drift forever.
Federation: multiple machines, one mission control
Real life rarely fits on one host. The inbox-triage soul should sit at home (private inbox, sensitive). The weather-summary soul wants the cheap VPS (public data, doesn't matter who reads). The on-call escalation soul belongs in the cloud (public webhook, must answer fast). Claworc — OpenClaw's orchestration layer — lets each soul live on its appropriate host while a single Mission Control pane shows them all.
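To make the placement rule concrete, here is a small illustrative helper that resolves which node should host a given soul. The types and node list mirror the manifest shown next, but the shapes are assumptions for illustration, not real Claworc APIs.

```typescript
// Illustrative only: a placement lookup over a manifest-shaped node list.
// The FedNode shape is an assumption, not the real Claworc type.
type Trust = 'high' | 'medium' | 'low';

interface FedNode {
  name: string;
  address: string;
  souls: string[];
  trust: Trust;
}

const nodes: FedNode[] = [
  { name: 'home',       address: 'home.example.com:8082',   souls: ['inbox-triage', 'finance-bookkeeper'], trust: 'high' },
  { name: 'vps-public', address: 'vps.example.com:8082',    souls: ['weather-brief', 'social-summary'],    trust: 'low' },
  { name: 'vps-fast',   address: 'oncall.example.com:8082', souls: ['escalation-router'],                  trust: 'medium' },
];

// Find the node that hosts a soul; throw if it is unplaced so the
// misconfiguration surfaces at startup rather than at runtime.
function nodeFor(soul: string, fleet: FedNode[]): FedNode {
  const node = fleet.find((n) => n.souls.includes(soul));
  if (!node) throw new Error(`soul '${soul}' is not placed on any node`);
  return node;
}
```

The point is the invariant, not the code: every soul resolves to exactly one node, and that node is chosen by data sensitivity, not convenience.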
A federation manifest. Each soul lives on a node whose trust level matches its data.
# claworc-federation.yaml — three hosts, one control plane
federation:
  control_plane: home.example.com:8081
  nodes:
    - name: home
      address: home.example.com:8082
      souls: [inbox-triage, finance-bookkeeper]  # private data
      trust: high
    - name: vps-public
      address: vps.example.com:8082
      souls: [weather-brief, social-summary]     # public data only
      trust: low
    - name: vps-fast
      address: oncall.example.com:8082
      souls: [escalation-router]                 # latency-sensitive
      trust: medium
# Souls do NOT migrate between nodes. Data sensitivity dictates placement.

Custom runtimes: swap the brain
OpenClaw's LLM provider is a pluggable interface. The default is Ollama (local). Built-in alternates include OpenAI-compatible (any provider that speaks the OpenAI API: vLLM, llama.cpp server, Together, Groq), Anthropic, and Google. You can register a custom provider by implementing the LLMBackend interface — useful for routing through a corporate AI gateway, hitting a niche provider, or instrumenting requests for billing.
Compare the options
| Provider | When to use | Watch out for |
|---|---|---|
| Ollama (local) | Privacy-critical souls, offline use, hobby cost | Local model quality varies; benchmark per-soul |
| OpenAI-compatible (vLLM, Together, Groq) | Speed-critical souls, when you want a frontier OSS model on someone else's GPU | Provider-side logging policies; rate limits |
| Anthropic / OpenAI / Google direct | Best-in-class reasoning souls, agentic souls that need tool-use quality | Cost; context-window limits; vendor lock-in if you over-tune |
| Corporate AI gateway | Compliance, central billing, model routing across teams | Added latency hop; gateway availability becomes your availability |
| Custom backend | Niche models, on-prem GPU clusters, instrumentation | You now maintain a backend; expect feature drift vs upstream |
Building a custom backend
When the built-in providers don't fit — say you're routing through an internal gateway that needs custom auth — you implement the LLMBackend interface. Three methods: `chat(messages, tools)` for the model call, `embed(texts)` for embeddings, and `health()` for the readiness probe Mission Control polls. The reference implementations (ollama-backend, openai-backend) are 200-300 lines each — short enough to read end-to-end before forking.
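As a hedged sketch of what such a backend might look like: the class below follows the three-method shape described above, but the gateway paths, env handling, and auth header are all assumptions for illustration, not part of OpenClaw.

```typescript
// mycorp.ts — illustrative gateway backend. The request/response types and
// gateway URL paths below are assumed, not real OpenClaw or gateway APIs.
interface ChatRequest { messages: { role: string; content: string }[]; tools?: unknown[] }
interface ChatResponse { content: string }
interface EmbedRequest { texts: string[] }
interface EmbedResponse { vectors: number[][] }

export class MyCorpGatewayBackend {
  name = 'mycorp-gateway';

  constructor(private opts: { endpoint: string; apiKey: string }) {}

  private headers() {
    return { Authorization: `Bearer ${this.opts.apiKey}`, 'Content-Type': 'application/json' };
  }

  async chat(req: ChatRequest): Promise<ChatResponse> {
    const res = await fetch(`${this.opts.endpoint}/v1/chat`, {
      method: 'POST', headers: this.headers(), body: JSON.stringify(req),
    });
    if (!res.ok) throw new Error(`gateway chat failed: ${res.status}`);
    return (await res.json()) as ChatResponse;
  }

  async embed(req: EmbedRequest): Promise<EmbedResponse> {
    const res = await fetch(`${this.opts.endpoint}/v1/embed`, {
      method: 'POST', headers: this.headers(), body: JSON.stringify(req),
    });
    if (!res.ok) throw new Error(`gateway embed failed: ${res.status}`);
    return (await res.json()) as EmbedResponse;
  }

  // Cheap readiness probe: one round-trip to the gateway, never throws.
  async health(): Promise<{ ok: boolean; latencyMs: number }> {
    const start = Date.now();
    try {
      const res = await fetch(`${this.opts.endpoint}/healthz`, { headers: this.headers() });
      return { ok: res.ok, latencyMs: Date.now() - start };
    } catch {
      return { ok: false, latencyMs: Date.now() - start };
    }
  }
}
```

Note that `health()` swallows network errors and reports `ok: false` instead — a readiness probe that throws takes Mission Control's polling loop down with it.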
Two-file pattern: implement the interface, register it at boot. Souls reference it by name.
// llm-backend.ts — the interface a custom provider must satisfy
export interface LLMBackend {
  name: string;                                     // 'mycorp-gateway'
  chat(req: ChatRequest): Promise<ChatResponse>;    // model call
  embed(req: EmbedRequest): Promise<EmbedResponse>; // optional, for memory
  health(): Promise<{ ok: boolean; latencyMs: number }>;
}

// Register at startup so souls can target it.
// openclaw.config.ts:
import { registerBackend } from '@openclaw/runtime';
import { MyCorpGatewayBackend } from './backends/mycorp.js';

registerBackend(new MyCorpGatewayBackend({
  endpoint: process.env.MYCORP_GATEWAY_URL!,
  apiKey: process.env.MYCORP_GATEWAY_KEY!,
}));

Contributing back: the etiquette
OpenClaw is open source. If you fix a real bug, harden a real edge case, or add a backend more people will want, send the PR. Maintainers' time is the bottleneck; a clean, narrow PR gets merged in days. A 4,000-line 'massive refactor' PR sits for months and helps no one. Contribute the way you'd want a stranger to contribute to your project.
1. Open an issue first for anything bigger than a typo. Confirm the maintainer wants the change before you write it.
2. Keep PRs scoped — one bug, one feature. If you found three bugs, send three PRs.
3. Match the project's code style and tests; don't reformat the world.
4. Write the test that fails without your fix and passes with it. PRs without tests linger.
5. Update the docs in the same PR. A feature without docs is a feature most users won't find.
6. Be patient on review. Maintainers have day jobs. A polite ping after two weeks is fine; nagging at day three is not.
Three good first contributions
- Documentation: the deployment / observability / security pages always lag the code. If you set up something undocumented, write the doc.
- A new built-in skill: clean, well-tested, useful to many. The skill registry has a contribution guide; follow it.
- Backend support for a provider that ships an OpenAI-compatible API: usually 50-150 lines of glue plus tests.
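The "glue" in that last item is mostly response reshaping. A hedged sketch, assuming a standard OpenAI-style chat-completions payload on one side and a minimal `{ content }` response shape on the other (the target shape is an assumption for illustration):

```typescript
// Illustrative only: map an OpenAI-compatible /v1/chat/completions payload
// onto a minimal ChatResponse shape (assumed here to be { content }).
interface OpenAIChatCompletion {
  choices: { message: { role: string; content: string | null } }[];
}

function toChatResponse(payload: OpenAIChatCompletion): { content: string } {
  const msg = payload.choices[0]?.message;
  if (!msg) throw new Error('provider returned no choices');
  // Some providers return null content for pure tool-call turns.
  return { content: msg.content ?? '' };
}
```

Most of the 50-150 lines ends up being edge cases like this — null content, empty choices, streaming deltas — not the happy path.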
Apply: pick one stretch this month
1. Identify the one soul whose host placement is wrong (sensitive on cheap VPS, or fast soul on a Pi). Federate it.
2. Identify one soul that's running on the wrong-cost model (cheap soul on Opus, or quality-critical soul on a 3B). Swap.
3. Identify one bug or doc gap you've already worked around in your fork. Send the PR upstream.
The big idea: federation puts each soul where it belongs, custom runtimes put each soul on the right brain, and contributing back keeps your real-world experience flowing into the project everyone benefits from.