Once you trust the runtime, the next moves are scaling out (multiple machines), swapping the brain (different LLM provider), and giving back (clean upstream contributions). Each step compounds the value of the rest.
You've deployed OpenClaw, instrumented it, and locked it down. The natural growth paths from here are sideways, not up: federate across machines so each soul lives where its data lives, swap the LLM provider so each soul uses the right brain for its work, and contribute the patches your real-world use produces back to the project so other people benefit and your fork doesn't drift forever.
Real life rarely fits on one host. The inbox-triage soul should sit at home (private inbox, sensitive). The weather-summary soul wants the cheap VPS (public data, doesn't matter who reads). The on-call escalation soul belongs in the cloud (public webhook, must answer fast). Claworc — OpenClaw's orchestration layer — lets each soul live on its appropriate host while a single Mission Control pane shows them all.
```yaml
# claworc-federation.yaml — three hosts, one control plane
federation:
  control_plane: home.example.com:8081
  nodes:
    - name: home
      address: home.example.com:8082
      souls: [inbox-triage, finance-bookkeeper]  # private data
      trust: high
    - name: vps-public
      address: vps.example.com:8082
      souls: [weather-brief, social-summary]     # public data only
      trust: low
    - name: vps-fast
      address: oncall.example.com:8082
      souls: [escalation-router]                 # latency-sensitive
      trust: medium
  # Souls do NOT migrate between nodes. Data sensitivity dictates placement.
```

A federation manifest. Each soul lives on a node whose trust level matches its data.

OpenClaw's LLM provider is a pluggable interface. The default is Ollama (local). Built-in alternates include OpenAI-compatible (any provider that speaks the OpenAI API: vLLM, llama.cpp server, Together, Groq), Anthropic, and Google. You can register a custom provider by implementing the LLMBackend interface — useful for routing through a corporate AI gateway, hitting a niche provider, or instrumenting requests for billing.
| Provider | When to use | Watch out for |
|---|---|---|
| Ollama (local) | Privacy-critical souls, offline use, hobby-scale budgets | Local model quality varies; benchmark per soul |
| OpenAI-compatible (vLLM, Together, Groq) | Speed-critical souls, when you want a frontier OSS model on someone else's GPU | Provider-side logging policies; rate limits |
| Anthropic / OpenAI / Google direct | Best-in-class reasoning souls, agentic souls that need tool-use quality | Cost; context-window limits; vendor lock-in if you over-tune |
| Corporate AI gateway | Compliance, central billing, model routing across teams | Added latency hop; gateway availability becomes your availability |
| Custom backend | Niche models, on-prem GPU clusters, instrumentation | You now maintain a backend; expect feature drift vs upstream |
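The table maps directly onto per-soul configuration: provider choice is made where the soul is defined, not once per deployment. The sketch below is illustrative only; the `provider` and `model` field names and the model tags are assumptions, not OpenClaw's documented schema.

```yaml
# souls.yaml — hypothetical per-soul provider bindings (schema illustrative)
souls:
  inbox-triage:
    provider: ollama             # private data never leaves local hardware
    model: llama3.1:8b
  escalation-router:
    provider: anthropic          # agentic routing wants frontier tool-use quality
    model: claude-sonnet-4-5
  finance-bookkeeper:
    provider: mycorp-gateway     # a custom backend, referenced by its registered name
    model: default
```

One deployment, three brains: swap a soul's provider line and nothing else about it changes.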
When the built-in providers don't fit — say you're routing through an internal gateway that needs custom auth — you implement the LLMBackend interface. Three methods: `chat` for the model call (messages plus tool schemas, bundled in a `ChatRequest`), `embed` for embeddings, and `health` for the readiness probe Mission Control polls. The reference implementations (ollama-backend, openai-backend) are 200-300 lines each — short enough to read end-to-end before forking.
```ts
// llm-backend.ts — the interface a custom provider must satisfy
export interface LLMBackend {
  name: string;                                      // e.g. 'mycorp-gateway'
  chat(req: ChatRequest): Promise<ChatResponse>;     // model call
  embed(req: EmbedRequest): Promise<EmbedResponse>;  // optional, used for memory
  health(): Promise<{ ok: boolean; latencyMs: number }>;
}

// openclaw.config.ts — register at startup so souls can target it by name
import { registerBackend } from '@openclaw/runtime';
import { MyCorpGatewayBackend } from './backends/mycorp.js';

registerBackend(new MyCorpGatewayBackend({
  endpoint: process.env.MYCORP_GATEWAY_URL!,
  apiKey: process.env.MYCORP_GATEWAY_KEY!,
}));
```

Two-file pattern: implement the interface, register it at boot. Souls reference it by name.
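The reference backends are the best starting point, but for orientation here is a minimal sketch of the first file in that pattern. It assumes the gateway speaks OpenAI-style routes (`/v1/chat/completions`, `/v1/embeddings`, `/healthz`) and that the runtime package exports the request/response types and they serialize straight to that wire format; all of those are assumptions, not OpenClaw's documented contract.

```ts
// backends/mycorp.ts — a minimal sketch, not the reference implementation.
// Assumptions: the gateway exposes OpenAI-style routes, the runtime package
// re-exports these types, and ChatRequest/ChatResponse match the wire format.
import type {
  LLMBackend, ChatRequest, ChatResponse, EmbedRequest, EmbedResponse,
} from '@openclaw/runtime';

export class MyCorpGatewayBackend implements LLMBackend {
  name = 'mycorp-gateway';

  constructor(private opts: { endpoint: string; apiKey: string }) {}

  // Shared POST helper: auth header, JSON body, fail loudly on non-2xx.
  private async post<T>(path: string, body: unknown): Promise<T> {
    const res = await fetch(`${this.opts.endpoint}${path}`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${this.opts.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });
    if (!res.ok) throw new Error(`${this.name} ${path} failed: ${res.status}`);
    return res.json() as Promise<T>;
  }

  chat(req: ChatRequest): Promise<ChatResponse> {
    return this.post<ChatResponse>('/v1/chat/completions', req);
  }

  embed(req: EmbedRequest): Promise<EmbedResponse> {
    return this.post<EmbedResponse>('/v1/embeddings', req);
  }

  // Unauthenticated reachability check; Mission Control polls this.
  async health(): Promise<{ ok: boolean; latencyMs: number }> {
    const start = Date.now();
    try {
      const res = await fetch(`${this.opts.endpoint}/healthz`);
      return { ok: res.ok, latencyMs: Date.now() - start };
    } catch {
      return { ok: false, latencyMs: Date.now() - start };
    }
  }
}
```

One design note: keeping `health()` dependency-free means Mission Control's probe reports raw gateway reachability, while auth problems surface as `chat()` errors on the souls that actually use the backend.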
OpenClaw is open source. If you fix a real bug, harden a real edge case, or add a backend more people will want, send the PR. Maintainers' time is the bottleneck; a clean, narrow PR gets merged in days, while a 4,000-line "massive refactor" PR sits for months and helps no one. Contribute the way you'd want a stranger to contribute to your project.

The big idea: federation puts each soul where it belongs, custom runtimes give each soul the right brain, and contributing back keeps your real-world experience flowing into the project everyone benefits from.
Three insights worth testing yourself on:

- Federation is for placement, not redundancy: souls never migrate between nodes; data sensitivity pins each one to a host of matching trust.
- The per-soul model choice is the killer feature of pluggable backends: one deployment, a different brain for each job.
- Don't contribute soul-specific hacks upstream: send the fixes that generalize, and keep the rest in your fork.