Lesson 73 of 2116
Tool Switching — Why You Shouldn't Marry One Model
Brand loyalty is a liability in AI. Learn the muscle memory of switching models, the signals that say 'time to swap,' and the anti-lock-in habits.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The models keep leapfrogging each other
2. Model switching
3. Vendor lock-in
4. Claude
Section 1
The models keep leapfrogging each other
In the last 12 months: Claude surged in coding. GPT-5 landed. Gemini 3 took long context. Grok 4 closed reasoning gaps. DeepSeek and Qwen pushed open-weight capability. Any 'ranking' is out of date within a quarter. This isn't just noise — it's the base rate. Plan for it.
The cost of marriage
- Your Claude Projects don't migrate to ChatGPT Projects.
- Your ChatGPT memories don't export to Claude.
- Your Gemini Workspace integrations are Google-specific.
- Custom GPTs only work inside ChatGPT.
- Each Model Context Protocol setup is provider-flavored.
- Your 'prompt library' implicitly encodes one model's quirks.
The portable layer
- Markdown notes you own (Obsidian, local files).
- PDFs and source documents in a cloud folder you control.
- A plain-text prompt library, versioned in git.
- MCP servers (provider-neutral, increasingly widely supported).
- An abstraction layer like Vercel AI Gateway, OpenRouter, or LiteLLM for code.
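To make the abstraction-layer bullet concrete, here is a minimal sketch of the core idea behind a gateway like OpenRouter or LiteLLM: map one logical task name to provider-specific model IDs, so switching providers is a one-argument change rather than a rewrite. The routing table, task names, and model IDs below are illustrative assumptions, not current SKUs or any gateway's real API.

```python
# Sketch of the abstraction-layer idea: one logical task name,
# many provider-specific model IDs. Real gateways (OpenRouter,
# LiteLLM) maintain this mapping for you; this shows the shape.
ROUTES = {
    # logical task -> per-provider model ID (illustrative names)
    "drafting": {"anthropic": "claude-sonnet-4-5", "openai": "gpt-5"},
    "long_ctx": {"google": "gemini-3", "anthropic": "claude-sonnet-4-5"},
}

def resolve(task: str, provider: str) -> str:
    """Return the provider-specific model ID for a logical task."""
    try:
        return ROUTES[task][provider]
    except KeyError:
        raise ValueError(f"no route for task={task!r} on provider={provider!r}")
```

Your application code asks for `resolve("drafting", provider)` and never hard-codes a vendor's model string, which is exactly the lock-in the list above is trying to avoid.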
Switch triggers — when to actually move
Compare the options
| Trigger | Action |
|---|---|
| New SOTA benchmark on work you actually do | A/B test for a week |
| Current provider had a major privacy incident | Move primary away, keep account for testing |
| Pricing changes unfavorably on your tier | Reprice across competitors |
| New capability only available on one provider | Add as secondary, keep primary |
| Your org mandates a specific provider | Comply; advocate internally |
| None of the above | Stay — switching costs are real |
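The "A/B test for a week" trigger can be run with a tiny blind-comparison harness. The sketch below is an assumption of mine, not a standard library: both models are plugged in as plain callables, and the judge (you, or another model) sees the two outputs in shuffled order so it cannot favor a fixed slot.

```python
import random

def ab_test(prompts, incumbent, challenger, judge, seed=0):
    """Run the same prompts through both models; the judge picks blind.

    incumbent / challenger: callables prompt -> str
    judge: callable (prompt, output_a, output_b) -> 'a' or 'b'
    Returns win counts keyed by model role.
    """
    rng = random.Random(seed)
    wins = {"incumbent": 0, "challenger": 0}
    for p in prompts:
        out_i, out_c = incumbent(p), challenger(p)
        # Shuffle slot order so the judge can't learn which slot is which.
        if rng.random() < 0.5:
            pick = judge(p, out_i, out_c)
            wins["incumbent" if pick == "a" else "challenger"] += 1
        else:
            pick = judge(p, out_c, out_i)
            wins["challenger" if pick == "a" else "incumbent"] += 1
    return wins
```

Swap in your real model calls for the callables, run your top prompts for a week, and let the tally, not brand loyalty, make the call.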
A portable prompt library
The portability pattern: plain-text prompts, thin adapters per provider, no feature that only works on one brand.
```python
# prompts/essay_polish.md is plain markdown — works anywhere
from pathlib import Path

import anthropic
import openai

prompt_template = Path('prompts/essay_polish.md').read_text()

def run(model_family, user_text):
    prompt = prompt_template.replace('{{INPUT}}', user_text)
    if model_family == 'anthropic':
        c = anthropic.Anthropic()
        r = c.messages.create(model='claude-sonnet-4-5', max_tokens=4096,
                              messages=[{'role': 'user', 'content': prompt}])
        return r.content[0].text
    if model_family == 'openai':
        c = openai.OpenAI()
        r = c.responses.create(model='gpt-5', input=prompt)
        return r.output_text
    raise ValueError(f'unknown model family: {model_family}')

# Switch providers by changing one argument.
# Your prompts, your files, your workflow — all portable.
```
The two-week switch experiment
1. Pick a challenger (say, you use Claude; try ChatGPT).
2. Copy your top 5 prompts from the incumbent.
3. Use the challenger as primary for 14 days. No cheating.
4. Track: workflow friction, output quality, features missed, features discovered.
5. End of week 2: decide to stay with the incumbent, switch fully, or split roles.
6. Repeat annually, with a different challenger each year.
The enterprise version
- Use a gateway (Vercel AI Gateway, Cloudflare AI Gateway, OpenRouter) to route.
- Negotiate portability clauses: data export, conversation history format, 90-day deletion after contract end.
- Audit where provider-specific features are baked into internal tools.
- Budget a 'model swap drill' once a year — everyone uses the challenger for a sprint.
- Treat model providers the way you treat cloud providers: important, replaceable.
A philosophy
Treat AI providers the way a good journalist treats sources — respectfully, but never with total loyalty. The moment you feel personally invested in 'Team Claude' or 'Team ChatGPT' is the moment you start losing to people who just use the best tool for the task.
“The biggest productivity loss in AI is refusing to try the other model.”
The big idea: AI tools are a fleet, not a marriage. Build portable habits, run the switch experiment on a regular cadence, and bet on open protocols. The only model you should be loyal to is your own workflow.
Related lessons
Keep going
Builders · 30 min
Claude vs. ChatGPT vs. Gemini — Side-by-Side
All three claim to be the best. Pick tasks you actually care about, run the same prompt across all three, and you'll build your own benchmark.
Creators · 38 min
Building a Personal AI Stack for School and Career
Assemble the four or five AI tools that actually belong in your daily life. A tested template for the stack that earns its keep.
Creators · 35 min
Figma AI: When Design Tools Started Designing Themselves
Figma's AI features (First Draft, Make Designs, Rename Layers) bring generative design to the industry standard. Deep dive on what it's changed and what's still a gimmick.
