Migrating Prompts From Claude/GPT To Hermes: Gotchas
Most prompts that work on Claude or GPT need adjustment to work well on Hermes. Knowing what to change — and what not to bother with — saves a week of trial and error.
Lesson map
The main moves, in order:
1. What porting really involves
2. Prompt migration
3. Format differences
4. Tool grammar
Section 1
What porting really involves
A prompt is not pure logic — it is logic plus model-specific phrasing that the original model is good at following. Migrating it is part translation, part re-tuning, part discovering which parts of the original were actually doing work. Plan for each prompt to need a few rounds of iteration before it matches the source-model output.
Common gotchas, by source model
| Source model trait | What it depends on | Hermes adjustment |
|---|---|---|
| Claude XML tags (<thinking>, <answer>) | Anthropic's tag conventions | Replace with plain prose sections and explicit format directives |
| Claude polite-and-thorough tone | Anthropic tuning | May come out shorter and blunter; tune system prompt for length |
| GPT step-by-step reasoning instructions | OpenAI's chain-of-thought training | Works on Hermes but may need explicit 'think step by step' more often |
| GPT JSON mode reliance | OpenAI strict-json infrastructure | Replace with grammar-constrained decoding or schema-with-examples |
| GPT tool-call format | OpenAI's tools schema | Convert to Hermes's documented function-call grammar |
| Memory / Custom Instructions | OpenAI's persistent state | Move stable parts into Hermes system prompt; rebuild the rest |
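As a sketch of the tool-call row, here is one way an OpenAI-style `tools` array might be converted into a Hermes-style system-prompt block. The `<tools>`/`<tool_call>` tag convention follows Hermes's documented function-calling format, but treat the exact wording as an assumption to verify against the Hermes version you deploy.

```python
import json

def gpt_tools_to_hermes_system(tools: list[dict]) -> str:
    """Convert an OpenAI-style `tools` array into a Hermes-style
    system-prompt block documenting the available functions.

    The <tools>/<tool_call> tags below follow the Hermes function-calling
    convention; confirm the exact grammar against your model's docs.
    """
    # OpenAI nests each function under {"type": "function", "function": {...}};
    # Hermes expects the function specs as JSON inside <tools> tags.
    fn_specs = [t["function"] for t in tools if t.get("type") == "function"]
    specs_json = "\n".join(json.dumps(fn, indent=2) for fn in fn_specs)
    return (
        "You have access to the following functions. To call one, respond "
        "with a <tool_call> block containing JSON of the form "
        '{"name": ..., "arguments": {...}}.\n'
        f"<tools>\n{specs_json}\n</tools>"
    )

# Example: a single OpenAI-style tool definition.
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

print(gpt_tools_to_hermes_system(openai_tools))
```

The point is less the string manipulation than the shift in contract: the schema stops being an API parameter and becomes part of the prompt itself, so it must be version-controlled and tested like any other prompt text.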
Things that usually port for free
- Plain task descriptions ('summarize this article in 100 words').
- Schema-locked output requests with one example.
- Anti-rules ('do not include code fences', 'do not preamble').
- Domain-specific glossaries and definitions.
- Few-shot examples — these often help Hermes more than they helped the source model.
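The "schema-locked output with one example" item can be made concrete with a small sketch. The helper names here are illustrative, not from any library; the pattern is simply an explicit format directive plus one worked example, with a cheap key-level check on the reply.

```python
import json

def schema_prompt(task: str, example: dict) -> str:
    """Build a schema-locked prompt: an explicit directive plus one
    worked example. This style tends to port across models unchanged."""
    return (
        f"{task}\n"
        "Respond with a single JSON object, no code fences, "
        "matching the shape of this example exactly:\n"
        f"{json.dumps(example, indent=2)}"
    )

def matches_example_keys(model_output: str, example: dict) -> bool:
    """Cheap check that a model reply has the same top-level keys as
    the example. Returns False on any parse error."""
    try:
        parsed = json.loads(model_output)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and set(parsed) == set(example)

example = {"title": "string", "summary": "string", "tags": ["string"]}
prompt = schema_prompt("Summarize the article below.", example)
print(matches_example_keys('{"title": "A", "summary": "B", "tags": []}', example))  # True
```

A check this shallow will not catch wrong value types, but it is often enough to flag the most common migration failure: the model silently dropping or renaming a field.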
Things that need real work
- Tool-using prompts — different grammar, different harness expectations.
- Long persona prompts — Hermes responds in kind, sometimes more than you wanted.
- Anything depending on memory or persistent state — you build the state layer yourself.
- Anything depending on a specific provider feature (web search, image gen, code interpreter) — those become separate components.
Migration workflow
1. Pick the highest-value prompt in your library.
2. Snapshot its current output on the source model for 10 representative inputs.
3. Translate to Hermes using the gotchas table.
4. Run on the same 10 inputs. Compare line by line.
5. Iterate on the system prompt until Hermes outputs are equivalent or you accept the gap.
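The snapshot-and-compare steps can be wired into a minimal harness. This is a sketch under assumptions: `call_source` and `call_hermes` stand in for your real client calls, and a `difflib` similarity ratio is only a rough proxy for the line-by-line reading the workflow asks for.

```python
from difflib import SequenceMatcher

def compare_outputs(inputs, call_source, call_hermes, threshold=0.8):
    """Run the same inputs through both models and flag divergent pairs.

    `call_source` / `call_hermes` are placeholders for your actual
    client calls (e.g. an OpenAI client and a local Hermes endpoint).
    """
    report = []
    for text in inputs:
        a, b = call_source(text), call_hermes(text)
        # Character-level similarity in [0, 1]; crude, but enough to
        # triage which outputs deserve a manual line-by-line read.
        ratio = SequenceMatcher(None, a, b).ratio()
        report.append({
            "input": text,
            "similarity": round(ratio, 2),
            "flagged": ratio < threshold,
        })
    return report

# Stub callables stand in for real API clients in this sketch.
def src(text: str) -> str:
    return f"Summary: {text.upper()}"

def hrm(text: str) -> str:
    return f"Summary: {text.upper()}!"

for row in compare_outputs(["hello world"], src, hrm):
    print(row)
```

Flagged pairs are where the manual comparison in step 4 pays off; unflagged pairs usually just confirm the translation held.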
Applied exercise
1. Open one prompt that you know works well on Claude or GPT.
2. Identify every line that depends on a model-specific feature, using the gotchas list.
3. Rewrite those lines for Hermes.
4. Run on three sample inputs and compare. Note which adjustments mattered most for your use case.
The big idea: porting prompts is a translation, not a copy. Mind the gotchas, evaluate honestly, and trim what is no longer carrying weight.
