Lesson 16 of 21
Meta-Prompting: AI That Writes AI Prompts
Use an AI to write, optimize, and debug your prompts. Meta-prompting is how top teams ship production prompts faster than humans alone could write them.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The bootstrap move
2. Meta-prompting
3. Prompt generation
4. Prompt optimization
Concept cluster
Terms to connect while reading
Section 1
The bootstrap move
Writing a great prompt is itself a task Claude is good at. Instead of crafting a perfect prompt by hand, describe the task to Claude and ask it to write the prompt for you. Anthropic ships an official prompt generator tool in the Console; under the hood, it's a very good meta-prompt.
A meta-prompt skeleton
A meta-prompt that produces production prompts.
You are an expert prompt engineer trained on the Anthropic prompt engineering guide.
My task:
{TASK_DESCRIPTION}
The model that will run the final prompt: Claude Sonnet 4.5.
Produce a production-ready prompt that:
1. Assigns a clear role.
2. States the task precisely.
3. Lists constraints as a numbered block.
4. Includes 2-3 few-shot examples (you may fabricate plausible ones).
5. Specifies the output format using XML tags.
6. Includes a chain-of-thought section if the task needs reasoning.
Output the prompt inside <prompt> tags and a short justification inside <rationale> tags.

Run this with a task like 'I need to classify customer support emails as billing / technical / other, and route them.' Claude will produce a multi-section prompt complete with role, examples, and XML output, often better than what a busy engineer would hand-write in 10 minutes.
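To make that concrete, here is a minimal sketch of running the skeleton through the Anthropic Python SDK. The file name meta_prompt.txt and the example task are assumptions for illustration; the calls themselves (anthropic.Anthropic, client.messages.create) are the standard SDK surface, and claude-sonnet-4-5 is the model alias.

```python
import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The skeleton above, saved with a {task} placeholder where {TASK_DESCRIPTION} was.
meta_prompt = open("meta_prompt.txt").read()

task = ("I need to classify customer support emails as "
        "billing / technical / other, and route them.")

response = client.messages.create(
    model="claude-sonnet-4-5",  # alias; pin a dated snapshot in production
    max_tokens=2048,
    messages=[{"role": "user", "content": meta_prompt.format(task=task)}],
)

reply = response.content[0].text
# The meta-prompt asked for the result inside <prompt> tags, so extract it.
match = re.search(r"<prompt>(.*?)</prompt>", reply, re.DOTALL)
generated_prompt = match.group(1).strip()
print(generated_prompt)
```

The extracted generated_prompt is what you then test against real inputs, which is exactly what the optimization loop below is for.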
Prompt optimization loop
1. Collect 10 inputs with expected outputs (your eval set).
2. Run your current prompt on all 10. Record which fail.
3. Show Claude the prompt, the failing inputs, and the expected outputs.
4. Ask: 'What changes to this prompt would likely fix these failures without breaking the passing cases?'
5. Apply the suggested changes. Re-run the eval. Iterate. (A minimal harness for this loop is sketched below.)
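Here is one way the loop might look in code. It is a sketch, not a full harness: the eval case shown is fabricated, current_prompt.txt is a hypothetical file holding your candidate prompt, and exact string match is the simplest possible grader (fine for classification, too strict for freeform output). For brevity the fix request sends only the failures; the fuller template below also includes the passing cases and asks for three revisions.

```python
import anthropic

client = anthropic.Anthropic()

def run_prompt(system_prompt: str, user_input: str) -> str:
    """Run one input through Claude under the given system prompt."""
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": user_input}],
    )
    return response.content[0].text.strip()

# Step 1: ten inputs with expected outputs.
eval_set = [
    {"input": "My card was charged twice this month.", "expected": "billing"},
    # ... nine more cases ...
]

# Step 2: run the current prompt on every case; record failures.
current_prompt = open("current_prompt.txt").read()
failures = []
for case in eval_set:
    actual = run_prompt(current_prompt, case["input"])
    if actual != case["expected"]:
        failures.append({**case, "actual": actual})

# Steps 3-4: show Claude the prompt and the failures, ask for a targeted fix.
failure_block = "\n".join(
    f"<case><input>{f['input']}</input>"
    f"<expected>{f['expected']}</expected>"
    f"<actual>{f['actual']}</actual></case>"
    for f in failures
)
fix_request = (
    f"I have this prompt:\n<prompt>{current_prompt}</prompt>\n"
    f"It fails these cases:\n<failures>\n{failure_block}\n</failures>\n"
    "What changes to this prompt would likely fix these failures "
    "without breaking the passing cases?"
)
print(run_prompt("You are an expert prompt engineer.", fix_request))
```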
Forcing three revisions (not just one) surfaces tradeoffs.
I have this prompt: <prompt>...</prompt>
It passes these cases: <passes>...</passes>
It fails these cases: <failures>
<case>
<input>...</input>
<expected>...</expected>
<actual>...</actual>
</case>
...
</failures>
Propose three distinct prompt revisions. For each:
- Explain the theory behind the change.
- Predict which failures it should fix.
- Flag any passing cases it might break.
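Claude's three revisions are hypotheses, not answers: score each one against the eval set and keep the winner. A minimal sketch, reusing run_prompt and eval_set from the harness above, and assuming you add a line like 'Wrap each full revised prompt in <revision> tags' to the request so the revisions are machine-extractable:

```python
import re

# critique_reply holds Claude's answer to the three-revisions request above.
revisions = re.findall(r"<revision>(.*?)</revision>", critique_reply, re.DOTALL)

def pass_rate(prompt: str) -> float:
    """Fraction of eval cases the prompt answers correctly."""
    passed = sum(run_prompt(prompt, c["input"]) == c["expected"] for c in eval_set)
    return passed / len(eval_set)

best = max(revisions, key=pass_rate)
print(f"Best revision passes {pass_rate(best):.0%} of the eval set")
```

If the best revision still scores below your old prompt, that is useful data too: the theory behind the change was wrong, and you can feed that observation into the next iteration.

Meta-prompt for evaluating prompts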
A scoring rubric — turn prompt engineering into a measurable discipline.
You are a senior prompt engineering reviewer.
<prompt_to_review>
{PROMPT}
</prompt_to_review>
Evaluate it on:
1. Role clarity (1-5)
2. Instruction specificity (1-5)
3. Example quality (1-5)
4. Format precision (1-5)
5. Robustness to adversarial input (1-5)
For each score under 4, explain what's missing and give a concrete fix.
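Because the rubric produces numbers, you can wire it into a pre-ship check. A sketch, again reusing run_prompt from above: review_prompt.txt is a hypothetical file holding the rubric, candidate_prompt is the prompt under review, and the regex assumes you also ask the reviewer to report each score on its own line as 'name: N/5' (the rubric as written doesn't pin down an output format, so specify one).

```python
import re

rubric = open("review_prompt.txt").read()  # the reviewer prompt above
review = run_prompt(
    "You are a senior prompt engineering reviewer.",
    rubric.replace("{PROMPT}", candidate_prompt),
)

# Expects lines like "Role clarity: 4/5" (an assumed output format).
scores = [int(n) for n in re.findall(r":\s*([1-5])\s*/\s*5", review)]
if len(scores) == 5 and all(s >= 4 for s in scores):
    print("All five dimensions scored 4 or higher; ready to ship.")
else:
    print("Prompt needs work before shipping:\n" + review)
```

Key terms in this lesson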
End-of-lesson quiz
Check what stuck
15 questions · Score saves to your progress.
Related lessons
Keep going
Creators · 38 min
Anthropic's Prompt Engineering Patterns
Anthropic publishes detailed prompt engineering guidance. Master the core patterns — Be Direct, Let Claude Think, and Chain Complex Prompts — to write production-grade prompts.
Creators · 38 min
Red-Teaming Your Own Prompts
Before shipping, attack your own prompts. Inject, confuse, overload, and role-swap. If you don't find the holes, your users will.
Creators · 75 min
Capstone: Build and Ship a Real Agent
Everything comes together. Design, code, test, secure, and ship a production-quality agent with open-source code you can fork today.
