Use an AI to write, optimize, and debug your prompts. Meta-prompting is how top teams ship production prompts faster than humans alone could write them.
Writing a great prompt is itself a task Claude is good at. Instead of crafting a perfect prompt by hand, describe the task to Claude and ask it to write the prompt for you. Anthropic ships an official prompt generator tool in the Console; under the hood, it's a very good meta-prompt.
You are an expert prompt engineer trained on the Anthropic prompt engineering guide.
My task:
{TASK_DESCRIPTION}
The model that will run the final prompt: Claude Sonnet 4.5.
Produce a production-ready prompt that:
1. Assigns a clear role.
2. States the task precisely.
3. Lists constraints as a numbered block.
4. Includes 2-3 few-shot examples (you may fabricate plausible ones).
5. Specifies the output format using XML tags.
6. Includes a chain-of-thought section if the task needs reasoning.
Output the prompt inside <prompt> tags and a short justification inside <rationale> tags.

A meta-prompt that produces production prompts.

Run this with a task like "I need to classify customer support emails as billing / technical / other, and route them." Claude will produce a multi-section prompt complete with role, examples, and XML output, often better than what a busy engineer would hand-write in 10 minutes.
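In code, using the template means filling the {TASK_DESCRIPTION} slot and pulling the <prompt> block back out of Claude's reply. A minimal sketch, assuming the tagged output format above; the helper names are illustrative, and the canned reply stands in for a real API call:

```python
import re

def build_meta_prompt(template: str, task_description: str) -> str:
    # Fill the {TASK_DESCRIPTION} slot of the meta-prompt template.
    return template.replace("{TASK_DESCRIPTION}", task_description)

def extract_tag(response: str, tag: str) -> str:
    # Pull the contents of one XML-style tag out of Claude's reply.
    match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
    return match.group(1).strip() if match else ""

# Canned reply standing in for a real API response:
reply = ("<prompt>You are a support-email triage assistant...</prompt>"
         "<rationale>Role plus routing examples.</rationale>")
print(extract_tag(reply, "prompt"))     # the generated prompt
print(extract_tag(reply, "rationale"))  # why Claude built it that way
```

Keeping extraction tag-based like this is exactly why the meta-prompt asks for <prompt> and <rationale> tags: the output becomes machine-parseable, not just human-readable.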
I have this prompt: <prompt>...</prompt>
It passes these cases: <passes>...</passes>
It fails these cases: <failures>
<case>
<input>...</input>
<expected>...</expected>
<actual>...</actual>
</case>
...
</failures>
Propose three distinct prompt revisions. For each:
- Explain the theory behind the change.
- Predict which failures it should fix.
- Flag any passing cases it might break.

Forcing three revisions (not just one) surfaces tradeoffs.

You are a senior prompt engineering reviewer.
<prompt_to_review>
{PROMPT}
</prompt_to_review>
Evaluate it on:
1. Role clarity (1-5)
2. Instruction specificity (1-5)
3. Example quality (1-5)
4. Format precision (1-5)
5. Robustness to adversarial input (1-5)
For each score under 4, explain what's missing and give a concrete fix.

A scoring rubric: turn prompt engineering into a measurable discipline.
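Both workflows above lend themselves to light tooling: rendering failing test cases into the <failures> block the debugging prompt expects, and pulling rubric scores out of the reviewer's reply so low scores feed back into the revision loop. A sketch under those assumptions; the helper names and the "criterion: n/5" score format are my own, not a fixed API:

```python
import re

def format_failures(cases: list[dict]) -> str:
    # Render failing cases as the <failures> XML block
    # the prompt-debugging template expects.
    parts = ["<failures>"]
    for case in cases:
        parts.append("<case>")
        for field in ("input", "expected", "actual"):
            parts.append(f"<{field}>{case[field]}</{field}>")
        parts.append("</case>")
    parts.append("</failures>")
    return "\n".join(parts)

RUBRIC = [
    "Role clarity",
    "Instruction specificity",
    "Example quality",
    "Format precision",
    "Robustness to adversarial input",
]

def flag_low_scores(review: str, threshold: int = 4) -> dict:
    # Pull "<criterion>: <n>" scores out of the reviewer's reply and
    # keep only the criteria that scored below the threshold.
    low = {}
    for name in RUBRIC:
        match = re.search(rf"{re.escape(name)}\D*?([1-5])", review)
        if match and int(match.group(1)) < threshold:
            low[name] = int(match.group(1))
    return low
```

The dict returned by flag_low_scores tells you exactly which criteria to carry into the next round of revisions, closing the loop between the reviewer prompt and the debugging prompt.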