Lesson 996 of 2116
Meta-Prompting and Self-Critique: AI That Improves Its Own Output
Static templates are predictable and cheap. Generated prompts adapt to context. The decision shapes maintenance burden, quality, and team workflow.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The premise
2. Pair-Programming Prompts With AI Critique
3. The premise
4. Mitigating Sycophancy in LLM Responses
Section 1
The premise
Templates and generators are different tools with different trade-offs; deliberate choice matters for production maintainability.
What AI does well here
- Use templates for stable use cases with predictable inputs (fewer variables, lower iteration cost)
- Use generators when input distribution varies widely (different customer types, industries, intents)
- Maintain both with clear ownership — bad templates and bad generators both fail silently
- Test changes to either against your eval suite before production deployment
What AI cannot do
- Eliminate prompt maintenance with either approach
- Compensate for an unclear use case with more sophisticated generation
- Make generators reliable without strong evaluation
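The template-versus-generator contrast can be sketched in a few lines of Python. This is a minimal illustration, not a recommended implementation: the support-agent scenario, template text, and function names are all invented for the example.

```python
# Static template: cheap, predictable, easy to review in a diff.
SUPPORT_TEMPLATE = (
    "You are a support agent for {product}.\n"
    "Answer the customer's question in under 100 words.\n"
    "Question: {question}"
)

def render_template(product: str, question: str) -> str:
    """Fill the static template; raises KeyError if a field is missing."""
    return SUPPORT_TEMPLATE.format(product=product, question=question)

# Generator: adapts instructions to context, at the cost of an extra
# model call and a harder-to-audit prompt.
def build_generator_request(customer_type: str, question: str) -> str:
    """Meta-prompt asking a model to WRITE the prompt for this case."""
    return (
        "Write a system prompt for a support assistant.\n"
        f"Customer segment: {customer_type}\n"
        f"They asked: {question}\n"
        "Tailor tone and level of detail to that segment."
    )

print(render_template("AcmeDB", "How do I rotate credentials?"))
```

The template path fails loudly (a missing field raises immediately); the generator path fails silently unless its output is checked against an eval suite, which is the maintenance trade-off the premise describes.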
Section 2
Pair-Programming Prompts With AI Critique
Section 3
The premise
AI critique of prompts accelerates iteration when used with discipline; without it, you get sycophantic 'looks good' answers.
What AI does well here
- Ask AI specific critique questions (clarity, completeness, edge case handling)
- Have AI generate adversarial inputs to test prompt robustness
- Have AI suggest variations and reasons for each
- Maintain human judgment on which suggestions to take
What AI cannot do
- Be trusted when it offers only a general 'looks good' assessment
- Substitute AI critique for real-data evaluation
- Generate truly novel prompt approaches via critique alone
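One way to force specific critique instead of a verdict is to bake the questions into the request. A small sketch, with hypothetical question wording:

```python
# Targeted critique questions; asking for quotes and counts makes
# a vague "looks good" answer harder to give.
CRITIQUE_QUESTIONS = [
    "Is any instruction ambiguous? Quote the exact phrase.",
    "What required information is missing from the prompt?",
    "Name two adversarial inputs that would break this prompt.",
]

def build_critique_request(prompt_under_review: str) -> str:
    """Ask for answers to each question, and forbid an overall verdict."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(CRITIQUE_QUESTIONS, 1))
    return (
        "Critique the prompt below. Answer each question specifically. "
        "Do not give an overall assessment or a compliment.\n\n"
        f"PROMPT:\n{prompt_under_review}\n\nQUESTIONS:\n{numbered}"
    )
```

The human then decides which answers to act on, which keeps judgment where the lesson says it belongs.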
Section 4
Mitigating Sycophancy in LLM Responses
Section 5
The premise
Models default to agreeable answers; explicitly instructing them to disagree when warranted improves accuracy.
What AI does well here
- Instruct the model to push back on incorrect premises.
- Reward stating uncertainty over agreeing.
- Use eval sets that test pushback quality.
What AI cannot do
- Eliminate sycophancy entirely without trade-offs.
- Detect every false premise in user input.
Section 6
Self-Critique Loops: Have the AI Grade Its Own Output
Section 7
The premise
Asking 'now find three weaknesses in your answer and fix them' often improves quality more than re-prompting from scratch.
What AI does well here
- Identify obvious flaws in its own draft when prompted.
- Apply specific revisions you ask for.
- Spot inconsistencies between earlier and later sentences.
- Tighten verbose sections on a second pass.
What AI cannot do
- Catch errors it confidently hallucinated the first time.
- Recognize subtle factual mistakes outside its knowledge.
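The critique-then-revise loop is simple to wire up. In this sketch `call_model` is any string-to-string callable you supply (an LLM API wrapper); the two instruction strings are hypothetical phrasings of the "find weaknesses, then fix them" move.

```python
from typing import Callable

def self_critique_pass(call_model: Callable[[str], str], draft: str) -> str:
    """One revise cycle: elicit three weaknesses, then a corrected rewrite.

    Two separate calls work better than one combined request, because the
    model commits to concrete weaknesses before it starts rewriting.
    """
    weaknesses = call_model(
        "List exactly three concrete weaknesses in this answer:\n" + draft
    )
    return call_model(
        "Rewrite the answer fixing these weaknesses. Keep what already works.\n"
        f"ANSWER:\n{draft}\n\nWEAKNESSES:\n{weaknesses}"
    )
```

Note the limitation from the list above still applies: if the first draft confidently hallucinated a fact, the critique pass usually inherits the same blind spot.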
Section 8
Meta-Prompting: Have AI Write Your Next AI Prompt
Section 9
The premise
AI is often better at structuring prompts than humans are. Ask it to write the prompt, then critique its own prompt, then run it.
What AI does well here
- Generate well-structured prompts from a goal description.
- Suggest variables and constraints you forgot.
- Iterate on its own prompt drafts when given feedback.
- Format prompts with clear sections.
What AI cannot do
- Know your hidden constraints or audience.
- Replace your judgment about what success means.
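The write-critique-run sequence can be expressed as a three-call pipeline. As before, `call_model` is a stand-in for whatever LLM wrapper you use, and the instruction wording is an assumption; in practice you would review each intermediate prompt before the next step.

```python
from typing import Callable

def meta_prompt_pipeline(call_model: Callable[[str], str],
                         goal: str, task_input: str) -> str:
    """Three calls: draft a prompt, self-critique it, then run it."""
    prompt_v1 = call_model(
        f"Write a prompt that achieves this goal: {goal}. "
        "Use clear sections for role, constraints, and output format."
    )
    prompt_v2 = call_model(
        "Critique and rewrite this prompt. Fix ambiguity and add missing "
        "constraints. Return only the improved prompt.\n" + prompt_v1
    )
    return call_model(prompt_v2 + "\n\nINPUT:\n" + task_input)
```

The pipeline cannot supply your hidden constraints or success criteria; those have to appear in `goal`, or the generated prompt will optimize for the wrong thing.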
Section 10
Defeating AI Sycophancy: Prompts That Get Honest Pushback
Section 11
The premise
AI defaults to agreement and praise. You must explicitly invite disagreement to get useful feedback.
What AI does well here
- Identify weaknesses when explicitly invited to.
- Disagree with stated premises if asked.
- Rate confidence honestly when prompted with calibration scales.
- Hold a counter-position when role-played as a critic.
What AI cannot do
- Override training-level agreeableness completely.
- Be reliably blunt about your bad ideas without explicit framing.
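The "explicit framing" the list calls for can be packaged as a reusable critic prompt: a skeptical persona plus a forced confidence scale. The wording below is one hypothetical phrasing, not a canonical formula.

```python
def build_critic_prompt(idea: str) -> str:
    """Frame the model as a critic with a calibration scale.

    Forcing a per-flaw confidence rating discourages both vague praise
    and reflexive, unranked negativity.
    """
    return (
        "Act as a skeptical reviewer. Your job is to find flaws, "
        "not to be encouraging.\n"
        "For each flaw, rate your confidence 1-5 "
        "(5 = certain this is a real problem).\n"
        "If the idea is genuinely sound, say so, but only after "
        "listing the risks.\n\n"
        f"IDEA:\n{idea}"
    )
```

Even with this framing, expect residual agreeableness; the prompt raises the floor on honesty, it does not remove the training-level bias.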
Related lessons
Builders · 40 min
Meta-Prompting and Advanced Techniques: AI Improves Your Prompts, Part 2
Ask AI to lay out your options as a tree of consequences.
Explorers · 40 min
When the Answer Isn't Right: Feedback, Iteration, and Trying Again, Part 2
You don't have to start over each time. Keep building like LEGO.
Creators · 40 min
Prompt Security: Injection Defense, Jailbreaks, and Refusal Design
Prompt injection isn't solvable by prompting alone. Layered defenses combine prompt design, input filtering, and output validation.
