Lesson 1052 of 1570
Iterate, Don't Restart: Debugging and Improving Prompts, Part 2
It's faster to send three OK prompts than to craft one perfect one — iteration beats premeditation.
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The big idea
- 2. The Meta Move: Asking ChatGPT to Improve Your Prompt
- 3. The big idea
- 4. Asking the AI to Rewrite Your Own Prompt
Section 1
The big idea
Newbies spend 20 minutes writing the perfect prompt. Pros write a 10-second prompt, see what comes back, and refine. The model is fast and free-ish — use that. After 3 iterations you'll have learned more about what works than after 30 minutes of overthinking the first try.
Some examples
- First try: 'Help me name my band.' Refine: 'I write punk songs about coding bugs. 10 names.'
- First try: 'Make this email better.' Refine: 'Same email, half the length, no greeting.'
- First try: 'Plan my study schedule.' Refine: 'Same plan, but add 15-min breaks every hour.'
- First try: 'Debug this.' Refine: 'Skip the explanation, just give me the fix as a diff.'
Try it!
For your next AI task, give yourself a 5-second budget for the first prompt. Refine three times. Compare to your usual.
Section 2
The Meta Move: Asking ChatGPT to Improve Your Prompt
Section 3
The big idea
Stuck on how to phrase something? Paste your draft prompt into Claude or ChatGPT and ask 'how would you improve this prompt to get better results?' The AI is genuinely good at this — it knows what kind of input it responds to. Use this once a week and your overall prompting will level up fast.
Some examples
- You paste a vague prompt; the AI suggests adding examples and a format spec.
- You ask Claude to rewrite your prompt for a different model (e.g., GPT) — it adapts the style.
- You ask 'what's missing from this prompt?' — the AI lists 3 assumptions you forgot to state.
- You ask 'rewrite as a system prompt' and get a polished, reusable version.
Try it!
Pick your most-used prompt. Ask Claude to improve it. Save the new version. Compare results for a week.
Section 4
Asking the AI to Rewrite Your Own Prompt
Section 5
The big idea
Your first draft of a prompt is rarely your best. Asking the model to rewrite it — for clarity, for specificity, for missing context — gets you a sharper version of what you meant. It's meta-prompting and it works because the model knows what the model wants.
Some examples
- You write 'help me with my essay' → ask Claude to rewrite → get a 4-point structured prompt.
- You ask ChatGPT to rewrite a vague feature request into a clear engineering ticket.
- You give Claude a half-formed instruction and ask 'what would I need to add to make this prompt unambiguous?'
- You paste a prompt and ask 'list 5 things you'd ask me to clarify before answering this.'
Try it!
Take a prompt you wrote today. Ask Claude or ChatGPT to rewrite it for clarity. Use the new version. Compare results.
Section 6
Building a Reusable Prompt Template with `{placeholders}`
Section 7
The big idea
Templating your best prompts means you tune them once and use them forever. A few `{placeholder}` slots turn a polished prompt into a tool you can run on any input.
Some examples
- A 'summarize meeting transcript' template with {transcript} and {audience} slots used weekly.
- A bug-triage template with {error} and {context} that runs in your terminal.
- An interview-prep template with {role} and {question} for daily practice.
- A code-review template with {diff} and {style_guide} you call from a git hook.
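A template like the ones above is just a string with named slots. Here is a minimal Python sketch using `str.format` to fill them; the template wording and the slot names `{audience}` and `{transcript}` are illustrative, not a fixed convention:

```python
# Reusable prompt template saved once, filled per use.
# The template text below is a sketch, not a prescribed wording.
TEMPLATE = """Summarize the meeting transcript below for {audience}.
Keep it under 150 words and end with a list of action items.

Transcript:
{transcript}
"""

def fill(template: str, **slots: str) -> str:
    """Fill every {placeholder} slot; raises KeyError if one is missing."""
    return template.format(**slots)

prompt = fill(
    TEMPLATE,
    audience="engineering leads",
    transcript="Alice: we can ship Friday. Bob: tests are still red.",
)
```

A nice side effect of `str.format` here is that forgetting a slot fails loudly with a `KeyError` instead of silently sending a prompt with a literal `{transcript}` in it.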
Try it!
Pick the prompt you use most. Replace the variable parts with {placeholders} and save it as template.md.
Section 8
Writing Tiny Prompt Evals So You Stop Guessing Whether Your Prompt Improved
Section 9
The big idea
If you can't measure whether prompt v2 is better than v1, you're tuning by vibes. A handful of test cases with expected outputs — even a tiny eval set — turns prompt engineering into actual engineering.
Some examples
- You build 5 input/expected pairs for an extraction prompt and compare two versions on accuracy.
- Promptfoo or Braintrust shows v3 is 80% accurate vs v2's 60% on your eval set.
- A simple Python script comparing model outputs to expected JSON catches a regression you'd otherwise miss.
- Running Claude against 10 test cases reveals that a prompt change broke one of them, which you fix before shipping.
Try it!
Pick a prompt you use in production. Write 5 input/expected pairs. Run two versions. Pick the winner with data.
Section 10
AI and Iterating: Edit the Prompt, Don't Start Over
Section 11
The big idea
Beginners get a bad answer and start a new chat. Pros write 'no, more like this: [example]' or 'shorter, in bullet points' and the AI adjusts in seconds. Conversations build context. The third or fourth turn is usually when the magic happens. Treat AI like a junior collaborator — give feedback, don't fire and rehire.
Some examples
- 'Shorter — 100 words max.'
- 'More casual, like talking to a friend.'
- 'Add an example for each point.'
- 'Take the last version but change X.'
Try it!
Next time AI gives a meh answer, write one feedback sentence instead of starting over. Three rounds usually nails it.
Section 12
AI and Asking for Multiple Options at Once
Section 13
The big idea
Beginners ask 'write me an Instagram caption'. Pros ask 'give me 5 caption options in different tones — funny, serious, mysterious, sincere, edgy'. Now you compare and pick — or take the best line from each. AI generates the variation cost-free; the real work is the choosing. This single trick triples the value of every creative prompt.
Some examples
- 'Give me 5 versions, each in a different tone.'
- 'Show 3 outlines with different angles.'
- 'List 10 names — I'll pick.'
- 'Two extreme versions: most formal vs most casual.'
Try it!
Next creative prompt, ask for 5 versions in different tones. The mix-and-match version usually beats any single one.
Section 14
Iterative Refinement: Getting What You Actually Want
Section 15
The big idea
The myth of the 'perfect prompt' wastes a lot of teens' time. In real use, you start with a decent prompt, see what comes back, and refine through conversation: 'shorter, more specific, drop the third example, change the tone.' Pros iterate fast and stop when good enough.
Some examples
- First reply rarely nails it — that's expected.
- Give specific feedback: 'too formal, drop the headers, keep it under 200 words.'
- Don't restart — refine the existing draft most of the time.
- Stop iterating when 'good enough,' not when 'perfect.'
Try it!
On your next AI task, count how many refinement rounds you do. Try to land it in three.
Related lessons
Keep going
Explorers · 40 min
When the Answer Isn't Right: Feedback, Iteration, and Trying Again, Part 2
You don't have to start over each time. Keep building like LEGO.
Builders · 40 min
Iterate, Don't Restart: Debugging and Improving Prompts, Part 1
Most teens scrap a bad AI answer and start over. Better: refine the answer with feedback. Way more efficient.
Explorers · 40 min
What AI Gets Wrong: Limits, Mistakes, and When to Ask a Human
AI doesn't always get it right the first time.
