Iterate, Don't Restart: Debugging and Improving Prompts, Part 1
Most teens scrap a bad AI answer and start over. Better: refine the answer with feedback. Way more efficient.
40 min · Reviewed 2026
The big idea
When AI gives a so-so answer, most teens delete and start fresh. Better approach: tell the AI what's wrong, what to fix, and what to keep. AI is better at refining a draft than at starting from scratch.
Some examples
'Make it shorter — half the length.'
'You missed the point about [X]. Add that.'
'Rewrite this more like a casual text, less formal.'
'I like the first paragraph. Change just the second one to be more emotional.'
Try it!
Next time AI gives a bad answer, instead of starting over, give specific feedback. Try 3 rounds of refinement. Compare your final answer to your starting point. Way better, right?
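The feedback loop above can be sketched in a few lines of code. This is a minimal illustration, not a real integration: `ask_ai` is a hypothetical stand-in for whatever chat tool you use, stubbed here so the example runs on its own.

```python
# Sketch of the iterate-don't-restart loop.
# `ask_ai` is a hypothetical stub; a real version would call an AI model.
def ask_ai(prompt: str) -> str:
    return f"[AI response to: {prompt}]"

draft = ask_ai("Write a two-sentence summary of the water cycle.")

# Instead of restarting, feed targeted feedback plus the current draft back in.
feedback_rounds = [
    "Make it shorter — half the length.",
    "You missed evaporation. Add that.",
    "Keep sentence 1; make sentence 2 more casual.",
]
for feedback in feedback_rounds:
    draft = ask_ai(f"{feedback}\n\nHere is the current draft:\n{draft}")

print(draft)
```

The point of the structure: each round carries forward everything the previous draft got right, instead of rolling the dice on a brand-new answer.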
Don't Restart When AI Is Off — Iterate
The big idea
When AI answers are 70% there, do not start over. Iterate — tell it what to change. Way faster than starting fresh and getting different mediocre output.
Some examples
'I like this but make it shorter.'
'Keep paragraph 1 but rewrite paragraph 2 to be more dramatic.'
'Good but too formal — make it sound more like a teen wrote it.'
'Almost there. Now add a counter-argument in the third paragraph.'
Try it!
Prompting is a skill: the more specific and structured your input, the more useful the output. When AI's answer is mostly right but a little off, iterate instead of starting over. It's way faster. Try one of these:
Apply iteration in your prompting workflow to get better results
Rewrite one of your best prompts using role + context + task + format
Ask an AI to critique your prompt and suggest improvements
Compare outputs from two models using the same prompt
Knowing when to stop iterating with AI
The big idea
AI lets you regenerate forever, which is dangerous. You can spend an hour 'improving' a paragraph that was already fine after attempt two. Builders learn to stop, not to chase perfect.
Some examples
Set a timer: max 3 prompt iterations on the intro of an essay.
If the 4th draft isn't clearly better than the 2nd, the 2nd was the answer.
Lock in a version, then move to the next section.
Ask: 'Would my teacher notice the difference?' If no, ship it.
Try it!
Next time you use AI for homework, count your iterations out loud. Stop yourself at three and look at what you have. You'll usually be done.
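The stopping rule above can be made concrete. In this toy sketch the draft scores are made up numbers; in real use, "score" is just your own judgment of each draft.

```python
# Toy illustration of the stopping rule: cap iterations, and stop
# as soon as a new draft isn't clearly better than the best so far.
# Scores are hypothetical stand-ins for "how good is this draft?"
MAX_ITERATIONS = 3
draft_scores = [6, 8, 8, 8]  # quality of drafts 1..4

best = draft_scores[0]
iterations_used = 0
for score in draft_scores[1:]:
    if iterations_used >= MAX_ITERATIONS:
        break
    iterations_used += 1
    if score <= best:
        # The 4th draft wasn't clearly better than the 2nd,
        # so the 2nd was the answer. Lock it in and move on.
        break
    best = score

print(best, iterations_used)
```

With these scores the loop locks in draft 2 after only two rounds of effort, which is exactly the "you'll usually be done at three" claim in miniature.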
Prompting AI as a rubber duck
The big idea
Coders have a trick called 'rubber duck debugging': explain your problem out loud to a rubber duck and you'll often spot the bug yourself. AI is a duck that talks back.
Some examples
Explain your code line by line to AI before asking for fixes
Explain a math problem in your own words to AI
Explain why you're stressed before asking for advice
Explain a paper outline before asking for feedback
Try it!
Take a problem you're stuck on. Explain it to AI in 3-4 paragraphs as if it knew nothing. Notice if you figure it out before AI even responds.
AI and Success Criteria: Tell AI What 'Done' Looks Like
The big idea
Success criteria are the exact things an answer must hit. Without them, AI guesses what 'done' means. With them, AI self-checks and gives you a tighter result.
Some examples
'A great answer will: (1) be under 150 words, (2) include one stat, (3) end with a question.'
'Don't stop until you have three options of different price ranges.'
'Check yourself: did you actually answer the original question?'
Give AI the rubric your teacher will grade you on.
Try it!
Take a recent prompt and add three success criteria as a checklist. Ask AI to verify each one in its answer.
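Success criteria are useful precisely because they are checkable. Here is a small sketch that turns the three example criteria above (under 150 words, includes one stat, ends with a question) into an explicit checklist; the function name and sample answer are invented for illustration.

```python
import re

# Sketch: turn success criteria into checks you (or the AI) can verify.
def check_criteria(answer: str) -> dict:
    return {
        "under_150_words": len(answer.split()) < 150,
        "includes_a_stat": bool(re.search(r"\d", answer)),
        "ends_with_question": answer.strip().endswith("?"),
    }

answer = "About 71% of Earth is covered by water. Surprising, right?"
print(check_criteria(answer))
```

Even if you never run code, writing criteria this precisely ("under 150 words", not "short") is what lets the AI self-check its own answer.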
AI and Iteration: The Magic of Saying 'Make It Better'
The big idea
Iteration prompting means treating AI's first answer as a draft, not the final. The second and third versions are usually much better. People who don't iterate get bland output and blame AI.
Some examples
'Make it punchier. Cut 30%.'
'That's too generic. Add specifics from my context.'
'Now do three more variations: serious, funny, and weird.'
'Combine the best of v2 and v4.'
Try it!
Ask AI for something. Iterate at least three times — make it shorter, weirder, sharper. Compare v1 and v4.
AI and Prompt Debugging: When the Answer's Wrong, Fix the Prompt
The big idea
Prompt debugging is the skill of asking why a bad answer is bad and fixing your prompt — not just rewording it. The bug is usually missing context, an ambiguous word, or no example.
Some examples
Bad answer? Ask AI: 'What did you assume that I didn't tell you?'
Spot ambiguous words: 'better' for who? 'short' how short?
Add the missing constraint and try again.
Test the same prompt in a fresh chat to rule out chat history confusion.
Try it!
Find a recent AI answer you didn't like. Ask AI 'what would make my prompt clearer?' Apply its suggestions and rerun.
AI and Evaluator Prompts: Make AI Grade Itself
The big idea
After AI answers, in a new turn say 'grade your last response on this rubric: clarity (1-5), accuracy (1-5), tone (1-5).' AI usually finds its own weaknesses, then you can ask it to fix them.
Some examples
Prompt: 'Score that essay 1-5 on thesis, evidence, voice. Then rewrite to fix the lowest score.'
Two-pass prompting (write then critique) outperforms one-pass.
AI is harsher on itself than you'd expect — that's useful.
You can have AI grade against the actual class rubric you paste in.
Try it!
Have AI write a paragraph, then grade and rewrite it twice. Compare draft 1 to draft 3.
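The grade-then-fix pattern can be sketched as a tiny script. The rubric scores here are hypothetical; in practice the AI produces them when you paste in the actual rubric, and you then aim the rewrite at the weakest dimension.

```python
# Sketch of the grade-then-fix pattern: find the lowest rubric score
# and build a targeted rewrite prompt. Scores are hypothetical.
rubric_scores = {"thesis": 4, "evidence": 2, "voice": 3}

weakest = min(rubric_scores, key=rubric_scores.get)
fix_prompt = (
    f"Your {weakest} scored {rubric_scores[weakest]}/5. "
    f"Rewrite the essay to improve the {weakest}, keeping everything else."
)
print(fix_prompt)
```

This is the two-pass idea in one line of logic: critique first, then point the rewrite at the specific weakness the critique found.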
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-prompting-iterate-better
In the context of AI prompting, what does 'iteration' mean?
Waiting 24 hours before asking AI the same question again
Asking AI to generate completely new content from scratch
Making targeted improvements to an AI response based on feedback rather than starting fresh
Repeating the same prompt multiple times until AI randomly gives a good answer
You ask an AI to write a joke about homework, and it produces a weak, unfunny joke. What is the best approach?
Tell the AI what's wrong (it's not funny) and what to improve (add a pun or wordplay)
Report the AI as broken
Ask the AI to write 10 jokes at once to improve your chances
Delete the response and ask again with the same prompt
Which of these feedback statements would most likely lead to a significantly improved AI response?
'This is wrong. Try again.'
'I don't like this. Do it differently.'
'This isn't what I wanted.'
'Make it shorter — about half the length — and use simpler words.'
A student tells their AI: 'This introduction is boring. Make it better.' Why might this feedback fail to produce improvement?
The feedback is vague — 'boring' and 'better' don't tell the AI what concrete changes to make
The AI cannot understand the word 'better' as a concept
The student should have started over instead
The feedback is too specific and limits the AI's options
What is the main advantage of iterative prompting over starting over with a fresh prompt?
Starting over guarantees a perfect answer on the second try
AI cannot improve responses that already contain errors
Iteration uses more computational power, forcing better results
Iteration builds on what's already good in the response, saving time and effort
You receive an AI-generated summary that has good content but is too long. What feedback would be most effective?
'Cut this in half. Keep only the key points about [specific topic].'
'This is too long. Make it shorter somehow.'
'I don't need a summary this long.'
'Delete the first paragraph and rewrite the rest.'
A user tells an AI: 'I like the first paragraph. Change just the second one to be more emotional.' This demonstrates which principle?
Being mean to the AI to get better results
Telling the AI it did a bad job overall
Providing feedback about what works AND what needs changing
Ignoring parts of the response that are already good
Which statement about 'specific feedback' is most accurate?
Specific feedback tells the AI exactly what to change and how
Specific feedback confuses the AI with too many details
Specific feedback only works for factual errors, not style issues
Specific feedback means using more negative words
Why does saying 'try again' to an AI typically produce vague improvements?
The AI has already learned everything from the first attempt
The phrase doesn't tell the AI what was wrong or what to change
The AI intentionally ignores this phrase
The phrase triggers a programming error
What should you tell an AI when its response is close to what you want but misses one important point?
Ask it to make the response 10 times longer
Start over with a completely new prompt
Tell it exactly what point it missed and ask it to add that
Report that the AI failed
A friend says they always delete bad AI responses and start fresh because 'the first answer is always wrong anyway.' What would you tell them?
They are right — AI always gives wrong answers on the first try
They should try giving specific feedback instead, because iteration often produces better results than starting over
They should ask AI to confirm it will do a good job before generating
They should only use AI for simple tasks
What does 'refinement' mean in the context of working with AI?
Making an AI respond faster
Gradually improving a response through multiple rounds of feedback
Filtering out incorrect AI answers
Training a custom AI model from scratch
You ask AI to write code, but it uses a programming approach you haven't learned yet. What feedback helps?
'Write simpler code.'
'This is wrong because I don't understand it.'
'Your code is too advanced for me.'
'Rewrite this using only [specific concept] that I've learned in class.'
In the lesson's examples, which type of feedback was NOT given to improve an AI response?
Adjusting the tone or style
Making something shorter
Adding missed content
Completely changing the topic
What does 'efficiency' mean in the context of iterative prompting?
Getting better results with less effort by building on partial success rather than starting over