Process Supervision: Grading the Work, Not the Answer
Most training grades only the final answer. Process supervision grades each reasoning step. That small change produced some of the biggest honesty gains in recent years: math problem-solving accuracy jumped substantially over outcome-only training (the headline result of OpenAI's 2023 "Let's Verify Step by Step" paper), and the model was more honest about its own mistakes.
Lesson map
What this lesson covers, in order:
1. Answer vs. reasoning
2. Process supervision
3. Process reward models (PRMs)
4. Chain of thought
Section 1: Answer vs. Reasoning
If a math student guesses the right number with bad reasoning, outcome-graded training rewards the guess. Process supervision grades each step: was the setup correct, was the arithmetic correct, was the final step justified? Wrong steps are penalized even if the final answer is right.
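To make the contrast concrete, here is a minimal Python sketch. The `Step` class and both grading functions are hypothetical stand-ins, not any real library's API: in practice the `correct` flags come from human raters or a learned reward model, not from code.

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str
    correct: bool  # rater's judgment: was this step valid?

def outcome_reward(final_answer: str, target: str) -> float:
    """Outcome supervision: one signal, based only on the final answer."""
    return 1.0 if final_answer == target else 0.0

def process_rewards(steps: list[Step]) -> list[float]:
    """Process supervision: one signal per reasoning step."""
    return [1.0 if step.correct else -1.0 for step in steps]

# A lucky guess: broken reasoning, right answer.
solution = [
    Step("Let x be the apples, so x + 3 = 10", correct=True),
    Step("Therefore x = 10 + 3 = 13", correct=False),  # arithmetic error
    Step("So the answer is 7", correct=False),         # unjustified jump
]

print(outcome_reward("7", target="7"))  # 1.0 -- the guess is rewarded anyway
print(process_rewards(solution))        # [1.0, -1.0, -1.0] -- the bad steps are penalized
```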
Why it helps alignment, not just accuracy
- Reasoning becomes legible: you can inspect the chain
- The model can't easily hide a lie in a confident final answer
- Errors become debuggable — you know which step broke
- Sycophancy gets harder: a flattering conclusion built on wrong steps gets caught (see the reranking sketch after this list)
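One common way a PRM is used is to rerank candidate solutions (best-of-N). The sketch below assumes a hypothetical per-step scorer that returns, for each step, a probability that the step is correct. Aggregating by product means one bad step sinks the whole solution, so a confident finish cannot rescue broken reasoning.

```python
import math

def solution_score(step_probs: list[float]) -> float:
    """Aggregate per-step correctness probabilities into one solution score.
    The product is unforgiving: one low step drags the whole score down."""
    return math.prod(step_probs)

# Hypothetical PRM outputs for two candidate solutions.
candidates = {
    "careful solution":           [0.95, 0.90, 0.92],  # score ~0.79
    "confident, one broken step": [0.97, 0.15, 0.99],  # score ~0.14
}

best = max(candidates, key=lambda name: solution_score(candidates[name]))
print(best)  # "careful solution" wins despite the other's confident finish
```

The product is one aggregation scheme from the PRM literature (it is the one used in "Let's Verify Step by Step"); taking the minimum over steps is another common choice with the same one-bad-step-sinks-you property.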
The limits
1. Step labels are expensive: humans must read and judge every step.
2. Fuzzy domains are hard: what counts as a correct step in an essay?
3. Models can still generate plausible-looking wrong steps that slip past raters.
4. It does not guarantee a faithful chain of thought: the model may reason one way and write another.
The big idea: grading reasoning changes what the model learns to optimize. It is a small change to the training loop with an outsized effect on honesty and debuggability.
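As a sketch of just how small that change is, the loss below is the same binary cross-entropy in both cases; only where the labels attach differs. All numbers here are illustrative, not from any real dataset.

```python
import math

def bce(pred: float, label: float) -> float:
    """Binary cross-entropy for a single prediction."""
    eps = 1e-9
    return -(label * math.log(pred + eps) + (1 - label) * math.log(1 - pred + eps))

# Outcome supervision: one label for the whole solution.
outcome_loss = bce(pred=0.8, label=1.0)

# Process supervision: a label for every step, so the gradient points
# at the exact step that broke.
step_preds  = [0.9, 0.2, 0.7]  # model's per-step correctness scores
step_labels = [1.0, 1.0, 0.0]  # rater's per-step judgments
process_loss = sum(bce(p, y) for p, y in zip(step_preds, step_labels)) / len(step_labels)

print(f"outcome loss: {outcome_loss:.3f}")  # one coarse signal
print(f"process loss: {process_loss:.3f}")  # credit assignment per step
```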
