The Supervised Learning Loop
Most modern AI is trained on a loop of guess, check, and adjust. Understand the loop and you understand the heart of machine learning.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Training is a loop
2. Supervised learning
3. Loss
4. Gradient descent
Section 1
Training Is a Loop
Here is the secret recipe behind almost every modern AI model. You feed it an example, let it make a guess, compare the guess to the right answer, then nudge the model to be a little more right next time. Repeat millions of times.
The four steps
1. Forward pass: the model looks at the input and makes a prediction
2. Compute loss: measure how wrong the prediction is
3. Backward pass: figure out which weights caused the error
4. Update: adjust those weights a tiny bit to reduce the error
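The four steps above can be sketched in a few lines of code. This is a minimal illustration, not a real training system: a one-weight model `y = w * x` trained with squared error, where the data, learning rate, and step count are all made-up values.

```python
# A minimal sketch of the four-step loop for a one-weight model y = w * x.
# The examples, learning rate, and iteration count are illustrative only.

def train(examples, lr=0.01, steps=1000):
    w = 0.0  # start with an arbitrary weight
    for _ in range(steps):
        for x, target in examples:
            pred = w * x                     # 1. forward pass: make a prediction
            loss = (pred - target) ** 2      # 2. compute loss: squared error
            grad = 2 * (pred - target) * x   # 3. backward pass: d(loss)/dw
            w -= lr * grad                   # 4. update: nudge w to reduce the error
    return w

# The hidden pattern in this toy data is y = 3x, so w should settle near 3.
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(examples)
```

Run enough loops and `w` lands very close to 3: the model was never told the rule, it just kept getting nudged toward it.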
Why tiny steps
The model could make a huge jump each round, but that usually ends badly. Big jumps overshoot the right answer. Tiny steps let the model sneak up on the right solution without bouncing past it.
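You can watch the overshoot happen with a toy example. Here we minimize `f(w) = (w - 5)^2`, whose best answer is `w = 5`; the two step sizes are made-up values chosen to show the contrast.

```python
# Illustrative sketch (values made up): minimizing f(w) = (w - 5)^2,
# whose minimum is at w = 5, with two different step sizes.

def descend(lr, steps=20, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 5)  # derivative of (w - 5)^2
        w -= lr * grad
    return w

small = descend(lr=0.1)  # tiny steps: sneaks up on w = 5
big = descend(lr=1.1)    # big jumps: bounces past 5, farther each time
```

With the small step size, `w` ends up near 5. With the big one, each update flies past the target and lands farther away than it started, so the error grows instead of shrinking.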
Epochs and batches
- A batch is a small group of examples shown together
- Going through all training data once is one epoch
- Big models train for many epochs on enormous batches
- Picking the right batch size and learning rate is part art, part science
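The batch-and-epoch bookkeeping amounts to two nested loops. A sketch, with a made-up dataset of ten examples and a batch size of four:

```python
# A sketch of how training data is organized into batches and epochs.
# The dataset contents and batch size are made up for illustration.

def batches(data, batch_size):
    """Yield successive groups of batch_size examples (last one may be smaller)."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

data = list(range(10))  # pretend these are 10 training examples

for epoch in range(3):               # each epoch = one full pass over the data
    for batch in batches(data, 4):   # each batch = a small group shown together
        pass  # forward pass, loss, backward pass, and update would happen here
```

Three epochs over ten examples with a batch size of four means nine updates in total: three per pass, with the last batch holding the two leftover examples.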
After enough loops, the weights settle into values that produce good predictions. The model is not memorizing each example. It is finding the pattern that fits the whole pile.
“The art of training is to stop just before the model gets too clever for its own good.”
The big idea: training is a feedback loop. Guess, measure the error, adjust, repeat. Everything else in machine learning is a detail on top of that loop.
Related lessons
Keep going
Builders · 30 min
Tokens and Embeddings: How AI Reads Words
AI does not read letters. It reads tokens, which live as vectors in a space of meaning. Learn how text becomes numbers you can do math on.
Builders · 35 min
Neural Networks, Actually Explained
You have heard the term a thousand times. Now let's actually look inside: neurons, weights, activations, and what happens in a single pass.
Builders · 30 min
Is the Model Reasoning or Pattern Matching?
The line between deep reasoning and clever pattern recognition is blurry. Here's how researchers try to tell them apart.
