Lesson 200 of 1570
Conditional Probability (and the Monty Hall Problem)
A famous game show riddle teaches the single most important idea in Bayesian reasoning.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. When New Information Changes the Answer
2. Conditional probability
3. Bayes' theorem
4. The Monty Hall problem
Concept cluster
Terms to connect while reading
Section 1
When New Information Changes the Answer
Conditional probability is the probability of A given that B happened, written P(A|B) and computed as P(A|B) = P(A and B) / P(B). It is the engine of Bayesian reasoning and the key to one of the most famous probability puzzles in history.
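The definition is just counting: restrict attention to the outcomes where B happened, then ask what fraction of those also satisfy A. A minimal sketch with two dice (the events chosen here are illustrative, not from the lesson):

```python
from fractions import Fraction

# Sample space: all ordered rolls of two fair dice
space = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# Event A: the sum is 8; event B: the first die shows at least 4
A = {s for s in space if s[0] + s[1] == 8}
B = {s for s in space if s[0] >= 4}

# P(A|B) = P(A and B) / P(B), computed by counting outcomes
p_a_given_b = Fraction(len(A & B), len(B))
p_a = Fraction(len(A), len(space))

print(p_a)          # 5/36 -- unconditional
print(p_a_given_b)  # 1/6  -- learning B changed the probability
```

Knowing that the first die shows at least 4 shrinks the sample space from 36 outcomes to 18, and the probability of an 8 shifts from 5/36 to 3/18. That shift is the whole subject.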
The Monty Hall problem
You are on a game show. Three doors: one hides a car, two hide goats. You pick door 1. The host, who knows what is behind each door, opens door 3 to reveal a goat. Should you switch to door 2?
Why switching works
1. Before any door opens, your door has a 1/3 chance of hiding the car, and the other two doors together have 2/3.
2. The host's reveal is not random: he always opens a goat door from the unpicked pair.
3. That reveal transfers the full 2/3 probability of the unpicked pair onto the single remaining door.
4. So the remaining unpicked door has probability 2/3; your original door still has 1/3.
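The argument above can be made exact with Bayes' theorem. A minimal sketch: multiply the prior for each car location by the likelihood that the host opens door 3, then renormalize (the assumption that a host with a free choice picks at random is standard but worth stating):

```python
from fractions import Fraction

# Prior: car equally likely behind doors 1, 2, 3; you picked door 1.
prior = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}

# Likelihood that the host opens door 3, given where the car is:
#   car behind 1: host picks door 2 or 3 at random -> 1/2
#   car behind 2: host must open door 3            -> 1
#   car behind 3: host never reveals the car       -> 0
likelihood = {1: Fraction(1, 2), 2: Fraction(1, 1), 3: Fraction(0, 1)}

# Bayes: posterior(door) is proportional to prior(door) * likelihood(door)
unnorm = {d: prior[d] * likelihood[d] for d in prior}
total = sum(unnorm.values())
posterior = {d: unnorm[d] / total for d in unnorm}

print(posterior[1])  # 1/3 -- staying
print(posterior[2])  # 2/3 -- switching
```

Notice that door 1's posterior equals its prior: the host's action told you nothing new about your own door, only about the unpicked pair.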
Run the simulation yourself — the numbers are brutal
Simulated 1 million games:
Stay strategy: ~333,000 wins (33.3%)
Switch strategy: ~667,000 wins (66.7%)
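A simulation like the one summarized above can be sketched in a few lines (this is one possible implementation, not the lesson's own code; 100,000 games per strategy is plenty to see the gap):

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """Play one Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a goat door that is neither the pick nor the car
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Move to the one door that is neither the pick nor the opened door
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)  # seeded for reproducibility
n = 100_000
stay_wins = sum(play(False, rng) for _ in range(n))
switch_wins = sum(play(True, rng) for _ in range(n))

print(f"stay:   {stay_wins / n:.3f}")    # close to 0.333
print(f"switch: {switch_wins / n:.3f}")  # close to 0.667
```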
The math is correct. Your intuition is not.
Why this matters for AI
Every time an AI system updates its belief based on new evidence — retrieved documents, user feedback, observations — it is doing conditional probability. Understanding Monty Hall protects you from the very human instinct to ignore prior information when new data arrives.
“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”
Key terms in this lesson
The big idea: probabilities update when evidence arrives, but they update by multiplying, not replacing. Remembering that is half of statistical reasoning.
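"Update by multiplying, not replacing" can be shown directly: each piece of evidence multiplies the prior by a likelihood, and the old belief never disappears. A minimal sketch with a hypothetical biased-coin example (the 3/4 bias and the three flips are illustrative assumptions, not from the lesson):

```python
from fractions import Fraction

# Two hypotheses about a coin: fair, or biased 3/4 toward heads
prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}

def update(belief, flip):
    """One Bayesian update: multiply by the likelihood, then renormalize."""
    unnorm = {h: belief[h] * (p_heads[h] if flip == "H" else 1 - p_heads[h])
              for h in belief}
    total = sum(unnorm.values())
    return {h: unnorm[h] / total for h in unnorm}

belief = prior
for flip in "HHH":  # observe three heads in a row
    belief = update(belief, flip)

print(belief["biased"])  # 27/35 -- strong evidence, but the prior still matters
```

Three heads push the belief toward "biased", yet never to certainty: the 1/2 prior is still a factor in the product, which is exactly the instinct Monty Hall is meant to train.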
End-of-lesson quiz
Check what stuck
15 questions · Score saves to your progress.
Related lessons
Keep going
Builders · 30 min
The Supervised Learning Loop
Most modern AI is trained on a loop of guess, check, and adjust. Understand the loop and you understand the heart of machine learning.
Builders · 30 min
Tokens and Embeddings: How AI Reads Words
AI does not read letters. It reads tokens, which live as vectors in a space of meaning. Learn how text becomes numbers you can do math on.
Builders · 35 min
Neural Networks, Actually Explained
You have heard the term a thousand times. Now let's actually look inside: neurons, weights, activations, and what happens in a single pass.
