How Large Language Models Actually Work
A teen-friendly explanation of what's really happening inside ChatGPT, Claude, and Gemini.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The big idea
2. Large language model
3. Token
4. Training data
Section 1
The big idea
Underneath all the magic, an LLM is a system that takes your text, breaks it into tokens, and predicts what tokens come next based on patterns it learned from huge amounts of text. Understanding this one fact explains why AI is brilliant at some things and weirdly bad at others.
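That two-step loop — break text into tokens, predict what comes next — can be sketched in a few lines of Python. Everything below is invented for illustration: real models use learned subword tokenizers and billions of weights, not a word splitter and a hand-written lookup table.

```python
def tokenize(text):
    # Real tokenizers split text into subword pieces; splitting on spaces
    # is a simple stand-in that shows the idea.
    return text.lower().split()

# Pretend "weights": for each token, the token that most often followed it
# in our imaginary training text. (Entirely made up for this sketch.)
NEXT_TOKEN_TABLE = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def predict_next(tokens):
    # A real LLM looks at the whole context window; this toy version
    # only looks at the single most recent token.
    return NEXT_TOKEN_TABLE.get(tokens[-1], "<unknown>")

tokens = tokenize("The cat")
print(tokens)                # ['the', 'cat']
print(predict_next(tokens))  # sat
```

Feed the predicted token back in as input, repeat, and you have text generation — that loop really is most of the trick.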
Some examples
- LLMs predict the next token, then the next, then the next — that's basically the whole trick.
- 'Training' means showing the model billions of text examples and adjusting its weights.
- 'Fine-tuning' is extra training to make it follow instructions or be safer.
- RLHF (reinforcement learning from human feedback) is how models learn what humans prefer.
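To make "training means showing examples and adjusting numbers" concrete, here is a toy version where the adjustable numbers are just bigram counts. Real training tunes billions of weights with gradient descent, and the training text here is invented, but the spirit is the same: show the model text, and its numbers shift to match the patterns it saw.

```python
from collections import Counter, defaultdict

# Our entire (invented) "training data".
training_text = "the cat sat on the mat the cat sat"

# "Training": count which token follows which.
counts = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    counts[current][nxt] += 1

def generate(start, n_tokens):
    # Greedy decoding: always pick the most common next token.
    out = [start]
    for _ in range(n_tokens):
        followers = counts[out[-1]]
        if not followers:
            break  # never saw anything follow this token
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))  # the cat sat on the
```

Notice the model can only echo patterns from its training text — which hints at why LLMs are brilliant on well-covered topics and weirdly bad outside them.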
Try it!
Open any chatbot and ask it to explain its own architecture in one paragraph. Notice what it gets right and what's vague.
Related lessons
Keep going
Builders · 22 min
Train Your Tiny Classifier
Teach a mini-AI to tell fruits from vegetables, one example at a time.
Builders · 40 min
Why AI 'Forgets' Halfway Through a Long Chat
AI has a memory limit called the context window. Hitting it explains a LOT of weird behavior.
Builders · 40 min
AI and tokens vs words: why your prompt costs what it costs
Learn what a token actually is so you can predict cost and context limits.
