A teen-friendly explanation of what's really happening inside ChatGPT, Claude, and Gemini.
7 min · Reviewed 2026
The big idea
Underneath all the magic, an LLM is a system that takes your text, breaks it into tokens, and predicts what tokens come next based on patterns it learned from huge amounts of text. Understanding this one fact explains why AI is brilliant at some things and weirdly bad at others.
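To make that one fact concrete, here is a toy sketch of next-token prediction. This is not how a real LLM works inside — real models use neural networks, not counting — but the shape of the idea is the same: learn patterns from text, then predict what token comes next. All names here (`corpus`, `follows`, `predict_next`) are made up for this example.

```python
# Toy sketch of next-token prediction (NOT a real LLM):
# split text into tokens, count which token follows which,
# then predict the most likely next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# "Training": count how often each token follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Pick the follower seen most often during "training".
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (it followed "the" twice, "mat" only once)
```

A real LLM does this same job with billions of learned weights instead of a lookup table, which is what lets it handle sentences it has never seen before.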
Some examples
LLMs predict the next token, then the next, then the next — that's basically the whole trick.
'Training' means showing the model billions of text examples and adjusting its weights.
'Fine-tuning' is extra training to make it follow instructions or be safer.
RLHF (reinforcement learning from human feedback) is how models learn what humans prefer.
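The first point above — predict the next token, then the next, then the next — is literally a loop: predict one token, append it to the text, and feed the longer text back in. Here is a hedged sketch of that loop, again using a toy counting model in place of a real neural network; the names (`generate`, `follows`) are invented for illustration.

```python
# Toy sketch of autoregressive generation: predict one token,
# append it, and feed the longer text back in — repeat.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_tokens, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_tokens):
        options = follows[out[-1]]
        if not options:  # never saw anything follow this token
            break
        tokens, weights = zip(*options.items())
        # Sample the next token in proportion to how often it
        # followed the current token in the training text.
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Real chatbots do the same thing at a vastly larger scale, and the sampling step is why you can get a different answer when you ask the same question twice.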
Try it!
Open any chatbot and ask it to explain its own architecture in one paragraph. Notice what it gets right and what's vague.
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-how-llms-actually-work-teens-final2-teen
What is the core idea behind "How Large Language Models Actually Work"?
A. A teen-friendly explanation of what's really happening inside ChatGPT, Claude, and Gemini.
B. Apply budgeting in your foundations workflow to get better results
C. decoding speed
D. sessions
Which term best describes a foundational idea in "How Large Language Models Actually Work"?
A. token
B. large language model
C. training data
D. Apply budgeting in your foundations workflow to get better results
A learner studying How Large Language Models Actually Work would need to understand which concept?
A. large language model
B. training data
C. token
D. Apply budgeting in your foundations workflow to get better results
Which of these is directly relevant to How Large Language Models Actually Work?
A. large language model
B. token
C. Apply budgeting in your foundations workflow to get better results
D. training data
Which of the following is a key point about How Large Language Models Actually Work?
A. LLMs predict the next token, then the next, then the next — that's basically the whole trick.
B. 'Training' means showing the model billions of text examples and adjusting its weights.
C. 'Fine-tuning' is extra training to make it follow instructions or be safer.
D. RLHF (reinforcement learning from human feedback) is how models learn what humans prefer.
Which of these does NOT belong in a discussion of How Large Language Models Actually Work?
A. 'Training' means showing the model billions of text examples and adjusting its weights.
B. Apply budgeting in your foundations workflow to get better results
C. 'Fine-tuning' is extra training to make it follow instructions or be safer.
D. LLMs predict the next token, then the next, then the next — that's basically the whole trick.
What is the key insight about "It's prediction, not understanding" in the context of How Large Language Models Actually Work?
A. Apply budgeting in your foundations workflow to get better results
B. decoding speed
C. Knowing LLMs are predicting patterns explains both their strengths and their failures.
D. sessions
Which statement accurately describes an aspect of How Large Language Models Actually Work?
A. Apply budgeting in your foundations workflow to get better results
B. decoding speed
C. sessions
D. Underneath all the magic, an LLM is a system that takes your text, breaks it into tokens, and predicts what tokens come next based on patterns it learned from huge amounts of text.
What does working with How Large Language Models Actually Work typically involve?
A. Open any chatbot and ask it to explain its own architecture in one paragraph. Notice what it gets right and what's vague.
B. Apply budgeting in your foundations workflow to get better results
C. decoding speed
D. sessions
Which best describes the scope of "How Large Language Models Actually Work"?
A. It is unrelated to foundations workflows
B. It focuses on a teen-friendly explanation of what's really happening inside ChatGPT, Claude, and Gemini.
C. It applies only to the opposite beginner tier
D. It was deprecated in 2024 and no longer relevant
Which section heading best belongs in a lesson about How Large Language Models Actually Work?
A. Apply budgeting in your foundations workflow to get better results
B. decoding speed
C. Some examples
D. sessions
Which section heading best belongs in a lesson about How Large Language Models Actually Work?
A. Apply budgeting in your foundations workflow to get better results
B. decoding speed
C. sessions
D. Try it!
Which of the following is a concept covered in How Large Language Models Actually Work?
A. large language model
B. token
C. training data
D. Apply budgeting in your foundations workflow to get better results