Companies retrain AI on their own data — that's fine-tuning, and it's different from prompting.
Fine-tuning is taking a base AI and retraining it on specific data so it specializes. That's how lawyers, doctors, and coders end up with custom AI tools.
Build a Custom GPT (included with a paid ChatGPT plan) for one school subject. Notice how detailed instructions change its behavior.
If you want AI to know your company docs or personal notes, you can either fine-tune (slow, expensive) or use RAG (fast, cheap). AI can explain when to use which.
Try a free RAG tool like NotebookLM. Upload 5 docs and ask questions across them all.
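Under the hood, RAG just means "find the most relevant document, then paste it into the prompt." Real tools like NotebookLM use embeddings for the retrieval step, but a crude word-overlap version shows the idea. Everything in this sketch (the function names, the sample docs) is illustrative, not a real tool's API:

```python
# Minimal RAG sketch: retrieve the most relevant doc, then stuff it into the prompt.
# Real RAG systems use embedding similarity instead of word overlap,
# but the pipeline shape (retrieve, then prompt) is the same.

def score(question, doc):
    """Count how many question words appear in the doc (a crude relevance score)."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def build_rag_prompt(question, docs):
    """Pick the best-matching doc and wrap it into a prompt for the AI."""
    best_doc = max(docs, key=lambda d: score(question, d))
    return f"Using only this document:\n{best_doc}\n\nAnswer: {question}"

docs = [
    "Photosynthesis converts sunlight into chemical energy in plants.",
    "The French Revolution began in 1789 with the storming of the Bastille.",
]
prompt = build_rag_prompt("When did the French Revolution begin?", docs)
print(prompt)
```

No retraining happens anywhere in that loop, which is exactly why RAG is fast and cheap compared to fine-tuning: the model never changes, only the prompt does.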
Fine-tuning means taking a pre-trained model and continuing its training on your own data so it specializes in something. It used to be the only way to customize AI, but in 2026, with system prompts, RAG, and large context windows, fine-tuning is rarely needed: a good prompt plus uploaded examples usually achieves the same result. Fine-tune only when you need a specific output format every time, you have thousands of examples, and prompting plus RAG aren't getting you there. For 99% of teen AI projects, prompt + RAG wins.
Try few-shot prompting: instead of asking ChatGPT 'write me a haiku,' show it 5 of your favorite haikus first, then ask. Quality jumps. That's why fine-tuning is rarely needed — examples in the prompt do most of the work.
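Few-shot prompting is mechanically simple: you prepend your examples to the request before sending it. A minimal sketch of building such a prompt (the helper name and haikus are made up for illustration, not any real library's API):

```python
# Few-shot prompting sketch: show the AI examples of the style you want,
# then ask for the new piece. Only the prompt text changes; the model doesn't.

def few_shot_prompt(examples, task):
    """Build a prompt that lists style examples before the actual request."""
    shots = "\n\n".join(f"Example {i + 1}:\n{ex}" for i, ex in enumerate(examples))
    return f"{shots}\n\nNow, in the same style: {task}"

favorite_haikus = [
    "An old silent pond\nA frog jumps into the pond\nSplash! Silence again.",
    "Light of the moon\nMoves west; flowers' shadows\nCreep eastward.",
]
prompt = few_shot_prompt(favorite_haikus, "write a haiku about homework")
print(prompt)
```

You'd paste the resulting text into ChatGPT (or send it via an API). The examples live in the prompt, not in the model's weights, which is the whole contrast with fine-tuning.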
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-foundations-AI-and-fine-tuning-vs-prompting
What does it mean to 'fine-tune' an AI model?
A hospital wants an AI that can read X-rays better than a general-purpose AI. What should they do?
Why is fine-tuning usually expensive?
What are Custom GPTs primarily designed to do?
For a middle school student who wants to use AI to help with homework, what does the lesson recommend?
What does it mean for an AI to be 'specialized'?
A law firm wants an AI that understands legal terminology and can draft contracts. They should:
Why might someone choose prompting over fine-tuning?
The lesson describes Custom GPTs as 'soft' fine-tuning. What does this mean?
What happens to an AI model during the fine-tuning process?
A student builds a Custom GPT for studying biology. They add more detailed instructions about what kind of answers they need. What happens?
Which scenario best describes fine-tuning?
The lesson notes that most teen-friendly use of AI is prompting, not fine-tuning. Why?
What is a key reason companies invest in fine-tuning their own AI models?
If you wanted to create an AI that writes code, what approach would you take?