Compare Four Models
Run the same prompt through Claude, GPT, Gemini, and a local model.
You’ve heard the names: Claude, ChatGPT, Gemini. But you’ve probably only used one of them at a time. In this lesson, you’ll compare model answers deliberately, using the AI tools you already have access to plus, if you run one, a local model.
The comparison method
Pick one clear prompt, paste it into each of the models you can reach (Claude, GPT, Gemini, and a local model if you run one), and save the answers side by side. You are looking for differences in tone, confidence, detail, citations, and how each model handles uncertainty.
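If you'd rather script the comparison than copy-paste, here is a minimal Python sketch that sends one prompt to all four models and saves the answers side by side. It assumes you have API keys for each provider set in your environment and a local Ollama server for the Llama model; the model names are placeholders, so substitute whatever you actually have access to.

```python
import os
import requests
from openai import OpenAI
import anthropic
import google.generativeai as genai

PROMPT = "Explain how a toaster works to a 7-year-old."

def ask_gpt(prompt: str) -> str:
    # Official OpenAI SDK; reads OPENAI_API_KEY from the environment.
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any chat model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    # Official Anthropic SDK; reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_gemini(prompt: str) -> str:
    # google-generativeai SDK; needs a GOOGLE_API_KEY environment variable.
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
    return model.generate_content(prompt).text

def ask_local(prompt: str) -> str:
    # Ollama's local HTTP API; assumes `ollama serve` is running
    # and a Llama model has already been pulled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
    )
    return resp.json()["response"]

answers = {
    "GPT": ask_gpt(PROMPT),
    "Claude": ask_claude(PROMPT),
    "Gemini": ask_gemini(PROMPT),
    "Llama (local)": ask_local(PROMPT),
}

# Write everything into one Markdown file so the answers sit side by side.
with open("comparison.md", "w") as f:
    f.write(f"# Prompt\n\n{PROMPT}\n\n")
    for name, answer in answers.items():
        f.write(f"## {name}\n\n{answer}\n\n")
```

Drop any helper you can't run; even two columns in the file is enough to start seeing the differences.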
What to notice
- Tone. Claude tends to be the most measured; Gemini is often chattier; GPT-4 is somewhere in between; a small local model like Llama may be blunter.
- Refusals. Ask a tricky-but-fine question (for example, how lock picking works, which is perfectly legal to learn about) and see which model over-refuses.
- Errors. Try a question you already know the answer to. Does any model hallucinate confidently?
Try these prompts
- “Explain how a toaster works to a 7-year-old.”
- “Write a haiku about a quiet library.”
- “List three things you’re not sure about, and why.”
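If you scripted the comparison, you can run all three prompts in one pass by reusing the `ask_*` helpers from the sketch above (again, hypothetical names):

```python
# Reuses the ask_* helpers defined in the earlier sketch (hypothetical names).
prompts = [
    "Explain how a toaster works to a 7-year-old.",
    "Write a haiku about a quiet library.",
    "List three things you're not sure about, and why.",
]

models = {"GPT": ask_gpt, "Claude": ask_claude,
          "Gemini": ask_gemini, "Llama (local)": ask_local}

for prompt in prompts:
    print(f"\n=== {prompt} ===")
    for name, ask in models.items():
        print(f"\n--- {name} ---\n{ask(prompt)}")
```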