Builders · Ages 11–13
Compare Claude, ChatGPT, Gemini, and Flux side-by-side. Learn prompt engineering and catch hallucinations.
Meet your guide: Wren — a sharp raven
Chapters
Modules · 18
You cannot understand modern AI without understanding its diet. Let's map where the data comes from, how it gets cleaned, and what that means.
Writing a test first is not just good engineering. It is the clearest possible prompt for an AI. Let's use tests to make AI code reliable.
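The test-first idea above can be sketched in a few lines. Write the tests before any implementation exists, then hand them to the AI: its code either passes or it doesn't. The `slugify` function here is a made-up example, not from any lesson.

```python
# Tests written first act as a precise, checkable prompt:
# "write slugify so these assertions pass."

def slugify(title: str) -> str:
    # A minimal implementation an AI might produce to satisfy the tests.
    cleaned = "".join(ch if ch.isalnum() else " " for ch in title.lower())
    return "-".join(cleaned.split())

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_spaces():
    assert slugify("  AI   for  kids ") == "ai-for-kids"

test_slugify_basic()
test_slugify_collapses_spaces()
```

The tests, not the English description, are the contract: if the AI's code fails one, you know exactly where it went wrong.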
Refactoring means changing code without changing behavior. That used to be scary. With tests and AI together, it becomes routine.
Claude.ai and the Anthropic API both run Claude. So why do they cost different amounts? Pull apart the two doors into the same model.
Artifacts is Claude's canvas. Charts, code, docs, and interactive React components render live next to the chat.
Deep Research is Gemini's multi-step research agent. You ask a question; it plans, searches, reads, synthesizes, and delivers a report.
Upload a PDF, a set of docs, or a research paper. NotebookLM produces a two-host podcast conversation about the material.
A Space is a bookmarked, collaborative research context. Your sources, your prompts, your team — all persistent.
Midjourney is the artists' favorite. FLUX.2 Pro is the API-native challenger. Here is which one to pick depending on what you are making.
Three command-line coding agents, three flavors. Which one belongs in your terminal? Install all three on a weekend and decide for yourself, but here is the cheat sheet.
Every coder uses AI now. The skill is learning to code WITH AI from day one, not letting AI code for you.
Stats is 10 percent concepts and 90 percent careful arithmetic. AI is shockingly good at the arithmetic, which frees you to actually think about the concepts.
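A tiny sketch of that division of labor: the machine grinds through the arithmetic, and your job is to interpret what the numbers mean. The scores below are invented for illustration.

```python
# The arithmetic side of stats: tedious by hand, trivial for a machine.
from statistics import mean, stdev

scores = [72, 85, 90, 64, 88, 79]  # made-up quiz scores

m = mean(scores)   # sum / n
s = stdev(scores)  # sample standard deviation

print(round(m, 2), round(s, 2))  # prints: 79.67 10.09
```

The interesting question is the concept, not the computation: is a spread of about 10 points large for a quiz out of 100? That part is still on you.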
GitHub Copilot was the first AI coding assistant at scale. Look at what it is great at, where Cursor and Claude Code have passed it, and whether the $10 subscription still makes sense.
v0 by Vercel generates working React and Next.js code from prompts. Look at what it nails, what it still gets wrong, and why it's changed how startup MVPs get built.
Benchmarks are how AI progress gets measured. Understanding them is the first step in reading any AI claim.
When you change a prompt, how do you know the new version is actually better? A/B testing is the honest answer.
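One honest way to run that comparison, sketched below under the assumption that you have already graded each prompt's outputs as good (1) or bad (0). The grades are invented, and the permutation test is one simple choice among many for checking whether the gap could be luck.

```python
# A/B test of two prompt versions using graded outputs (1 = good, 0 = bad).
import random

random.seed(0)
grades_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # prompt A: 10 graded outputs
grades_b = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]  # prompt B: 10 graded outputs

rate_a = sum(grades_a) / len(grades_a)  # 0.7
rate_b = sum(grades_b) / len(grades_b)  # 0.8

# Permutation test: shuffle all grades together many times and see how
# often a random split produces a gap at least as big as the observed one.
observed = rate_b - rate_a
pooled = grades_a + grades_b
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[10:]) / 10 - sum(pooled[:10]) / 10
    if diff >= observed:
        hits += 1

print(rate_a, rate_b, round(hits / trials, 2))
```

If random shuffles beat the observed gap often, the "better" prompt may just have gotten lucky samples, which is exactly the trap A/B testing exists to catch.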
Stable Diffusion, Midjourney, and DALL-E all trace back to LAION, an open dataset of 5 billion image-text pairs. It changed AI and started a legal storm.
The imitation game became famous, but most AI researchers now think it measures the wrong thing.