Builders · Ages 11–13
Compare Claude, ChatGPT, Gemini, and Flux side by side. Learn prompt engineering and catch hallucinations.
Meet your guide: Wren — a sharp raven
Chapters
Modules · 90
Instead of describing what you want, show the AI two or three examples. Few-shot prompting is often the fastest way to get consistent output.
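The idea above can be sketched as plain string assembly; the synonym task, the example pairs, and the `build_few_shot_prompt` helper are all invented for illustration, not part of any real API:

```python
# A minimal few-shot prompt: show the pattern twice, then ask for a third.
# No API call here -- just assembling the text an AI model would receive.
examples = [
    ("happy", "joyful"),
    ("fast", "quick"),
]

def build_few_shot_prompt(examples, query):
    """Turn (input, output) pairs plus a new input into one prompt string."""
    lines = ["Give a one-word synonym."]
    for word, synonym in examples:
        lines.append(f"Word: {word}\nSynonym: {synonym}")
    lines.append(f"Word: {query}\nSynonym:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "large")
print(prompt)
```

The prompt ends mid-pattern on purpose: the model's most likely continuation is a third synonym in the same format.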
Every AI conversation has two layers: a system prompt that sets the rules, and user prompts you type. Understanding the difference is the gateway to building AI-powered tools.
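The two layers can be shown in the message-list shape most chat APIs share; the tutor instruction and the `system_prompt` helper below are made up for the demo:

```python
# One "system" message sets the rules; "user" messages are what you type.
messages = [
    {"role": "system",
     "content": "You are a patient math tutor. Never give the final answer outright."},
    {"role": "user", "content": "What is 12 x 13?"},
]

def system_prompt(messages):
    """Pull out the rule-setting layer from a message list."""
    return [m["content"] for m in messages if m["role"] == "system"]

print(system_prompt(messages)[0])
```

The user never sees the system layer in a chat window, but every reply is shaped by it.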
When your prompt feeds into code, you need machine-readable output. JSON mode and XML tags make the AI's response parseable instead of loose prose.
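A small sketch of why this matters; both model replies below are hand-written stand-ins for real API responses:

```python
import json

# Loose prose needs fragile string surgery; JSON drops straight into a dict.
prose_reply = "Sure! The capital of France is Paris, founded long ago."
json_reply = '{"country": "France", "capital": "Paris"}'

data = json.loads(json_reply)        # one line, no guesswork
print(data["capital"])

# The prose version forces brittle parsing like this,
# which breaks the moment the model rephrases its answer:
capital_guess = prose_reply.split(" is ")[1].split(",")[0]
print(capital_guess)
```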
Bad output is almost never random. It's a clue. Here's how to diagnose and fix a broken prompt instead of just mashing the regenerate button.
AI-assisted coding is not magic and not cheating. It is a new way of working where a model drafts, you decide. Let's draw a map before we start building.
Let's actually feel what autocomplete is like. Write a comment, pause, and watch a full function appear. Then learn what to do next.
A prompt that writes a poem is not the same as a prompt that ships working code. Code has hidden standards. You need to make them explicit.
The AI will hand you code that looks right but isn't. Here are the most common bugs and the habits that catch them before they bite.
Bugs are where AI is most useful and most humbling. Paste errors, ask for causes, run experiments, and learn how to get a real answer instead of a guess.
Writing a test first is not just good engineering. It is the clearest possible prompt for an AI. Let's use tests to make AI code reliable.
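A sketch of the test-first idea; `slugify` is a hypothetical target function, and the implementation stands in for the draft you would ask an AI to write:

```python
import re

# The test is an unambiguous spec: paste it into a prompt and the AI
# knows exactly what "done" means.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("A--B") == "a-b"

def slugify(text: str) -> str:
    """Lower-case, trim, and collapse separators into single hyphens."""
    text = text.strip().lower()
    return re.sub(r"[\s-]+", "-", text)

test_slugify()  # raises AssertionError if the draft misses the spec
```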
Most of a developer's life is reading code someone else wrote. AI is astonishing at this. Here's how to get fast, honest explanations of unfamiliar code.
Refactoring means changing code without changing behavior. That used to be scary. With tests and AI together, it becomes routine.
Time to get hands-on. Install a real AI coding editor, sign in, and write your first line together. No credit card required to start.
Git is a time machine for your code. Before we ship anything, let's learn the three commands that matter and what they actually do under the hood.
Let's make something real. A single-page site with HTML, CSS, and a little interactivity. You plan, the AI drafts, you review and ship.
Bring it all together. Pick one of three starter projects, plan it, build it with AI, and deploy it. You are now a builder who ships.
A chatbot answers. An agent does. Learn the line between a model that talks and a model that acts — and why crossing it changes everything about how you work with AI.
Every agent — fancy or simple, local or cloud — boils down to four parts. Learn the recipe and you can read any agent system like a menu.
Follow a real agent run step by step — from prompt to result — and see exactly what happens inside. No code yet, just the anatomy of a successful task.
Agents fail in weird, quiet, expensive ways. Learn the six failure modes, the warning signs, and the simple habits that catch problems before they compound.
Giving an AI the keys to your computer is a big deal. Learn the two simplest ways to keep an agent safe: wall it off from things it shouldn't touch, and put a human in the decision path.
Agents are only as useful as their tools. Tour the big three — filesystem, browser, code execution — plus the emerging MCP ecosystem, with examples of what each unlocks.
Your data can live in someone's data center or on your own laptop. Both are real options in 2026. Understand what you gain and lose with each.
OpenClaw is open-source software that runs agents on your own machine — no cloud dependency, your data stays put. A tour of why it exists and how its pieces fit together.
Ollama turns 'I want to run an LLM locally' into a one-line install and a two-word command. Here's the stack, the key commands, and the models worth pulling first.
No code. Just design. Pick a real task you do every week and draft a complete agent spec — goal, tools, loop, stop, approvals, and what success looks like.
US Copyright Office in 2026: works created purely by AI aren't copyrightable. Works with enough human creative control might be. Here's where the line sits right now.
Your first end-to-end AI-assisted creative project. Plan it, make it, and reflect on what surprised you. Small scope, real output.
Claude.ai and the Anthropic API both run Claude. So why do they cost different amounts? Pull apart the two doors into the same model.
Every big AI has a free version. Stack them side by side and learn where each one runs out of gas.
All three claim to be the best. Pick tasks you actually care about, run the same prompt across all three, and you'll build your own benchmark.
When the question is 'what happened this week?' or 'what does this paper say?', Perplexity is often the right answer. Here is why.
Grok is the odd one out — baked into X, trained on live posts. Sometimes that's a superpower, and sometimes it's a liability.
Voice interfaces flipped from gimmick to genuinely useful. Learn what each top voice mode feels like and when to pick which.
AI in your browser turns every webpage into something you can interrogate. Learn which extension to install, and why that access needs trust.
Artifacts is Claude's canvas. Charts, code, docs, and interactive React components render live next to the chat.
Deep Research is Gemini's multi-step research agent. You ask a question; it plans, searches, reads, synthesizes, and delivers a report.
v0 by Vercel turns a prompt, screenshot, or Figma file into a working Next.js app deployed in one click.
Upload a PDF, a set of docs, or a research paper. NotebookLM produces a two-host podcast conversation about the material.
A Space is a bookmarked, collaborative research context. Your sources, your prompts, your team — all persistent.
Opus is the flagship, Sonnet is the workhorse. Here is the five-minute decision tree for when to pay 2x more for Opus and when Sonnet handles it.
Three command-line coding agents, three flavors. Which one belongs in your terminal? Install all three on a weekend and decide for yourself, but here is the cheat sheet.
GPT-5.5 is the hard-problem default; GPT-5.4 mini is the cost-sensitive workhorse. Learn when quality is worth the extra latency and tokens.
xAI's code-specialist model ships strong benchmarks. Here is how it actually feels in a real IDE.
Meta's Llama 4 family splits into Scout (lean) and Maverick (flagship). Here is how to choose between them for self-hosted work.
Codestral 25 is Mistral's dedicated coding model. Small, fast, and cheap enough to run as an inline autocomplete.
Codestral Mamba ditches transformers for a state-space model. The result: linear-time long-context coding at a fraction of the attention cost.
Qwen 3 Coder is the open-weights coding specialist from Alibaba. Strong benchmarks, good IDE ergonomics, and cheap to run.
Every coder uses AI now. The skill is learning to code WITH AI from day one, not letting AI code for you.
Past the beginner phase, English learners need targeted grammar practice. AI shows you your exact mistakes without embarrassment.
Past the basics, dyslexic students can use AI for deep work: reading papers, writing essays, and asking for accommodations that work.
Stats is 10 percent concepts and 90 percent careful arithmetic. AI is shockingly good at the arithmetic, which frees you to actually think about the concepts.
Grammarly went from grammar checker to full AI writing assistant. Honest look at what it catches, what it misses, and whether you still need it in the Claude era.
Notion AI lives inside the Notion workspace you already use. Look at whether it's worth the extra $10/month or a waste when you have ChatGPT open in another tab.
Canva bolted AI onto the world's most popular design app. It is intentionally un-flashy, which is why 185 million people use it monthly.
Otter invented the AI meeting assistant category in 2016. It has been lapped by rivals but still has the cheapest starting tier and the largest user base.
Fathom gives you unlimited meeting recording, transcription, and AI summaries for free. Look at why it's eating Otter's lunch and what the paid tier adds.
Granola listens to your computer audio instead of joining as a bot. Look at why that design choice changed the meeting-notes category: with no bot in the meeting, attendees never know AI is listening, which matters for sensitive deals.
GitHub Copilot was the first AI coding assistant at scale. Look at what it is great at, where Cursor and Claude Code have passed it, and whether the $10 subscription still makes sense.
v0 by Vercel generates working React and Next.js code from prompts. Look at what it nails, what it still gets wrong, and why it's changed how startup MVPs get built.
Replit Agent builds a full working app inside Replit's cloud IDE. Look at what you can actually ship with it and when it falls apart.
ChatGPT Projects organize chats by topic, with shared files and custom instructions. Look at what they actually change in how you work.
ChatGPT Memory lets the model remember facts about you across conversations. Look at what it remembers, what it misses, and the privacy tradeoffs.
Custom GPTs let you package ChatGPT with instructions, files, and tools. Look at whether anyone actually uses them outside of demos.
Claude Projects are simpler than ChatGPT Projects but work better for teams. Look at what's included, what's missing, and why many people prefer them.
Claude Artifacts show generated code, docs, and HTML in a live side panel. Look at how it changed what people build with Claude.
Perplexity gives you AI answers with source citations. Honest look at whether it beats ChatGPT with browsing and what the $20 Pro tier actually adds.
NotebookLM turns your documents into an AI tutor that only answers from your sources. Look at why its audio overviews went viral and where it still falls short.
Jasper was a $1B+ company before ChatGPT existed. Look at whether marketing teams still pay $49+/month when Claude does most of what Jasper does for $20.
Copy.ai started as a copywriting tool and pivoted to sales/GTM automation. Look at the new product and whether marketers still have a reason to use it.
ProWritingAid is Grammarly's biggest competitor, aimed more at long-form writers. Look at what it catches that Grammarly misses and whether it's worth switching. In 2024 it added AI rewriting and now in 2026 has a full AI writing coach mode.
Captions turns a phone recording into a polished short video with auto-captions, B-roll, and AI edits. Look at what it nails and the limits of its one-tap workflow.
Variables are named boxes for data. You'll write your first ten, then use AI to decode error messages and grow your intuition for types.
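The "named boxes" idea in a handful of lines; the names and values are invented for the demo:

```python
# Bind a value to a name, reuse it, then re-bind it.
score = 0              # an int goes in one box
player = "Ada"         # a str in another
score = score + 10     # take the value out, add, put it back

message = f"{player} has {score} points"
print(message)
```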
If-statements and loops are where programs come alive. You'll write both kinds, then see where AI autocomplete helps and where it lies.
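The two ideas working together, in a toy example chosen for this sketch:

```python
# A loop visits every character; an if decides something about each one.
word = "raven"
vowels = 0
for ch in word:         # loop: one pass per character
    if ch in "aeiou":   # if: decide per character
        vowels += 1
print(vowels)  # -> 2
```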
A function is a reusable chunk of code with a name. You'll write three, add type hints, and let AI suggest better names and docstrings.
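Three small functions in the shape the lesson describes, with type hints and docstrings; the names are illustrative, not from the course:

```python
def greet(name: str) -> str:
    """Return a one-line greeting."""
    return f"Hello, {name}!"

def area(width: float, height: float) -> float:
    """Area of a width x height rectangle."""
    return width * height

def is_even(n: int) -> bool:
    """True when n is divisible by 2."""
    return n % 2 == 0

print(greet("Wren"), area(3.0, 4.0), is_even(7))
```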
Lists are ordered rows; dicts are labeled lookups. You'll use both to solve a real problem, and catch the mistakes autocomplete makes.
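The same toy data both ways, to make the ordered-rows vs labeled-lookups contrast concrete:

```python
# A list keeps order and is indexed by position;
# a dict is looked up by key, not position.
scores_list = [("Ada", 91), ("Grace", 87)]
scores_dict = {"Ada": 91, "Grace": 87}

first_player = scores_list[0][0]   # positional access
ada_score = scores_dict["Ada"]     # keyed access

print(first_player, ada_score)
```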
A CLI quiz app: Claude generates questions on any topic, you answer, it grades. Teaches prompts, loops, and keeping state.
Variables, loops, and functions are the atoms of Python. Let an AI help you write them while you learn what each line actually does.
Lists hold ordered items. Dicts hold keyed pairs. Comprehensions make both sing. Learn the core patterns AI will push you toward.
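The three comprehension patterns an assistant reaches for constantly, on invented data:

```python
# Map, filter, and dict-building, each in one expression.
words = ["raven", "owl", "wren", "kestrel"]

lengths = [len(w) for w in words]            # map: word -> length
short = [w for w in words if len(w) <= 4]    # filter: keep short names
by_word = {w: len(w) for w in words}         # dict comprehension

print(lengths, short, by_word["wren"])
```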
Reading and writing files is where real scripts start. Learn the with-statement, path handling, and JSON round-trips.
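All three pieces in one round-trip; the settings data and filename are invented, and `tempfile` keeps the demo self-contained:

```python
import json
import tempfile
from pathlib import Path

# The with-statement closes the file even if an error hits mid-write;
# Path handles separators across operating systems.
settings = {"theme": "dark", "volume": 7}

path = Path(tempfile.gettempdir()) / "settings_demo.json"
with path.open("w") as f:
    json.dump(settings, f)      # Python dict -> JSON text on disk

with path.open() as f:
    loaded = json.load(f)       # JSON text -> Python dict again

print(loaded == settings)       # the round-trip preserved the data
```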
TypeScript is JavaScript with types. Learn how `strict` mode catches bugs at compile time and how AI writes cleaner types than you might alone.
SELECT, WHERE, JOIN, GROUP BY. Four keywords run the data world. AI is excellent at SQL because it has read every StackOverflow answer ever.
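All four keywords in one query, run against an in-memory SQLite database from Python's standard library; the tables and rows are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE players (id INTEGER, name TEXT);
    CREATE TABLE scores  (player_id INTEGER, points INTEGER);
    INSERT INTO players VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO scores  VALUES (1, 10), (1, 15), (2, 30);
""")

rows = con.execute("""
    SELECT p.name, SUM(s.points) AS total   -- SELECT: choose columns
    FROM players p
    JOIN scores s ON s.player_id = p.id     -- JOIN: line up the tables
    WHERE s.points > 5                      -- WHERE: filter rows first
    GROUP BY p.name                         -- GROUP BY: one row per player
    ORDER BY total DESC
""").fetchall()

print(rows)  # -> [('Grace', 30), ('Ada', 25)]
```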
The terminal is where real work happens. Pipes, variables, and loops in bash are a superpower — and AI is surprisingly good at shell one-liners.
A paper without code is often a paper without truth. Papers With Code is a community-maintained site that pairs AI papers with their open-source implementations and benchmark results, linking claims to runnable proof.
Benchmarks are how AI progress gets measured. Understanding them is the first step in reading any AI claim.
When you change a prompt, how do you know the new version is actually better? A/B testing is the honest answer.
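The honest comparison in miniature; `grade` here is a hypothetical stand-in for a real evaluation (human or automated), and the sample outputs are invented:

```python
# Run both prompt versions on the same inputs, score every output the
# same way, and compare pass rates instead of eyeballing one reply.
def grade(output: str) -> bool:
    """Toy pass/fail check: did the output stay under 5 words?"""
    return len(output.split()) < 5

results_a = ["short answer", "this one rambles on far too long", "ok"]
results_b = ["yes", "brief reply", "tiny"]

rate_a = sum(grade(o) for o in results_a) / len(results_a)
rate_b = sum(grade(o) for o in results_b) / len(results_b)

print(f"A: {rate_a:.0%}  B: {rate_b:.0%}")
```

With only a handful of samples a gap like this proves nothing; the real lesson is running enough trials that the difference survives chance.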
If your sample is skewed, your conclusion is skewed. Here is how to spot it.
Results tables are where papers make their case. Here is how to decode one in under five minutes.
The imitation game became famous, but most AI researchers now think it measures the wrong thing.
Red-teamers try to make models misbehave before bad actors do. Here is how the job works, who does it, and what they look for.