New · guided experience
A curated walkthrough of the library — ordered lessons, a 15-question quiz after each one, and three next steps so you stay in flow. Earn XP, badges, and a streak as you go.
Library · 6440 lessons · Professional view
Search below, or pick a shelf. Kids and teens get age-tier shelves; College+ and Career+ organize the same library around life stage and situation.
Lessons handpicked for the Professional shelf.
Your data can live in someone's data center or on your own laptop. Both are real options in 2026. Understand what you gain and lose with each.
No code. Just design. Pick a real task you do every week and draft a complete agent spec — goal, tools, loop, stop, approvals, and what success looks like.
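The spec fields this lesson names can be sketched as a simple structure. A minimal sketch — the field values and tool names below are hypothetical examples, not a standard schema:

```python
# Hypothetical agent spec: the keys mirror the lesson's checklist
# (goal, tools, loop, stop, approvals, success); values are examples.
agent_spec = {
    "goal": "Triage inbound support email into labeled folders",
    "tools": ["read_inbox", "apply_label", "draft_reply"],  # allowed actions
    "loop": "fetch next unread message, classify, act",     # the repeated step
    "stop": "inbox empty or 50 messages processed",         # termination rule
    "approvals": ["draft_reply"],  # actions a human must sign off on
    "success": "95% of messages labeled correctly in a weekly spot check",
}

# A complete spec answers every one of these questions before any code exists.
required = {"goal", "tools", "loop", "stop", "approvals", "success"}
assert required <= agent_spec.keys()
```

Writing the spec first forces the hard decisions — especially which actions need approval — while they are still cheap to change.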
Bugs are where AI is most useful and most humbling. Paste errors, ask for causes, run experiments, and learn how to get a real answer instead of a guess.
Bring it all together. Pick one of three starter projects, plan it, build it with AI, and deploy it. You are now a builder who ships.
Fresh Professional lessons added to the library.
Deceptive alignment is when a model behaves well during training while planning to behave differently after deployment. Long a theoretical worry, recent work has moved it onto the empirical map.
When AI can read documents and act on them, hidden instructions become attacks. Here is what prompt injection is and why nobody has fully solved it.
Red-teamers try to make models misbehave before bad actors do. Here is how the job works, who does it, and what they look for.
Benchmarks measure what you ask. Red-teaming measures what breaks. Learn to test for failure modes, not capabilities: harmful outputs, jailbreaks, bias, training-data leakage, and dangerous capabilities.
Browse everything
Subject tracks
Tap a tile to filter the library — or pick “Surprise me” below for a randomized starter set.
Five image models, five personalities. Here's when each one is the right pick — in 2026, with current strengths, costs, and quirks.
Your first end-to-end AI-assisted creative project. Plan it, make it, and reflect on what surprised you. Small scope, real output.
v0 by Vercel turns a prompt, screenshot, or Figma file into a working Next.js app deployed in one click.
Training a frontier model uses the electricity of a small city for months. Running inference at scale matches a large country's load. Here is what the numbers actually look like.
Japan chose light-touch, guideline-based AI governance built on existing laws. Understanding why illuminates a real alternative to comprehensive AI acts.
Beyond the basics, dyslexic students can use AI for deep work: reading papers, writing essays, and asking for accommodations that work.
Two frontier models, same subscription price, very different personalities. Pick by vibe, not by benchmark — here is how to figure out which one clicks for you.
xAI's Grok 4.1 Fast has the biggest context window on the market at the cheapest price. Here is when that matters more than raw reasoning quality.
ElevenLabs voices can be indistinguishable from human speech. That is a feature and a fraud vector. Here is the production checklist before you clone anyone.
Haiku is Anthropic's cheap, fast tier. Here is the math on when it beats Sonnet for production workloads.
Extended thinking makes Opus smarter but burns hidden tokens. Here is how to budget it without blowing your bill.
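The budgeting the lesson promises boils down to one line of arithmetic. A back-of-envelope sketch — the per-token prices are placeholders, not Anthropic's actual rates, and it assumes (as the lesson implies) that hidden thinking tokens are billed like output tokens:

```python
# Placeholder pricing in USD per million tokens -- NOT real rates.
PRICE_IN, PRICE_OUT = 15.0, 75.0

def request_cost(input_tokens, visible_output, thinking_tokens):
    """Thinking tokens never appear in the reply but are still billed
    at the output rate, so they dominate the bill if left uncapped."""
    billed_output = visible_output + thinking_tokens
    return (input_tokens * PRICE_IN + billed_output * PRICE_OUT) / 1_000_000

# Same 800-token reply; a 20k-token thinking budget multiplies the cost.
lean = request_cost(2_000, 800, 0)       # 0.09
deep = request_cost(2_000, 800, 20_000)  # 1.59
```

The practical move is to cap the thinking budget per request and raise it only where measured quality actually improves.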
Google gives Flash away on a generous free tier. Here is how to extract real production value without paying a cent.
Gemini Ultra on Vertex unlocks extended context and enterprise controls. Here is what you get for moving up-tier.
Mistral Small is the right open-weights model when you need to run on a laptop, a phone, or an on-prem CPU box.
Codestral Mamba ditches transformers for a state-space model. The result: linear-time long-context coding at a fraction of the attention cost.
DeepSeek V3.5 is the open-weights model that keeps punching above its weight class on coding benchmarks at a fraction of the cost.
Claude.ai and the Anthropic API both run Claude. So why do they cost different amounts? Pull apart the two doors into the same model.
A Space is a bookmarked, collaborative research context. Your sources, your prompts, your team — all persistent.
GitHub Copilot was the first AI coding assistant at scale. Look at what it is great at, where Cursor and Claude Code have passed it, and whether the $10 subscription still makes sense.
Claude Projects are simpler than ChatGPT Projects but work better for teams. Look at what's included, what's missing, and why many people prefer them.
Jasper was a $1B+ company before ChatGPT existed. Look at whether marketing teams still pay $49+/month when Claude does most of what Jasper does for $20.
LangGraph became the production favorite in 2026 for good reasons — explicit state, checkpointing, first-class MCP. Build a real agent end-to-end and learn why.
Computer Use lets Claude see your screen and use it — mouse, keyboard, apps. The capability is real, the gotchas are real. A hands-on look at what works in 2026.
A prototype agent and a production agent have the same LLM. What's different is everything around it — durable state, retries, idempotency, observability. The real engineering.
An agent is a new attack surface. Prompt injection, privilege escalation, data exfiltration — these are no longer theoretical. Learn the attacks and the defenses.
Everything comes together. Design, code, test, secure, and ship a production-quality agent with open-source code you can fork today.
The AI coding tool market fragmented fast. Let's map the 2026 landscape honestly: who is for autocomplete, who is for agents, who wins on cost, and what the tradeoffs actually feel like.
Autocomplete is a suggestion. An agent is an actor. The mental model you bring to each is different, and conflating them is the number-one reason teams trip over AI coding.
Frontier models now read a million tokens of your codebase in one shot. That changes how we architect prompts, retrieval, and the cost curve of agentic work.
Agents ship working code that's also quietly insecure. Red-teaming means actively attacking your own code. Let's build the habits that catch real-world exploits before attackers do.