Creators · Ages 14–17
The full LLM pipeline, agentic AI with OpenClaw + Ollama, subscription-tier literacy, and a real capstone.
Meet your guide: Atlas — a minimal octahedron
Chapters
Modules · 22
Model Context Protocol is the most important open standard in agents. One protocol, 1,200+ servers, and your agent can plug into almost any system. Here's how it actually works.
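To make "one protocol" concrete: MCP messages are JSON-RPC 2.0, and the method names `tools/list` and `tools/call` come from the MCP spec. Everything else below — the tool name `get_weather` and its arguments — is a hypothetical sketch, not a real server.

```python
import json

# Client asks an MCP server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Client invokes one of those tools by name.
# "get_weather" and its arguments are made up for illustration;
# a real server defines its own tool names and input schemas.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Lisbon"},
    },
}

# What actually goes over the wire is just serialized JSON.
wire = json.dumps(call_request)
print(wire)
```

Because every server speaks this same request shape, an agent that can send these two messages can talk to any of the 1,200+ servers.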
Everything comes together. Design, code, test, secure, and ship a production-quality agent with open-source code you can fork today.
Assemble the four or five AI tools that actually belong in your daily life. A tested template for the stack that earns its keep.
Claude Projects, ChatGPT Projects, Notion AI, Perplexity Spaces. How persistent context turns AI from a search box into an actual assistant.
Perplexity Comet is a full web browser that treats AI as a first-class citizen. It reads, summarizes, and acts on pages you visit.
NVIDIA GR00T, Physical Intelligence π0, and Figure Helix took the vision-language-action paradigm from research paper to factory floor. This is the hottest hardware-software frontier.
McKinsey Lilli, Gamma, and Claude generate first-draft slides and research in minutes. The real consulting work — client relationships and implementation — is more human than ever.
v0, Linear AI, and Dovetail synthesize research, draft PRDs, and ship prototypes in hours. The PM role has leveled up from communicator to quasi-builder.
Codex CLI is OpenAI's open-source terminal coding agent. A look at how it compares to Claude Code, what it does uniquely, and why it matters to non-Anthropic shops.
Consensus searches 200M+ academic papers and gives evidence-based answers. Deep look at how researchers use it, what it does differently from Perplexity, and its limits.
Elicit automates the slow parts of academic research: finding papers, extracting data, building literature matrices. A look at how it saves PhD students 20 hours a week.
Sparse autoencoders, features, circuits. How researchers try to see what a model actually thinks, and why it may be the most strategically important safety work.
Could AI help someone build a bioweapon? It's a serious question with a boring, important answer. Here is what the evidence shows, without the scaremongering.
NotebookLM turns a pile of PDFs into a searchable, askable brain. Here is how to build a research notebook that keeps paying dividends.
The norms for disclosing AI use in research are still being written. Here is the emerging consensus and how to stay on the right side of it.
Behind every supervised model is an army of human labelers. Understanding how labeling works is understanding who really builds AI.
Even accurate data can encode an unjust history. The COMPAS recidivism tool shows what happens when AI learns from a biased past.
Small populations get hurt first when datasets are built carelessly. Fixing this requires intentional collection, not just better algorithms.
Ownership of data is not one question but a tangle of rights: copyright, contract, privacy, and control. Untangling them is essential for responsible use.
If you build a dataset, how you license it determines who can use it and how. Picking the right license matters as much as the data itself.
Jupyter is the data scientist's notebook. Code, output, and narrative in one document. Learning Jupyter well pays dividends for every future project.
Creating a dataset from scratch teaches you more than using someone else's. Here is how to build a high-quality small labeled dataset for a real task.
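A minimal sketch of what "building a small labeled dataset" looks like in practice: rows of text paired with labels, written out as CSV, then a sanity check on the label distribution. The example texts and labels are made up.

```python
import csv
import io
from collections import Counter

# Hand-labeled examples for a hypothetical sentiment task.
rows = [
    {"text": "This phone charges fast", "label": "positive"},
    {"text": "Battery died in an hour", "label": "negative"},
    {"text": "Screen is okay, nothing special", "label": "neutral"},
]

# CSV is the simplest interchange format for small datasets.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["text", "label"])
writer.writeheader()
writer.writerows(rows)

# Always inspect the label distribution: class imbalance is the
# most common flaw in hand-built datasets.
counts = Counter(r["label"] for r in rows)
print(counts)
```

The same three steps — collect, label, check the distribution — scale from this toy example to a real capstone dataset.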