Closed = OpenAI/Anthropic/Google. Open = Meta/Mistral/DeepSeek. The split shaping 2026 — and your future.
AI models split into two camps: closed (OpenAI's GPT, Anthropic's Claude, Google's Gemini), which you can only access via an API, and open-source (Meta's Llama, Mistral, DeepSeek, Qwen), whose model weights you can download and run yourself. Open models usually trail closed ones a step in raw quality, but they're free, private, and run on your own hardware (an M-series Mac with 16GB+ RAM handles Llama 3 8B). The split has huge implications for control, cost, censorship, and geopolitics. DeepSeek's January 2025 release rocked the industry by showing that open models can match closed ones at a fraction of the training cost.
Download Ollama (free, ollama.com) and run Llama 3 8B on your laptop tonight. No account, no internet needed, fully private. Ask it anything. You just ran AI you OWN.
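Once Ollama is installed from ollama.com, the whole thing is two terminal commands (the `llama3:8b` model tag is the one current when this lesson was written; check Ollama's model library for the latest):

```shell
# One-time download of the Llama 3 8B weights (roughly a 4-5 GB pull)
ollama pull llama3:8b

# Chat with the model entirely on your own machine -- no account, no cloud
ollama run llama3:8b "Explain open vs. closed AI models in one paragraph."
```

After the first pull, `ollama run` works fully offline: the weights live on your disk and every prompt stays on your laptop.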
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-builders-foundations-ai-open-source-vs-closed-r10a10-teen
Which statement best describes a closed AI model like GPT or Claude?
What company developed the Llama model?
What are 'model weights' in an AI system?
What happened to Nvidia's market value after DeepSeek-V3 was released?
What is a key advantage of running an open-source AI model locally on your own computer?
Which of the following is an example of a closed AI model?
Approximately how much did it cost to train DeepSeek-V3, and how does this compare to reported GPT-4 costs?
What can you do with model weights after downloading an open-source model?
What does it mean for an open-source AI model to be 'uncensored'?
Why might someone choose a closed AI model like GPT-4 over an open-source alternative?
What hardware specification is mentioned for running Llama 3 8B on a personal Mac computer?
What is the primary benefit of running an AI model locally rather than through a cloud API?
Which company created the Claude model?
Which model was described in the lesson as the 'sweet spot' for free local models on consumer laptops in early 2025?
What is required to run Ollama and use a local AI model on your computer?