Sharing Datasets on Hugging Face Hub
Hugging Face Hub is the GitHub of AI data and models. Uploading a dataset there makes it instantly accessible to millions of practitioners.
LM Studio: The GUI Alternative to Ollama
Not everyone wants a CLI. LM Studio gives you a desktop app for browsing, downloading, and chatting with local models — and a server mode when you outgrow the GUI.
Text Generation Inference: Production Serving Concepts
Hugging Face Text Generation Inference is a useful teaching example for production model serving: router, model server, streaming, and operational controls.
Licensing Your Own Datasets
If you build a dataset, how you license it determines who can use it and how. Picking the right license matters as much as the data itself.
CSV and Why It Has Ruled for 50 Years
CSV is the plainest, ugliest, most universal data format. It has survived every trend because it does one thing well: it works everywhere.
Data Cards: The Label on Your Dataset
A data card is like a nutrition label for a dataset: who collected it, how, what is in it, and what it should not be used for.
Your Own Ethical Checklist as an AI Builder
If you ship AI, ethics is not abstract. It is a set of decisions you make with real trade-offs. Here is the working checklist serious builders actually use.
SmolLM: Tiny Models That Teach the Limits Clearly
SmolLM-style models are perfect for classroom experiments because students can see speed, limitations, and task fit quickly.
Quantization Choices: FP16, Q8, Q6, Q5, and Q4
Quantization is the art of making models fit local hardware by using fewer bits, while watching how quality changes.
Keeping Current: Newsletters, Feeds, and Lists
AI moves so fast that staying current is its own skill. Here is a sustainable system.
ML Engineer in 2026: You Build the Tools Everyone Else Uses
Fine-tune, evaluate, serve, monitor. The ML engineer is the person who ships the models that now power medicine, law, and design. It is arguably the highest-leverage engineering role.
Building a Real Portfolio in High School Using AI
You don't need an internship to have a portfolio. AI lets you ship real projects from your bedroom.
Granite Code: Local Enterprise Coding Workflows
Granite code models are a useful contrast to Qwen Coder, Codestral, and StarCoder2 because they emphasize enterprise-friendly workflows.
Local Model Family: NVIDIA Nemotron
Nemotron gives students a way to discuss open models built for NVIDIA-accelerated deployment, agents, and enterprise AI stacks.
StarCoder2: Open Code Models for Local Programming Lessons
StarCoder2 gives students an open-science code model family to compare against general chat models and newer coder families.
Quantization Explained: GGUF, AWQ, GPTQ, and the Q4 vs Q8 vs FP16 Decision
A model file's quantization decides how big it is, how fast it runs, and how good its output is. Learn the formats, the trade-offs, and how to pick the right one.
Choosing a Local Model: Llama, Mistral, Hermes, Qwen, DeepSeek, and Friends
There are too many open-weight models. A short, opinionated tour of the major families and what each is actually good at.
Open vs Closed AI Models: What's the Difference?
Why some AI you can download and run yourself, and others you can only rent.
Hugging Face Deep Reinforcement Learning Course
Hugging Face — Developers wanting to train agents with RL
Hugging Face Audio Course
Hugging Face — Developers building speech, ASR, or audio-gen apps
Hugging Face AI Agents Course
Hugging Face — Developers and students building AI agents
Hugging Face NLP Course
Hugging Face — Students learning transformers and modern NLP
Hugging Face Model Context Protocol (MCP) Course
Hugging Face — Developers adding MCP-compatible tools to AI agents
Hugging Face Machine Learning for Games Course
Hugging Face — Game developers embedding ML agents in Unity/Godot
Hugging Face Computer Vision Course
Hugging Face — Developers doing image classification, detection, and generation
Hugging Face Diffusion Models Course
Hugging Face — Creators and engineers training image/video diffusion models
IBM AI Developer Professional Certificate
IBM / Coursera — High school students and beginners wanting to build AI apps
Hugging Face
The GitHub of AI — a hub for open-weights models, datasets, and demos.
Model zoo
A collection of pre-trained models — Hugging Face Hub is the biggest.
TGI
Text Generation Inference — Hugging Face's production LLM serving stack.
Transformers library
Hugging Face's open-source library that makes using and fine-tuning LLMs straightforward.
Diffusers
Hugging Face's library for running and training diffusion models like Stable Diffusion.
lm-eval
EleutherAI's toolkit for running standard LLM benchmarks reproducibly.
Groq
Custom-silicon inference provider competing on tokens-per-second and latency.
Model card
A short document describing what a model does, how it was trained, and its limits.
Dataset card
A short document describing a dataset — what's in it, where it came from, and its limits.
Leaderboard
A public ranking of models on a benchmark.
GAIA
A benchmark for general AI assistants, with multi-step real-world questions.
GPTQ
A post-training quantization method for LLMs based on second-order information.
Llama
Meta's open-weights LLM family, a staple of the open-source AI ecosystem.
DeepSpeed
Microsoft's open-source library for scaling deep learning training and inference.