Lesson 640 of 2116
Latency Benchmarks: TTFT, Tokens per Second, and User Feel
A local model that is technically capable can still feel bad if time-to-first-token or generation speed is too slow.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The operational idea: latency benchmarking
2. latency
3. TTFT (time-to-first-token)
4. tokens per second
Concept cluster
Terms to connect while reading
Section 1
The operational idea: latency benchmarking
A local model that is technically capable can still feel bad in use if time-to-first-token (TTFT) or generation speed (tokens per second) is too slow. In local AI, the model family is only one part of the system: the runtime, file format, serving path, hardware budget, evaluation set, and safety policy together decide whether the model becomes useful.
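Concretely, TTFT is the wait before the first token arrives and tokens per second is the decode rate after that. A minimal sketch of how to measure both from any streaming generator (the `fake_stream` stand-in is an assumption for the demo, not a real runtime API):

```python
import time
from typing import Iterator

def measure_latency(stream: Iterator[str]) -> dict:
    """Consume a token stream and record TTFT plus decode-phase tokens/s."""
    start = time.perf_counter()
    first = None
    count = 0
    for _token in stream:
        count += 1
        if first is None:
            first = time.perf_counter()  # time-to-first-token boundary
    end = time.perf_counter()
    ttft = (first - start) if first is not None else float("nan")
    decode_time = (end - first) if first is not None else 0.0
    tps = count / decode_time if decode_time > 0 else float("inf")
    return {"ttft_s": ttft, "tokens": count,
            "tokens_per_second": tps, "total_s": end - start}

def fake_stream(n_tokens, ttft, per_token):
    """Stand-in for a real runtime's streaming API (assumption for the demo)."""
    time.sleep(ttft)
    for i in range(n_tokens):
        yield f"tok{i}"
        time.sleep(per_token)

report = measure_latency(fake_stream(20, ttft=0.05, per_token=0.005))
print(f"TTFT {report['ttft_s'] * 1000:.0f} ms, "
      f"{report['tokens_per_second']:.0f} tok/s over {report['tokens']} tokens")
```

Swapping `fake_stream` for a real streaming call keeps the measurement code unchanged, which is the point: benchmark the stream the user actually sees, not the runtime's self-reported numbers.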
Compare the options
| Layer | What to decide | What can go wrong |
|---|---|---|
| Runtime | Latency benchmarking across realistic prompt lengths | Reporting only tokens per second while ignoring time-to-first-token, prompt length, streaming, and perceived responsiveness |
| Evaluation | A small task-specific test set | A flashy demo hides routine failures |
| Safety and ops | Permissions, provenance, logging, and rollback | The model runs, but a bad update cannot be traced or rolled back |
Build the small version
Benchmark three local models with short, medium, and long prompts, then translate the numbers into user experience notes.
1. Define the user task in one sentence.
2. Choose the smallest model and runtime that might pass that task.
3. Run one happy-path prompt and one failure-path prompt.
4. Record speed, memory pressure, output quality, and the exact reason for any failure.
5. Write the operating rule you would give a non-expert user.
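The steps above can be sketched as a tiny benchmark matrix. Here `generate` is a placeholder you supply for your own runner, and the 0.5 s TTFT and 20 tok/s thresholds are illustrative cutoffs, not standards:

```python
import statistics

def benchmark(generate, models, prompts, runs=3):
    """generate(model, prompt) must return (ttft_seconds, tokens_per_second)."""
    rows = []
    for model in models:
        for name, prompt in prompts.items():
            samples = [generate(model, prompt) for _ in range(runs)]
            ttft = statistics.median(s[0] for s in samples)
            tps = statistics.median(s[1] for s in samples)
            # Translate raw numbers into a user-experience note.
            feel = "ok_for_chat" if ttft < 0.5 and tps >= 20 else "draft_only"
            rows.append({"model": model, "prompt": name,
                         "ttft_s": ttft, "tok_s": tps, "user_feel": feel})
    return rows

# Demo with canned numbers standing in for real model runs.
fake_runner = lambda m, p: (0.3, 40) if len(p) < 100 else (2.1, 35)
rows = benchmark(fake_runner, ["7b-model"],
                 {"short": "hi", "long": "x" * 8000})
for r in rows:
    print(r["model"], r["prompt"], r["user_feel"])
```

Running each prompt several times and taking the median guards against one-off stalls (cold caches, background load) polluting the numbers.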
A local-model operations sketch students can adapt:

```yaml
latency_report:
  prompt_length_tokens: 2000
  time_to_first_token_ms: 850
  tokens_per_second: 34
  total_response_time_s: 9.8
  user_feel: acceptable_for_draft, too_slow_for_chat
  note: measure_more_than_one_prompt
```
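The numbers in a report like this should be mutually consistent: total time is roughly TTFT plus generated tokens divided by the decode rate. A quick check backs the output length out of the sketch's figures:

```python
# total ≈ TTFT + tokens / tokens_per_second, so
# tokens ≈ (total - TTFT) * tokens_per_second
ttft_s = 0.850
tok_s = 34
total_s = 9.8
generated_tokens = (total_s - ttft_s) * tok_s
print(round(generated_tokens))  # about 304 generated tokens
```

If the three numbers do not reconcile this way, something in the harness (prompt caching, truncation, or a stalled stream) is skewing the measurement.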
The big idea: measure the feel. A local model app is not done when the model answers once; it is done when the whole workflow can be installed, measured, trusted, and recovered.
Related lessons
Reranker Evals: The Second Look at Evidence
A reranker can improve local RAG by reordering candidate chunks, but it adds latency and needs measurement.
Comparing Output Token Throughput Across Models
Tokens per second matters for streaming UX and batch jobs; benchmark instead of trusting datasheets.
ChatGPT Memory: When To Enable, When To Turn It Off
Memory is supposed to make ChatGPT feel personal. It also quietly accumulates context that can pollute later conversations or leak into the wrong workspace.
