Build It: Terminal Quiz Bot Powered by Claude
A CLI quiz app: Claude generates questions on any topic, you answer, it grades. Teaches prompts, loops, and keeping state.
Lesson map
The main moves, in order:
1. What we're building
2. The CLI
3. LLM calls
4. State
Section 1
What we're building
A terminal script that asks for a topic, calls Claude to get 5 multiple-choice questions, asks them one at a time, tracks your score, and shows the final result. Real code, ~80 lines.
Pydantic models define the exact shape we expect from the LLM.
```python
# pyproject.toml dependencies: anthropic, pydantic
from pydantic import BaseModel, Field
from anthropic import Anthropic

class Question(BaseModel):
    prompt: str
    options: list[str] = Field(min_length=4, max_length=4)
    correct_index: int = Field(ge=0, le=3)
    explanation: str

class QuizSet(BaseModel):
    topic: str
    questions: list[Question]
```

One LLM call returns the whole quiz. Pydantic will reject malformed output.
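That rejection is worth seeing once. A minimal standalone sketch: feed the `Question` model a response with too few options and an out-of-range index, and the `ValidationError` names each bad field.

```python
from pydantic import BaseModel, Field, ValidationError

class Question(BaseModel):
    prompt: str
    options: list[str] = Field(min_length=4, max_length=4)
    correct_index: int = Field(ge=0, le=3)
    explanation: str

# Two problems: only two options, and correct_index out of range.
bad = '{"prompt": "2+2?", "options": ["3", "4"], "correct_index": 7, "explanation": "arithmetic"}'
try:
    Question.model_validate_json(bad)
except ValidationError as e:
    bad_fields = {err["loc"][0] for err in e.errors()}
    print(sorted(bad_fields))  # ['correct_index', 'options']
```

An ad-hoc `json.loads` would have accepted this payload without complaint; the failure would surface later, as an `IndexError` mid-quiz.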
```python
client = Anthropic()

def generate_quiz(topic: str, n: int = 5) -> QuizSet:
    prompt = f"""Generate exactly {n} multiple-choice questions about: {topic}
Return ONLY valid JSON, no markdown fence, matching this shape:
{{
  "topic": "...",
  "questions": [
    {{"prompt": "...", "options": ["a","b","c","d"], "correct_index": 0, "explanation": "..."}}
  ]
}}
Target middle-school difficulty. No trick questions."""
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    raw = response.content[0].text.strip()
    # Models sometimes wrap JSON in a Markdown fence anyway; strip it first.
    if raw.startswith("```"):
        raw = raw.strip("`").split("\n", 1)[1].rsplit("\n", 1)[0]
    return QuizSet.model_validate_json(raw)
```

The main loop: ask, validate input, score, report.
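The fence-stripping line is dense. Pulled out as a standalone helper (a hypothetical `strip_fence`, not part of the lesson's script), its behavior is easy to check:

```python
def strip_fence(raw: str) -> str:
    # If the text is wrapped in a Markdown fence, peel it off: drop the
    # backticks, then the first line (the ```json tag) and the last line.
    raw = raw.strip()
    if raw.startswith("```"):
        raw = raw.strip("`").split("\n", 1)[1].rsplit("\n", 1)[0]
    return raw

print(strip_fence('```json\n{"topic": "tides"}\n```'))  # {"topic": "tides"}
print(strip_fence('{"topic": "tides"}'))                # unchanged
```

Unfenced input passes through untouched, so it is safe to run on every response.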
```python
def run_quiz(quiz: QuizSet) -> int:
    score = 0
    for i, q in enumerate(quiz.questions, start=1):
        print(f"\n--- Question {i}/{len(quiz.questions)} ---")
        print(q.prompt)
        for idx, opt in enumerate(q.options):
            print(f"  {idx + 1}. {opt}")
        while True:
            raw = input("Your answer (1-4): ").strip()
            if raw in {"1", "2", "3", "4"}:
                break
            print("Please enter 1, 2, 3, or 4.")
        chosen = int(raw) - 1
        if chosen == q.correct_index:
            print("Correct!")
            score += 1
        else:
            correct = q.options[q.correct_index]
            print(f"Nope — answer was: {correct}")
            print(f"Why: {q.explanation}")
    return score
```
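The validate-then-score pattern inside the loop factors out cleanly. A sketch (the name `ask_choice` is an assumption for illustration, not part of the lesson's script):

```python
def ask_choice(prompt: str, valid: set[str]) -> int:
    """Re-prompt until the reply is one of `valid`; return it as a 0-based index."""
    while True:
        raw = input(prompt).strip()
        if raw in valid:
            return int(raw) - 1
        print(f"Please enter one of: {', '.join(sorted(valid))}.")
```

With this helper, the loop body shrinks to `chosen = ask_choice("Your answer (1-4): ", {"1", "2", "3", "4"})`, and the same helper serves any future menu in the app.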
```python
def main():
    topic = input("Topic? ").strip() or "the solar system"
    print(f"Generating quiz about {topic}...")
    try:
        quiz = generate_quiz(topic)
    except Exception as e:
        print(f"Quiz generation failed: {e}")
        return
    final = run_quiz(quiz)
    print(f"\nFinal score: {final}/{len(quiz.questions)}")

if __name__ == "__main__":
    main()
```

Mini-exercise
1. Run the quiz on three different topics.
2. Add a `--difficulty` flag (easy/medium/hard) and wire it into the prompt.
3. After the quiz, ask Claude to explain one question in simpler terms.
4. Save results to `quiz_history.json`.
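For the last exercise, one possible shape: append each result to a JSON array on disk. The `save_result` helper and its fields are assumptions, not prescribed by the lesson.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_result(topic: str, score: int, total: int,
                path: str = "quiz_history.json") -> None:
    # Read the existing history (or start fresh), append, write back.
    history_file = Path(path)
    history = json.loads(history_file.read_text()) if history_file.exists() else []
    history.append({
        "topic": topic,
        "score": score,
        "total": total,
        "when": datetime.now(timezone.utc).isoformat(),
    })
    history_file.write_text(json.dumps(history, indent=2))
```

Call it from `main()` right after the final score prints, and exercise 1's three runs give you a history file to inspect.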
Compare the options
| Ad-hoc JSON parse | Pydantic schema |
|---|---|
| Crashes on bad output | Raises ValidationError with field path |
| Easy to write | Slightly more setup |
| Good for: throwaway scripts | Good for: anything you run twice |
Big idea: an LLM + a typed schema + a simple loop is a stunningly powerful base for any interactive tool. You've now built the skeleton every AI tutor app shares.