Creator of instructor; consultant on LLM apps
Representative of Jason's consulting threads, which show teams how defining a Pydantic schema for LLM output turns vague "AI features" into measurable, testable systems.
“If you can't write the Pydantic model, you don't understand the feature yet.”
How to replicate
1. Write the Pydantic model for what you want out of the LLM: fields, types, constraints.
2. Pass the schema to the model via instructor or tool-calling (a minimal sketch follows).
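A minimal sketch of both steps, using the instructor library over OpenAI's Python client. The `Invoice` model, its fields, and the model name are illustrative assumptions, not taken from Jason's threads:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field

# Step 1: the schema is the feature spec -- fields, types, constraints.
class Invoice(BaseModel):
    vendor: str = Field(description="Name of the vendor issuing the invoice")
    total: float | None = Field(
        default=None, description="Total amount due; null if not stated"
    )
    confidence: float = Field(
        ge=0.0, le=1.0, description="Extraction confidence, 0 to 1"
    )

# Step 2: instructor patches the client so the schema is enforced
# via tool-calling, and the response comes back as a validated Invoice.
client = instructor.from_openai(OpenAI())

invoice = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any tool-calling-capable model works
    response_model=Invoice,
    messages=[{"role": "user", "content": "Extract: Invoice from Acme, $310 due."}],
)
print(invoice.model_dump())
```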
Prompt template
You are extracting structured data from <source>. Return an object matching this schema:

<paste Pydantic fields with descriptions>

Rules:
- If a field isn't present, set it to null; don't guess.
- If the input is ambiguous, populate the `confidence` field with a float 0–1.
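One way to fill the schema slot in that template, sketched with Pydantic v2's `model_json_schema()`. The `Invoice` model is the same illustrative example as above, and the rendered prompt still gets paired with constrained decoding (see the pitfall below):

```python
import json
from pydantic import BaseModel, Field

class Invoice(BaseModel):
    vendor: str | None = Field(default=None, description="Vendor name; null if absent")
    confidence: float = Field(ge=0.0, le=1.0, description="Extraction confidence, 0 to 1")

TEMPLATE = """You are extracting structured data from {source}.
Return an object matching this schema:
{schema}
Rules: if a field isn't present, set it to null; don't guess.
If the input is ambiguous, populate the `confidence` field with a float 0-1."""

# model_json_schema() carries the Field descriptions into the prompt,
# so the prose spec and the validation schema can't drift apart.
prompt = TEMPLATE.format(
    source="a vendor email",
    schema=json.dumps(Invoice.model_json_schema(), indent=2),
)
print(prompt)
```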
Pitfall
Putting the schema in the prompt as free text. Use native tool-calling or instructor so the model is constrained; with prompting alone, invalid JSON eventually slips through.
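For contrast, a sketch of the constrained path using the OpenAI SDK's native structured outputs, which enforce the schema at decode time rather than hoping the prompt holds. Model name and fields are assumptions:

```python
from openai import OpenAI
from pydantic import BaseModel

class Invoice(BaseModel):
    vendor: str
    total: float | None

client = OpenAI()

# response_format=Invoice makes the API constrain generation to the
# schema, so parsing can't fail the way free-text JSON prompting can.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",  # assumption: a structured-outputs-capable model
    messages=[{"role": "user", "content": "Extract: Invoice from Acme, $310 due."}],
    response_format=Invoice,
)
invoice = completion.choices[0].message.parsed
```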
What you'll learn
- Why schemas make LLM features testable
- How to evaluate extraction features without human review every time (see the eval sketch after this list)
- When to use tool-calling vs. free-form output
- How to surface model uncertainty instead of hiding it
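To make the first two bullets concrete, a hypothetical eval: golden inputs asserted against the schema's fields with pytest, so regressions surface mechanically instead of via human review per run. The model, wrapper, and golden pair are all illustrative:

```python
import instructor
import pytest
from openai import OpenAI
from pydantic import BaseModel, Field

class Invoice(BaseModel):
    vendor: str | None = Field(default=None, description="Vendor name; null if absent")
    total: float | None = Field(default=None, description="Total due; null if absent")

client = instructor.from_openai(OpenAI())

def extract_invoice(text: str) -> Invoice:
    # The schema constrains the output, so the return value is always
    # a validated Invoice -- assertable fields, not raw JSON to eyeball.
    return client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any tool-calling-capable model
        response_model=Invoice,
        messages=[{"role": "user", "content": f"Extract the invoice fields: {text}"}],
    )

# Golden examples checked mechanically on every run.
GOLDEN = [
    ("Invoice #42 from Acme Corp, total due $310.00", "Acme Corp", 310.0),
]

@pytest.mark.parametrize("text,vendor,total", GOLDEN)
def test_invoice_extraction(text, vendor, total):
    invoice = extract_invoice(text)
    assert invoice.vendor == vendor
    assert invoice.total == pytest.approx(total)
```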
