Keeping Secrets Out of Prompts and Logs
Treat prompts and traces as places secrets leak by default.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. The premise
2. Secrets
3. Redaction
4. Data handling
Section 1
The premise
Prompts get logged, cached, and shared. Anything secret in a prompt should be assumed to leave your system eventually.
What AI does well here
- Reference resources via opaque IDs the model can pass back.
- Run user input through a redactor before sending to a model.
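The two moves above can be sketched together. This is a minimal illustration, not a library API: the regex patterns, placeholder strings, and the `opaque_ref`/`resolve` helpers are all hypothetical names, and a real redactor needs far more patterns plus a persistent, access-controlled store.

```python
import re
import uuid

# Illustrative patterns only — real secrets take many more forms
# (cloud credentials, JWTs, private keys, session cookies, ...).
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def redact(text: str) -> str:
    """Scrub likely secrets from user input before it reaches a prompt or log."""
    for pattern, placeholder in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Opaque IDs: the model only ever sees a token it can pass back; your code
# resolves it server-side, so the secret itself never enters a prompt.
_resources: dict[str, str] = {}

def opaque_ref(secret_value: str) -> str:
    token = "res_" + uuid.uuid4().hex[:8]
    _resources[token] = secret_value
    return token

def resolve(token: str) -> str:
    return _resources[token]

user_input = "Use my key sk-abc123def456ghi789jkl0 to fetch the report"
print(redact(user_input))  # the key is replaced by [REDACTED_API_KEY]

token = opaque_ref("postgres://admin:hunter2@db.internal/prod")
# `token` is safe to place in a prompt; resolve(token) stays server-side.
```

Note the division of labor: redaction handles secrets you cannot avoid receiving, while opaque references keep secrets you control out of the prompt in the first place.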
What AI cannot do
- Be trusted to keep a secret it has seen.
- Detect secrets in user-supplied content reliably.
