Lesson 72 of 2116
Privacy Settings Across the Big Three
Every major AI product has a privacy page you've never visited. Here's what to click, toggle, and delete to keep your data yours.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. Your chats are data. Do the settings reflect that?
2. Privacy
3. Data training
4. Claude privacy
Section 1
Your chats are data. Do the settings reflect that?
Free-tier AI products are usually allowed to train on your conversations. Paid tiers usually aren't. But 'usually' isn't a spec — it's a setting buried three screens deep. Here's the map to every privacy control on Claude, ChatGPT, Gemini, and the rest.
The privacy settings that actually matter
Compare the options
| Setting | Where to find it | What to pick |
|---|---|---|
| Training on your data | Settings → Privacy/Data controls | OFF if sensitive work |
| Chat history | Settings → Data controls | Keep, but delete periodically |
| Memory / cross-chat memory | ChatGPT: Personalization; Claude: Profile | ON if trusted workflow, OFF for shared accounts |
| Voice recording storage | Advanced Voice settings | Delete on exit |
| Third-party integrations | Connectors / Plugins | Review quarterly, revoke unused |
| Model Context Protocol (MCP) | Settings → Beta/Integrations | Audit allowed servers |
Claude: privacy settings walkthrough
- Claude.ai → Profile icon → Settings → Privacy.
- 'Help improve Claude' — toggle OFF to opt out of training. Paid tiers (Pro/Max/Team) are already opted out.
- 'Delete all conversations' — available in the same menu.
- Claude for Education: already excluded from training by institution agreement.
- API data: by policy, Anthropic does not train on API inputs/outputs.
ChatGPT: privacy settings walkthrough
- ChatGPT → Profile → Settings → Data Controls.
- 'Improve the model for everyone' — toggle OFF to opt out.
- 'Memory' — ON lets ChatGPT remember facts across chats. Review memories at any time.
- 'Temporary chat' (top-right switch in a chat) — doesn't save to history, isn't trained on.
- Enterprise/Team: SOC 2 Type 2, not trained on by policy.
Gemini: privacy settings walkthrough
- gemini.google.com → Profile → Activity.
- 'Gemini Apps Activity' — OFF to stop Google from retaining your chats.
- Even with activity OFF, Google retains conversations for up to 72 hours for safety and abuse review.
- Workspace users: admin policy may override personal settings.
- Google AI Pro/Ultra: per-tenant isolation available for enterprise.
The other tools
- Perplexity: Settings → 'AI Data Retention' toggle; Pro users can turn retention off by default.
- Grok (on X): privacy controls tied to your X account; disable training under Settings → Grok.
- Copilot (Microsoft 365): admin-controlled; personal Copilot has a data controls page.
- Notion AI: admin-controlled at workspace level; not trained on your data in Business/Enterprise.
- GitHub Copilot: Settings → Copilot; 'Allow GitHub to use my code snippets' toggle.
What you should never paste into any AI, even with opt-outs on
- Passwords, API keys, SSH private keys.
- Social Security numbers, bank account numbers.
- Health diagnoses tied to your name (HIPAA concerns).
- Other people's private info without their consent.
- Any material under an NDA or trade secret.
- Student records covered by FERPA.
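If you want a mechanical backstop for the list above, a small pre-paste screen can catch the most obvious offenders before text leaves your clipboard. This is a minimal sketch, not a guarantee: the patterns below are illustrative examples I've chosen (SSH key headers, SSN-shaped numbers, key-looking tokens, password assignments), and a clean result does not mean the text is safe to paste.

```python
import re

# Best-effort patterns for a few of the "never paste" items above.
# These are illustrative, not exhaustive -- no match does NOT mean safe.
PATTERNS = {
    "ssh_private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_like": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9_-]{16,}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all patterns that match anywhere in text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

flag_sensitive("my SSN is 123-45-6789")  # → ["ssn"]
```

Run it over anything you're about to paste; if it returns a non-empty list, stop. The real value is the habit, not the regexes.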
A privacy audit checklist
A once-a-quarter routine that keeps the surface area manageable.
Quarterly AI privacy audit (15 minutes):
[ ] Log into each AI tool I've used in the past 90 days.
[ ] Confirm training-off for each.
[ ] Delete chats older than 30 days that contain sensitive context.
[ ] Review Memory entries; delete anything no longer true.
[ ] Revoke third-party integrations I haven't used.
[ ] Check my email for 'new privacy policy' notices I ignored.
[ ] Rotate any API keys I haven't rotated in 6+ months.
[ ] Spot-check voice transcripts; delete any containing PII.
Local models — the privacy extreme
If maximum privacy matters, skip the cloud entirely. Tools like Ollama, LM Studio, and Apple's Foundation Models run LLMs on your own machine. Performance is lower than frontier cloud models, but nothing ever leaves your device.
“Your settings tell the company how much of you they can keep. Most people have never checked.”
The big idea: defaults are not your friends. Spend 15 minutes per quarter clicking through every AI tool's privacy page. Delete what you don't need, opt out of training, and know what can't be taken back.
End-of-lesson quiz
Check what stuck
15 questions · Score saves to your progress.
Related lessons
Keep going
Creators · 45 min
API Access vs. Consumer Products — A Deeper Look
Going beyond the chat window. When you'd reach for the API, how pricing actually works, and how to start building. The API is where AI becomes a building block; the consumer app is the most polished version of an AI experience.
Creators · 38 min
Building a Personal AI Stack for School and Career
Assemble the four or five AI tools that actually belong in your daily life. A tested template for the stack that earns its keep.
Creators · 40 min
Claude Code: Anthropic's Terminal-Native Coding Agent
Claude Code runs in your terminal, operates on your actual file system, and treats your whole repo as context. Deep look at why senior engineers prefer it to IDE-based AI.
