Every major AI product has a privacy page you've never visited. Here's what to click, toggle, and delete to keep your data yours.
Free-tier AI products are usually allowed to train on your conversations. Paid tiers usually aren't. But 'usually' isn't a spec — it's a setting buried three screens deep. Here's the map to every privacy control on Claude, ChatGPT, Gemini, and the rest.
| Setting | Where to find it | What to pick |
|---|---|---|
| Training on your data | Settings → Privacy/Data controls | OFF if sensitive work |
| Chat history | Settings → Data controls | Keep, but delete periodically |
| Memory / cross-chat memory | ChatGPT: Personalization; Claude: Profile | ON if trusted workflow, OFF for shared accounts |
| Voice recording storage | Advanced Voice settings | Delete on exit |
| Third-party integrations | Connectors / Plugins | Review quarterly, revoke unused |
| Model Context Protocol (MCP) | Settings → Beta/Integrations | Audit allowed servers |
Quarterly AI privacy audit (15 minutes):
[ ] Log into each AI tool I've used in the past 90 days.
[ ] Confirm training-off for each.
[ ] Delete chats older than 30 days that contain sensitive context.
[ ] Review Memory entries; delete anything no longer true.
[ ] Revoke third-party integrations I haven't used.
[ ] Check my email for 'new privacy policy' notices I ignored.
[ ] Rotate any API keys I haven't rotated in 6+ months.
[ ] Spot-check voice transcripts; delete any containing PII.

A once-a-quarter routine keeps the surface area manageable.

If maximum privacy matters, skip the cloud entirely. Tools like Ollama, LM Studio, and Apple's Foundation Models run LLMs on your own machine. Performance is lower than frontier cloud models, but nothing ever leaves your device.
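Two items in the checklist above have firm dates attached (rotate keys after 6+ months, audit every 90 days), which makes them easy to track with a few lines of code. A minimal sketch — the key names and dates here are hypothetical placeholders, not real accounts:

```python
from datetime import date, timedelta

ROTATION_LIMIT = timedelta(days=182)   # ~6 months, per the checklist
AUDIT_INTERVAL = timedelta(days=90)    # quarterly cadence

# Hypothetical inventory: when each API key was last rotated.
api_keys = {
    "openai-personal": date(2024, 1, 10),
    "anthropic-work": date(2024, 11, 2),
}

def keys_to_rotate(keys, today):
    """Return names of keys whose last rotation exceeds the limit."""
    return [name for name, rotated in keys.items()
            if today - rotated > ROTATION_LIMIT]

def next_audit(last_audit):
    """Date the next quarterly audit is due."""
    return last_audit + AUDIT_INTERVAL

today = date(2025, 1, 15)
print(keys_to_rotate(api_keys, today))   # flags the stale key
print(next_audit(date(2024, 12, 1)))     # when to run the checklist again
```

Keep the inventory dict in a private notes file and update the dates each quarter; the script then does the "have I rotated this recently?" arithmetic for you.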
Your settings tell the company how much of you they can keep. Most people have never checked.
— A privacy engineer
The big idea: defaults are not your friends. Spend 15 minutes per quarter clicking through every AI tool's privacy page. Delete what you don't need, opt out of training, and know what can't be taken back.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-tools-privacy-settings-creators
What is the core idea behind "Privacy Settings Across the Big Three"?
Which term best describes a foundational idea in "Privacy Settings Across the Big Three"?
A learner studying Privacy Settings Across the Big Three would need to understand which concept?
Which of these is directly relevant to Privacy Settings Across the Big Three?
Which of the following is a key point about Privacy Settings Across the Big Three?
Which of these does NOT belong in a discussion of Privacy Settings Across the Big Three?
Which statement is accurate regarding Privacy Settings Across the Big Three?
What is the key insight about "Opting out doesn't un-train" in the context of Privacy Settings Across the Big Three?
What is the key insight about "For students: Claude for Education is the gold standard" in the context of Privacy Settings Across the Big Three?
What is the recommended tip about "Evaluate systematically" in the context of Privacy Settings Across the Big Three?
Which statement accurately describes an aspect of Privacy Settings Across the Big Three?
What does working with Privacy Settings Across the Big Three typically involve?
Which of the following is true about Privacy Settings Across the Big Three?
Which best describes the scope of "Privacy Settings Across the Big Three"?