New · guided experience
A curated walkthrough of the library — ordered lessons, a 15-question quiz after each one, and 3 next-steps so you stay in flow. Earn XP, badges, and a streak as you go.
Library · 6440 lessons · Safety & Ethics view · disclosure
You are viewing safety & ethics lessons focused on disclosure. Use the tool lanes below to jump sideways into related workflows.
Drill down
Start with a real app or workflow. Each lane filters the library to practical lessons, not just broad theory.
548 lessons in safety & ethics
Lessons handpicked for the disclosure shelf.
AI-generated media has crossed the perceptual threshold where humans cannot reliably detect it. Detection tools help — but are in an arms race with generation.
AI deployment in workplaces raises consent questions that legal minimums don't fully address. Employers who lead on transparency gain trust; those who don't face backlash.
Model cards and transparency reports are how AI providers document what their systems can and can't do. Knowing how to read them — and what's missing — is a core deployer skill.
Publishing AI research or releasing models creates benefits and risks simultaneously. The norms for when to disclose, delay, or withhold are evolving — deployers need a framework.
Fresh disclosure lessons added to the library.
AI audits creator posts for missing or buried sponsorship disclosures before regulators or audiences notice.
AI helps creators draft FTC-compliant paid promotion disclosure that survives a regulator's read.
AI can draft AI political ad disclosure language and on-screen labels, but the legal sufficiency of the disclosure is a campaign counsel question.
AI can draft an AI academic integrity policy, but the enforcement standard and faculty discretion belong to the institution.
Browse everything
Subject tracks
Tap a tile to filter the library — or pick “Surprise me” below for a randomized starter set.
Effective AI red-teaming goes beyond clever prompts. The exercises that surface real risk include socio-technical scenarios, integration-point attacks, and post-deployment misuse patterns.
Some AI failures harm users and warrant public disclosure. Knowing when (and how) to disclose is its own discipline — far beyond the standard breach-notification playbook.
Watermarking AI-generated content is a partial solution to provenance. The current state is messy: standards are emerging, adoption is fragmented, removal is possible.
AI productivity-monitoring tools have exploded. The research shows they often hurt the productivity they're meant to measure — while damaging trust permanently.
Your vendor's AI incident becomes your incident. Knowing your obligations to your own users — disclosure, remediation, credit — matters before the vendor's incident hits.
News organizations using AI for production, personalization, and translation face trust trade-offs. Disclosure and editorial judgment remain primary.
Federal and state laws now require AI disclosure in political advertising. Compliance evolves rapidly — and enforcement is ramping up.
AI mental health tools must meet specific standards for disclosure, crisis handling, and clinical oversight. Vendor selection criteria matter.
Employees have evolving rights around workplace AI — disclosure, consent, opt-out. Compliance is operational necessity.
Customer consent for AI interactions is now legally required in many jurisdictions. Designing for meaningful consent matters.
Customer disclosure of AI involvement is now table stakes. Learn the patterns that respect customers versus those that merely check a legal box.
Public AI incident disclosure builds industry-wide learning. Done well, it shapes practice.
Draft an attribution policy that names AI contributions clearly, without using credit to obscure responsibility.
Stock-photo marketplaces selling AI-generated assets need provenance metadata, model disclosure, and indemnity terms that survive resale.
Newsrooms using AI for synthesis or translation need disclosure standards that maintain reader trust without burying every story in caveats.
AI can draft disclosure language for synthetic media, but organizational thresholds for what triggers a label require human policy judgment.
AI can draft an incident disclosure letter, but the timeline of what was known when must come from your investigation, not the model.
AI can draft a responsible disclosure policy for AI vulnerabilities, but legal safe-harbor terms and bounty scope are leadership decisions.
AI can draft an AI incident disclosure timeline, but who learns what and when belongs to legal counsel and the accountable executive.
AI can draft an AI bug bounty scope and safe-harbor clause, but the legal authorization to test must come from your general counsel.
Some kids want to build chatbots, generate art, code with AI assistance. This is healthy maker energy — and parents can encourage it while building good safety habits from the start.
Many US schools use AI to monitor what students type, search, and post — looking for signs of self-harm, bullying, or weapons.
Schools use AI to detect AI-written essays — but the detection is unreliable, and false positives have hurt real students.
Most teachers don't ban AI — they ban using it the wrong way. Here's how to tell which side you're on.
AI helps with applications. Lying about it is a fast way to get rejected. Honesty is the move.
My AI logs everything you tell it — here's what that means for your privacy.
Gaggle and GoGuardian flag teen searches constantly — and the false alarms have consequences.
TikTok, Instagram, and YouTube Shorts require AI-content labels; skipping one can get your videos removed or demonetized.
Red-teamers get paid to make AI misbehave. The field has grown into a real discipline — with its own methods, its own ethics, and its own unresolved questions.