Ethics of Synthetic Media
Consent, deepfakes, fair use, democratization of creation. The hardest questions in this track don't have clean answers. Let's work through them honestly.
Lesson map
1. Why ethics isn't optional here
2. Deepfakes
3. Consent
4. Fair use
Why ethics isn't optional here
Every ship decision in creative AI has an ethical dimension. Unlike coding or analysis, creative work MAKES images of people, voices that sound like people, and music that imitates artists. The harm surface is large. Being good at this track means holding capability and restraint together.
The consent hierarchy
1. STRONGEST — individual, revocable, written consent for a specific use ('you can use my cloned voice for this audiobook; you will delete it when I withdraw consent').
2. MEDIUM — broad consent for a class of use ('you can use my voice for commercial narration').
3. WEAK — implied consent from posting public content ('my songs are on Spotify, so training is fine' — contested, unreliable).
4. NONE — no consent at all. This is deepfake territory and, increasingly, criminal territory.
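One way to make the hierarchy operational is to encode it in data, so every clone request is checked against a live consent record. A minimal sketch, assuming a hypothetical `ConsentRecord` store — the levels mirror the list above, and revocation always wins:

```python
# Sketch of checking a use request against a stored consent record.
# ConsentRecord and the scope strings are illustrative, not a standard.
from dataclasses import dataclass
from datetime import datetime
from enum import IntEnum

class ConsentLevel(IntEnum):
    NONE = 0          # no consent — deepfake territory
    IMPLIED = 1       # inferred from public posting; contested, unreliable
    BROAD_CLASS = 2   # e.g. "you can use my voice for commercial narration"
    EXPLICIT_USE = 3  # written, specific, revocable

@dataclass
class ConsentRecord:
    subject_id: str
    level: ConsentLevel
    scope: str                       # e.g. "commercial_narration" or "audiobook:project-421"
    revoked_at: datetime | None = None

def may_use(record: ConsentRecord, requested_scope: str) -> bool:
    if record.revoked_at is not None:
        return False                 # revocation always wins — consent must be live
    if record.level == ConsentLevel.EXPLICIT_USE:
        return record.scope == requested_scope
    if record.level == ConsentLevel.BROAD_CLASS:
        return requested_scope.startswith(record.scope)
    return False                     # IMPLIED or NONE: fail closed for cloning
```

Under this scheme a broad `"commercial_narration"` record permits `"commercial_narration:project-421"` but nothing outside that class, and anything below BROAD_CLASS is refused outright.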
Deepfakes — the hard edge
A deepfake is synthetic media depicting a real person doing or saying something they didn't. The harm spectrum runs from obvious parody (generally protected speech) to non-consensual intimate imagery (a federal crime under the TAKE IT DOWN Act of 2025) to election disinformation (a crime under state law in many US states) to fraud (wire fraud, extortion — covered by existing criminal law).
A workable framework for builders
| Use case | Consent needed? | Legal risk (US, 2026) | Ethical posture |
|---|---|---|---|
| Your own face/voice in your own content | Self-consent; easy. | None. | Fine. |
| Historical / deceased public figures, educational | Consider estate; cultural sensitivity. | Low if clearly marked. | Generally fine with disclosure. |
| Public figures, political, satirical | None strictly required; courts generally protect parody. | Varies; election deepfake laws in many states. | Disclose; don't impersonate policy positions falsely. |
| Private individual for parody/commentary | Consent strongly preferred. | Defamation / right-of-publicity risk. | Get consent or don't do it. |
| Private individual for commercial use | Consent required. | Right-of-publicity claims in 46+ states. | Always get written consent. |
| Non-consensual intimate imagery of anyone | Never acceptable. | Federal felony under TAKE IT DOWN (up to 2y; 3y if minor). | Don't. |
| CSAM — including AI-generated | Never. | Federal felony. | Don't. Report if encountered. |
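If you're building the refusal logic for a product, it helps to encode this framework as data rather than scattering the rules through prompt-handling code. A minimal sketch — the category names and verdicts are illustrative, lifted from the rows above:

```python
# Policy lookup derived from the table above. Verdicts: allow, review
# (human in the loop), consent (proof of consent required), block.
POLICY: dict[str, tuple[str, str]] = {
    "self":                   ("allow",   "self-consent"),
    "deceased_public_figure": ("allow",   "require AI disclosure label"),
    "public_figure_satire":   ("allow",   "disclose; no false policy claims"),
    "private_parody":         ("review",  "consent strongly preferred"),
    "private_commercial":     ("consent", "written consent required"),
    "ncii":                   ("block",   "federal felony — report"),
    "csam":                   ("block",   "federal felony — report to NCMEC"),
}

def check_use_case(category: str) -> tuple[str, str]:
    # Fail closed: unknown categories get human review, never auto-allow.
    return POLICY.get(category, ("review", "unclassified use case"))
```

Keeping the rules in one table makes them auditable and easy to update as state and federal law shifts.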
The training data question
Should AI companies pay the artists whose work trained their models? Courts haven't decided definitively as of April 2026: Authors Guild v. OpenAI, Andersen v. Stability AI, and the RIAA-coordinated label suits against Suno and Udio are all still pending. Ethically, the debate splits into two camps.
| 'Training is fair use' view | 'Training requires licensing' view |
|---|---|
| Models learn patterns, don't store copies. | Output can be substantially similar to training data. |
| Fair use is what lets humans read, too. | Scale is categorically different from human learning. |
| Opt-out mechanisms (robots.txt, Glaze) are sufficient. | Opt-in licensing is required; artists shouldn't bear the burden. |
| Without training, no AI progress. | Without compensation, artists lose livelihood. |
Democratization — the other side
AI creative tools have put professional-quality image, video, music, and voice production in the hands of millions who couldn't afford it before. A Kenyan filmmaker on a laptop can now produce work that required a Hollywood budget five years ago. An indie game dev can voice their characters in 30 languages. These wins are real and worth defending — they're not erased by the harms, and the harms aren't erased by them.
Decision questions for any new creative AI feature
1. Who could be HARMED by this feature at its worst (not its best)?
2. What's the consent model — and can users easily REVOKE consent?
3. What's the disclosure surface — how do downstream viewers know it's AI?
4. What's the moderation strategy — what do we refuse to generate, and how?
5. How would this feature behave in an adversarial jurisdiction (election, authoritarian use)?
6. What's the takedown path if someone is harmed?
7. Are we offering a ROYALTY or CREDIT path for affected creators?
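These questions can live in a launch-review document, but they can also be enforced mechanically. A minimal sketch of encoding them as a machine-checkable review record — the field names are illustrative, not a standard:

```python
# Launch-review record mirroring the seven questions above.
from dataclasses import dataclass, fields

@dataclass
class FeatureReview:
    worst_case_harms: str              # Q1: documented worst-case analysis
    consent_model: str                 # Q2: e.g. "explicit_revocable"
    consent_revocable: bool            # Q2: can users revoke?
    disclosure_surface: str            # Q3: how viewers learn it's AI
    refusal_policy: str                # Q4: what we refuse to generate
    adversarial_review_done: bool      # Q5: election / authoritarian abuse
    takedown_path: str                 # Q6: documented takedown + appeal flow
    creator_compensation: str | None   # Q7: royalty/credit path, if any

def launch_blockers(review: FeatureReview) -> list[str]:
    """Everything that must be resolved before the feature ships."""
    blockers = [
        f.name for f in fields(review)
        if getattr(review, f.name) in ("", None)
        and f.name != "creator_compensation"   # Q7 may legitimately be N/A
    ]
    if not review.consent_revocable:
        blockers.append("consent must be revocable")
    if not review.adversarial_review_done:
        blockers.append("adversarial-jurisdiction review missing")
    return blockers
```

Gating release on `launch_blockers(review) == []` turns the checklist from a document nobody reads into a shipping requirement.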
Technical harm-reduction tools
- Face detection + celebrity match-block — refuse to generate named public figures.
- NSFW classifier at output — block, blur, or flag.
- CSAM detection (PhotoDNA / PDQ hash matching) — auto-report to NCMEC per federal law.
- C2PA + watermark on output.
- Audit logs with user attribution for harmful output investigations.
- Rate limiting + cooling-off for suspicious prompt patterns.
- Clear takedown + appeal UX.
Layered safety gates — pre-generation, post-generation, and CSAM reporting.
```python
# Multi-stage safety gate for a creative-AI product.
# Every generation passes through these gates before the user sees it.
# The matcher/classifier/reporting helpers (matches_celebrity_list,
# nsfw_classifier, generate, sign_c2pa, ...) are product-specific stubs.
from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    allow: bool
    reasons: list[str]
    mitigation: str | None = None


def prompt_gate(prompt: str, user_id: str) -> SafetyVerdict:
    """Pre-generation gate: cheap text checks before any compute is spent."""
    reasons = []
    if matches_celebrity_list(prompt):
        reasons.append("named_public_figure")
    if matches_nsfw_keywords(prompt):
        reasons.append("nsfw_intent")
    if matches_csam_indicators(prompt):
        report_to_ncmec(user_id, prompt)  # mandatory report, then hard block
        return SafetyVerdict(False, ["csam_blocked"], mitigation="reported")
    return SafetyVerdict(len(reasons) == 0, reasons)


def output_gate(image_bytes: bytes, prompt: str, user_id: str) -> SafetyVerdict:
    """Post-generation gate: classify what the model actually produced."""
    reasons = []
    if nsfw_classifier(image_bytes).score > 0.85:
        reasons.append("nsfw_output")
    if csam_hash_match(image_bytes):  # PhotoDNA / PDQ perceptual-hash lookup
        report_to_ncmec(user_id, prompt, image=image_bytes)
        return SafetyVerdict(False, ["csam_match"], mitigation="reported")
    if face_detect(image_bytes) and matches_known_person(image_bytes):
        reasons.append("recognized_person_no_consent")
    return SafetyVerdict(len(reasons) == 0, reasons)


def generate_safely(prompt: str, user_id: str) -> dict:
    pre = prompt_gate(prompt, user_id)
    if not pre.allow:
        return {"error": "blocked", "reasons": pre.reasons}
    image = generate(prompt)
    post = output_gate(image, prompt, user_id)
    if not post.allow:
        log_incident(user_id, prompt, post.reasons)
        return {"error": "blocked", "reasons": post.reasons}
    # Provenance-sign the output (C2PA) so downstream viewers can verify it.
    return {"image": sign_c2pa(image, prompt)}
```