
AI Deepfakes, Defamation, and the Limits of Prevention

February 25, 2026
Marc Hoag

This is educational material; it does not constitute legal advice, and no attorney-client relationship is created by this article. If you have legal questions, contact and engage an attorney.


What happens when we accept that we can’t actually prevent deepfakes?

I’ve been thinking about this a lot lately, and I’ve come to a conclusion that I suspect will be controversial, though I think it’s nevertheless logical, if not fair.

The patchwork problem

As of early 2026, the legal landscape around deepfakes is a sprawling patchwork. Nearly every state has enacted some form of deepfake legislation. Federal laws like the TAKE IT DOWN Act and the DEFIANCE Act target nonconsensual intimate deepfakes. California has enacted a suite of laws protecting performers’ digital likenesses and mandating AI provenance disclosures.

These are all important steps. But notice what they have in common: they're almost entirely focused on specific use cases, such as nonconsensual pornography, election manipulation, and commercial exploitation of performers. They're reactive, categorical, and narrow. They address particular symptoms of a much broader underlying problem.

The broader problem is this: somebody can fabricate a video of you doing or saying something you never did or said, present it as real, and destroy your reputation. And no existing legal framework cleanly addresses that fundamental harm across all contexts.

Why defamation is the right framework

Consider what a malicious deepfake actually is. It’s not presented as opinion. It’s not presented as satire. It’s presented as a statement of fact. Here’s this person. That’s her face. Here’s what she’s doing. Here’s what she’s saying. The entire premise depends on the viewer believing it’s real.

That’s a false statement of fact presented to the world as truth. That’s defamation.

The elements map remarkably well. Defamation requires (a) a false statement of fact, (b) published to a third party, (c) with the requisite degree of fault, and (d) causing harm. And defamation per se doesn't require any demonstration of harm at all when the statement falls into certain categories, e.g., imputations of criminal conduct, loathsome disease, serious sexual misconduct, or unfitness in one's trade or profession.

One objection worth addressing: defamation traditionally requires a human publisher making a statement. But the creator of a deepfake is the publisher; they fabricated the depiction and distributed it. The depiction is the statement. Whether it constitutes slander or libel is a secondary question, and the core defamation analysis holds: a malicious deepfake satisfies every element, and the harm (reputational, emotional, financial) is often catastrophic and irreversible.

There’s a feature of deepfakes that actually strengthens the defamation argument, particularly for public figures. Under New York Times Co. v. Sullivan, public figures must prove “actual malice” — that the publisher knew the statement was false or acted with reckless disregard for its truth. With deepfakes, this hurdle essentially vanishes. The creator necessarily knows the content is false, because they fabricated it. Actual malice is baked into the act of creation.

Prevention is a fantasy

Like it or not, we're now living in an AI world. The tools to create convincing deepfakes are essentially free, widely available, and improving at a rate that outpaces every detection technology we've developed, to say nothing of the law's molasses pace in keeping up.

Watermarking requirements, provenance disclosures, and platform takedown mandates are all valuable, but they're speed bumps, not walls. You cannot prevent someone from creating a deepfake of you any more than you can prevent someone from lying about you.

Celebrities face a particular version of this reality. Public figures have, by definition, put themselves in the public eye. If your image and likeness are already pervasive — across the internet, across social media, across every camera and screen — the practical reality is that your digital likeness will be misappropriated by AI tools. That’s not a moral judgment. It’s a recognition of the threat landscape. Just as the ubiquity of smartphones fundamentally changed our expectations around being photographed in public, the ubiquity of generative AI is fundamentally changing our expectations around the integrity of digital media.

To be absolutely clear: I am not saying deepfakes are acceptable. I am not saying celebrities deserve what they get. I am saying that the law’s energy is better spent building a robust system of accountability after the fact than chasing in vain the impossible goal of prevention before the fact. Stop trying to build a wall. Build a better courthouse.

Where existing frameworks fall short

This is why the defamation framing matters so much.

Right of publicity protects against commercial misappropriation of likeness. It’s powerful in the entertainment context. But it’s fundamentally a commercial doctrine. If someone creates a deepfake of you to destroy your reputation rather than to make money, right of publicity doesn’t cleanly apply.

Criminal statutes address specific categories of harm: nonconsensual intimate imagery, election interference, fraud. They're essential, but they cover only limited, discrete categories. A deepfake that puts fabricated political statements in your mouth, or depicts you in a compromising but non-sexual scenario, may fall through the cracks.

Copyright protects the creator of content, not the person depicted. If someone deepfakes you, you don’t own that video — and your own image isn’t copyrightable. That’s a gap worth closing, but it’s a legislative fix, not a remedy available today.

Defamation fills the gap. It addresses the core dignitary harm — the injury to reputation caused by the presentation of fabricated content as fact — regardless of whether the motive is commercial, sexual, political, or purely malicious. And it’s available to everyone, not just celebrities or performers.

The road ahead

Defamation isn’t a silver bullet. Identifying anonymous deepfake creators is extraordinarily difficult. Damages may be modest. And defamation’s speech-protective doctrines will require careful application in the deepfake context, even if the actual malice hurdle is lower than usual.

But here's what defamation offers that no other framework does: a universal, flexible, well-established cause of action that addresses the fundamental harm of deepfakes, the presentation of fabricated content as fact (crucially, not opinion) that causes reputational injury. It doesn't require new legislation. It doesn't require categorical definitions of prohibited conduct. It asks a simple question: Did someone present a fabrication as fact, and did it cause harm?

The law doesn’t need to reinvent itself every time technology evolves. Sometimes, the frameworks we already have are the right ones — they just need to be applied to new facts. Defamation has been doing this work for centuries. It’s time to let it do this work too.


These are exactly the kinds of issues I think about every day at Hoag Law.ai and as Chair of the Beverly Hills Bar Association’s AI and the Law Section, alongside Vice-Chair Amanda Harris. If you’re navigating AI compliance, deepfake risk, or the broader intersection of technology and law, let’s talk.
