
Why AI breaks traditional indemnification (and what founders need to know)

January 27, 2026
Marc Hoag

This is educational material; it does not constitute legal advice, nor does it create an attorney-client relationship. If you have legal questions, contact and engage an attorney.


Here’s a fundamental problem that most AI startup founders don’t realize until they’re deep into an enterprise contract negotiation: the entire legal architecture of indemnification was built for a world where you could control your product’s outputs.

AI breaks this assumption.

When a traditional SaaS company indemnifies a customer against, say, patent infringement claims, it can actually prevent infringement by controlling exactly what its software does. The code is deterministic. If version 2.3.1 doesn't infringe, it won't suddenly start infringing tomorrow.

But an AI system? It might generate infringing content on Monday that it didn’t generate on Sunday, using the exact same model weights. You can’t “fix” this with a patch. The non-deterministic nature of generative AI means you’re being asked to indemnify against outcomes you cannot fully control or predict. This is a fundamentally different risk profile, and most MSAs haven’t caught up.
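To make that contrast concrete, here's a toy sketch in plain Python (no real model or vendor API; every name and probability here is invented for illustration) of why identical weights can produce different outputs on different runs:

```python
import random

# Toy stand-in for a generative model: fixed "weights" (a probability
# distribution over completions) combined with sampling at generation time.
COMPLETION_PROBS = {
    "an original turn of phrase": 0.7,
    "a phrase substantially similar to protected work": 0.3,
}

def generate() -> str:
    completions = list(COMPLETION_PROBS)
    weights = list(COMPLETION_PROBS.values())
    # Sampling, not a deterministic lookup: identical inputs, varying outputs.
    return random.choices(completions, weights=weights, k=1)[0]

# Same "model weights", same prompt, Sunday vs. Monday:
print(generate())  # might be the safe completion
print(generate())  # might be the risky one -- and no code or version changed
```

A deterministic system would return the highest-probability completion every time; production generative systems generally sample instead, which is precisely why a "patch" can't make the risky output impossible.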

The Three Pillars: Defend, Indemnify, Hold Harmless

Before diving deeper into AI-specific issues, let’s clarify what these terms actually mean, because even sophisticated founders conflate them:

Defend means you’re paying for the lawyers and managing the litigation. This is an active, immediate obligation. The moment a claim lands, you’re on the hook for legal fees, win or lose. This is often the most expensive component.

Indemnify means you reimburse for losses and damages after the fact. This kicks in once a judgment is rendered or a settlement is reached. You’re covering the bill, but you weren’t necessarily driving the defense.

Hold Harmless is the broadest obligation, and means you’re preventing the indemnified party from suffering any loss whatsoever. This can include things like reputational harm, business disruption, and other consequential damages that might not be covered under pure indemnification.

The distinction matters because many contracts use these terms together (“shall defend, indemnify, and hold harmless…”) when the parties haven’t actually thought through what each entails. When something goes wrong, that ambiguity becomes very expensive very quickly.

Why Symmetry Is Usually Right (But Not Always)

In a typical SaaS MSA, indemnification should generally be mutual and symmetrical. You indemnify the customer for claims arising from your product; they indemnify you for claims arising from their data or misuse of your service. This makes intuitive sense because each party is best positioned to control and insure against their own risks.

But here’s where it gets nuanced: certain asymmetries are not only acceptable but necessary.

Carve-outs that make sense:

  • Material unlawful acts: You shouldn’t indemnify someone for their own illegal conduct. If your customer uploads stolen data to your AI tool and gets sued, that’s on them.
  • Gross negligence or willful misconduct: Standard carve-out. If one party acts recklessly or intentionally causes harm, they shouldn’t be shielded by the other party’s indemnification.
  • Modifications or combinations: If the customer modifies your software or combines it with something that creates the infringement, that’s outside your control and shouldn’t be your liability.
  • Continued use after notice: If you notify the customer of a potential infringement issue and they keep using the product anyway, the indemnification should cease.

Asymmetries to watch for:

Enterprise customers often hand startups contracts with wildly one-sided indemnification. Red flags include:

  • Indemnification obligations carved out of your liability cap (meaning your exposure is unlimited)
  • “Arising from” or “relating to” language that’s so broad it captures things you can’t control
  • No reciprocal indemnification from them for their data or misuse
  • Defense obligations triggered by mere allegations rather than actual claims

The AI-Specific Problem: What Exactly Are You Warranting?

This is where it gets genuinely difficult for companies leveraging generative AI services.

Traditional IP indemnification works because the human is fully in the loop: you can warrant that your software doesn't infringe third-party IP. You wrote the code. You know what's in it. You can control it.

With generative AI, the calculus changes because the human is, by definition, no longer in the loop end-to-end. Consider:

Output IP risk: Your AI might generate text that infringes a third party’s copyright. Not because your model was trained on that text (though that’s a separate issue), but because the model probabilistically produced something substantially similar to protected work. You didn’t intend this. You couldn’t predict it. But under a broad indemnification clause, you might be on the hook.

Hallucinated defamation: Your AI generates false statements about a real person or company. The customer’s end-user sees this output, relies on it, shares it. Someone sues for defamation. Who’s liable? The answer depends heavily on how your indemnification clause is drafted.

Data privacy exposure: Your AI processes customer data in ways that arguably violate GDPR or other privacy laws—perhaps through unexpected data retention in model weights, or outputs that reveal PII from training data. The customer gets hit with a regulatory fine. Are you indemnifying that?

The honest answer is that nobody knows how courts (not to mention insurance providers) will ultimately allocate these risks. The case law is embryonic. Which means your MSA is doing heavy lifting it probably wasn't designed to bear.

But here’s the uncomfortable part: none of these risks are actually unknown anymore, three years into this post-ChatGPT world.

Output IP risk, hallucinations, and data privacy leakage aren't speculative edge cases. They're thoroughly documented, widely discussed, and entirely foreseeable. Every AI company knows this. Every sophisticated customer knows this. The courts will certainly know this.

Which changes the legal calculus considerably. You can’t later argue you didn’t appreciate the risk you were assuming. Foreseeability cuts against you. If you signed a broad indemnification clause covering AI outputs, you did so with eyes open. That’s assumption of risk, not surprise.

What is genuinely unsettled is how courts will ultimately allocate liability: whether Section 230 protections apply to AI outputs, how copyright's substantial-similarity analysis works for probabilistic generation, and whether GDPR fines can be contractually shifted. The minefield is clearly mapped. We just don't know which mines are live.

This is why precise contract drafting matters so much right now. You’re not protecting yourself against the unforeseeable. You’re negotiating the allocation of known risks before the case law catches up.

Practical Implications for Founders

If you’re an AI startup founder, here’s what this means for your contract negotiations:

1. Read the indemnification clause. Actually read it. I know this sounds obvious, but most founders skim contracts looking for the price and payment terms. The indemnification section is often where the real risk lives. If you’re signing enterprise paper (their contract, not yours), this is where they’ve hidden the landmines.

2. Push back on “arising from” or “relating to” language. These phrases are dangerously broad. “Arising from” can capture claims that have only a tangential connection to your product. “Relating to” is even worse. Negotiate for “directly resulting from” or “caused by” language that creates a tighter causal nexus.

3. Insist on liability caps that apply to indemnification. Many enterprise contracts carve indemnification out of the overall liability cap. This means your maximum exposure isn’t 12 months of fees — it’s potentially unlimited. For an AI company facing unpredictable output liability, this is existential risk.

4. Add AI-specific carve-outs. Consider language that limits your indemnification obligation for claims arising from: (a) outputs generated by the AI that you couldn't reasonably prevent or predict, (b) customer's failure to implement recommended guardrails or content filters, or (c) customer's use of outputs without human review in contexts where review was warranted. (A minimal sketch of what such a guardrail might look like follows this list.)

5. Understand your insurance coverage. Most E&O and cyber policies weren’t written with AI output liability in mind. Before you sign a broad indemnification clause, confirm with your broker that you actually have coverage for the risks you’re assuming. You might be surprised.

6. Consider warranty disclaimers carefully. Traditional SaaS companies warrant that their software will perform as documented. Can you warrant that about AI outputs? Probably not. Consider express disclaimers that AI outputs are probabilistic, may contain errors, and should be verified by humans before reliance.
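On point 4 above: here's a hypothetical sketch of the kind of "recommended guardrail or content filter" such a carve-out might reference. Every function name, signature, and threshold is invented for illustration; a real deployment would use a proper moderation pipeline:

```python
# Hypothetical guardrail gate; all names and the threshold are illustrative.
RISK_THRESHOLD = 0.8  # assumed cutoff above which output counts as high-risk

def release_output(text: str, risk_score: float, human_reviewed: bool) -> str:
    """Release AI output only if it is low-risk or a human has reviewed it."""
    if risk_score >= RISK_THRESHOLD and not human_reviewed:
        raise RuntimeError("High-risk output withheld pending human review")
    return text

# A customer who disables or bypasses a gate like this is arguably within
# carve-outs (b) and (c) above -- outside your indemnification obligation.
summary = release_output("Draft summary...", risk_score=0.2, human_reviewed=False)
print(summary)
```

If the contract points to specific guardrails like this, whether the customer actually implemented them becomes a provable fact rather than a drafting argument.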

The Elephant in the Room: You’re Indemnifying Against the Unknown

Here’s the uncomfortable truth: when you sign a broad indemnification clause for AI outputs, you’re agreeing to cover risks that neither party fully understands. The legal landscape is evolving. Courts haven’t weighed in on most of the novel issues AI presents. Regulatory frameworks like the EU AI Act are still being implemented.

This doesn’t mean you shouldn’t sign enterprise contracts. It means you should be clear-eyed about what you’re agreeing to, negotiate where you can, and price the risk appropriately where you can’t.

The founders who get this right will be the ones who treat indemnification not as boilerplate to skim past, but as a core business risk that deserves the same attention as product-market fit or unit economics.


For further questions on AI contract negotiation, MSAs, or indemnification structuring, feel free to reach out.
