This is educational material, not legal advice, and no attorney-client relationship is created by this article. If you have legal questions, contact and engage an attorney.
If you’re a California attorney using generative AI — and at this point, who isn’t? — you need to know about SB 574. It also seems inevitable that failing to use AI in the practice of law will soon itself be considered ethical malpractice, so what follows is essential knowledge.
Introduced by Senate Judiciary Chair Tom Umberg (D-Santa Ana), SB 574 would create the first U.S. statute specifically governing how lawyers and arbitrators use generative AI.
What SB 574 requires
The bill creates three distinct regulatory tracks: one for attorneys, one for court filings, and one for arbitrators.
Attorneys’ duties when using AI
SB 574 would impose four statutory duties on any attorney using generative AI to practice law:
Protect confidential data. Don’t feed confidential, personally identifying, or other nonpublic information into a public generative AI system. Note the qualifier — “public.” This is the bill drawing a line between consumer-facing tools (think free-tier ChatGPT) and enterprise-grade deployments with contractual data protections. The distinction matters; see below.
Prevent discrimination. Ensure your use of AI doesn’t unlawfully discriminate against individuals or protected groups. This goes beyond what most attorneys probably think about when using AI for drafting or research, but consider: if you’re using AI to screen documents, evaluate claims, or prioritize case intake, bias in the model becomes your bias.
Verify and correct. Take reasonable steps to verify the accuracy of any AI-generated material, correct hallucinated or misleading content, and remove biased or harmful output. “Reasonable steps” will inevitably be the subject of future litigation, but the intent is clear — you can’t blindly copy-paste.
Consider disclosure. Consider whether your use of AI should be disclosed when producing content provided to the public. Note: this is a “consider” requirement, not a mandate. The bill leaves disclosure to professional judgment rather than imposing a blanket rule.
Court filings and citation verification
The bill would amend CCP § 128.7 — the sanctions statute — to make it explicitly unlawful for any court filing to contain a citation the attorney hasn’t personally read and verified, including citations provided by generative AI. This is a direct response to the hallucination problem: SB 574 converts the formerly implicit “don’t do this” into a statutory prohibition with teeth — sanctions under the existing § 128.7 framework.
For practitioners, this means building verification checkpoints into your workflow. If you’re using AI for legal research — and again, you probably are — the days of trusting the output without reading every cited case yourself are formally numbered.
Arbitrator limits on AI
This is the most forward-looking piece of the bill, and it arrives at an interesting moment.
SB 574 would impose three constraints on arbitrators:
No AI delegation. An arbitrator cannot delegate any part of their decision-making process to any generative AI tool.
Record-only reliance. Arbitrators can’t rely on AI-generated information from outside the record without disclosing that use to all parties.
Human responsibility. The arbitrator must assume full responsibility for the award regardless of any AI assistance.
As a practical matter, if SB 574 passes in its current form, California-based arbitrations would effectively be unable to use the AAA-ICDR’s AI arbitrator product, at least without significant structural changes to ensure the human arbitrator retains genuine decision-making authority rather than rubber-stamping AI output.
What SB 574 does NOT do
A few important negatives worth flagging:
It does not ban attorneys from using AI. This is regulation, not prohibition.
It does not prescribe specific tools or require AI registration, audits, or approved-vendor lists.
It does not create new criminal penalties. Enforcement is through existing mechanisms: court sanctions under § 128.7 and State Bar disciplinary proceedings. The bill doesn’t currently specify any enforcement mechanism for arbitrators, which is a gap worth watching.
The bigger picture
California has been on a regulatory tear: SB 53 (the Transparency in Frontier Artificial Intelligence Act, signed September 2025) established the first U.S. safety framework for frontier AI models. SB 243 regulates AI companion chatbots. AB 853 expands AI content disclosure requirements.
SB 574 is the profession-specific layer on top of that broader regulatory stack. And if California’s pattern holds — see: CCPA becoming the de facto national privacy standard, vehicle emission rules setting the industry baseline — expect other states to follow with similar attorney AI regulations.
These are exactly the kinds of issues I work on every day at Hoag Law.ai and as Chair of the Beverly Hills Bar Association’s AI and the Law Section, alongside Vice-Chair Amanda Harris. If you’re building AI products, navigating AI compliance, or just trying to figure out how to use these tools responsibly in your practice, let’s talk.