Using AI in Law: Compliance with ABA Model Rules and Best Practices [UPDATED: February 19, 2026]

June 13, 2024
Marc Hoag

This is educational material; it does not constitute legal advice, and no attorney-client relationship is created by this article. If you have legal questions, you should contact and engage an attorney. Any discussion of technical matters may be inaccurate or out of date, so you should consult the vendors mentioned below to confirm accuracy.


The ABA Model Rules of Professional Conduct strongly imply, if not yet require, the adoption of generative AI technology in the practice of law.


The legal sector is rapidly embracing generative AI to streamline tasks such as research, drafting, document review, finding the needle in a haystack of email discovery, and more.

The 2023 case of a New York lawyer sanctioned for filing a brief with fake AI-generated case citations gave the world an early cautionary reminder: while AI can enhance our capabilities, it must be wielded with the utmost responsibility to prevent ethical violations and safeguard our reputation.

While hallucinations have largely vanished as of this update in February 2026, attorneys must still review AI-generated output just as they review the work of associates and paralegals.

Accordingly, the ABA Model Rules of Professional Conduct and California Rules of Professional Conduct, which largely align with the ABA Rules, not only imply but arguably necessitate the adoption of technology, including generative AI, to maintain competent client representation.

Embracing Competence through Technology

Maintaining competence is a fundamental ethical obligation for lawyers. According to ABA Rule 1.1 on Competence, this includes staying updated on changes in the law and its practice, including the benefits and risks associated with relevant technology (Comment 8 to ABA Rule 1.1).

This rule arguably implies that lawyers should use AI tools to enhance their practice, while emphasizing the need to ensure the accuracy and reliability of AI-generated content.

Regular training and the adoption of advanced AI tools are prudent ways to fulfill this obligation, ensuring that lawyers are well-versed in both the capabilities and limitations of AI technologies.

Enhancing Diligence with AI

ABA Rule 1.3 on Diligence requires lawyers to act with reasonable diligence and promptness in representing a client; this strongly implies that technology should be used to manage a lawyer’s workload.

A lawyer who relied on pen and paper to draft in days or weeks what a word processor produces in hours, or who eschewed email entirely in favor of postal mail when communicating with opposing counsel, would arguably run afoul of this rule, if only because of the unnecessary delays and costs that such anachronistic work would impose on clients.

AI tools can likewise be a significant asset in managing workloads and delivering timely service to clients. By automating routine tasks like legal research, document review, drafting, and sifting through hundreds or thousands of discovery documents, lawyers can free up valuable time for the more complex and strategic aspects of their work; in some instances the time and cost savings are profound, reducing to mere minutes what would otherwise have taken hours.

However, it is crucial to remember that AI is not a substitute for human judgment and expertise, and errors are still a concern. Lawyers must still exercise diligence in reviewing and verifying AI-generated content to ensure its accuracy and relevance to the specific case at hand.

Ensuring Accuracy in AI Outputs

ABA Rule 3.1 on Meritorious Claims requires that lawyers only bring forward claims that are based on law and fact. Comment 2 to this rule emphasizes that an “action is frivolous … if the lawyer cannot make a good faith argument on the merits of the action taken” or support it by a good faith argument for the extension, modification, or reversal of existing law.

Therefore, it is essential to independently verify the accuracy of AI-generated content, including sources, citations, case law, statutes, and quoted language.

Manually checking all citations and legal references against primary legal sources is crucial, and simply asking or “prompting” a generative AI tool to “double-check its work,” as it were, will not usually suffice.

What can sometimes help is to use multiple generative AI platforms in parallel, running the output from the first platform through each of the others in turn, filtering out more errors with each pass.
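As a purely illustrative sketch, and emphatically not legal software, a few lines of Python can at least assemble a manual-verification checklist by pulling citation-like strings out of a draft. The regex below is a hypothetical toy covering only a handful of federal reporter formats; real citation practice is far more varied, and every extracted item still requires human verification against primary sources:

```python
import re

# Toy pattern covering a few common reporters (U.S., F.2d/F.3d/F.4th, F. Supp.).
# Real citations are far more varied; this is only an illustrative sketch.
CITATION_RE = re.compile(
    r"\b\d{1,3}\s+(?:U\.S\.|F\.(?:2d|3d|4th)|F\.\s?Supp\.(?:\s?2d|\s?3d)?)\s+\d{1,4}\b"
)

def citation_checklist(draft: str) -> list[str]:
    """Return each distinct citation-like string found in the draft,
    in order of first appearance, for manual verification."""
    seen: dict[str, None] = {}
    for match in CITATION_RE.finditer(draft):
        seen.setdefault(match.group(0))
    return list(seen)

draft = (
    "As held in Smith v. Jones, 123 F.3d 456, and reaffirmed at "
    "567 U.S. 890, the standard applies. See also 123 F.3d 456."
)
print(citation_checklist(draft))  # → ['123 F.3d 456', '567 U.S. 890']
```

A tool like this only tells you what to check; it says nothing about whether a citation is real, accurate, or on point. That remains the lawyer’s job.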

Maintaining Transparency and Protecting Confidentiality

Transparency in disclosing AI usage not only fosters trust but also helps the court and clients understand the role of AI in your work. ABA Rule 1.4 on Communication emphasizes the importance of keeping clients reasonably informed about the status of their matter.

When using AI tools, lawyers should disclose this to their clients and explain how AI is being used to assist with their case. This can be done through engagement letters or other client communications. Below is an example of such a clause:

Use of Artificial Intelligence (AI)

In representing you, I may use generative AI tools for tasks such as legal research, drafting, email and file management, and analysis. These tools augment, but do not replace, my professional judgment. Any use of such tools will be consistent with applicable ethics rules, and I will maintain human oversight and protect your confidential information.

However, ABA Rule 1.6 on Confidentiality mandates that lawyers must protect client information. When using AI tools, particularly those connected to public APIs, it is crucial to ensure that no confidential information is inadvertently disclosed.

Crucially, when evaluating any AI platform for use with client-confidential information, attorneys should assess the following factors — roughly in decreasing order of importance:

  1. Zero Data Retention (ZDR) Policies. The single most critical requirement. Confirm that the AI provider contractually commits to not retaining your prompts, outputs, or uploaded documents beyond the duration of the session or API call. This is distinct from merely “opting out” of training — ZDR means the provider does not store your data at all, period. Major providers like OpenAI and Anthropic offer ZDR through their API and enterprise tiers if negotiated, but their free consumer products typically do not provide this protection. Always verify ZDR commitments in the provider’s Data Processing Agreement (DPA) or enterprise terms, not just their marketing materials.
  2. No Training on Your Data. Separately from retention, confirm that the provider does not use your inputs or outputs to train, fine-tune, or improve its models. While most enterprise and API tiers now exclude customer data from training by default, consumer-tier products often do not. This should be an explicit contractual commitment, not merely a toggle in a settings menu that can be changed unilaterally.
  3. Data Isolation and Access Controls. Understand the provider’s architecture: Is your data logically or physically isolated from other customers? Who at the provider has access to your data, and under what circumstances (e.g., trust and safety review, debugging)? Enterprise deployments and private instances offer stronger isolation, but even shared-infrastructure API access can be acceptable if ZDR and no-training commitments are in place.
  4. Encryption in Transit and at Rest. Ensure the platform uses industry-standard encryption (TLS 1.2+ in transit; AES-256 at rest). This is table stakes — virtually every reputable provider meets this standard — but it should still be confirmed, particularly for any data that is retained (e.g., conversation logs you choose to save on the platform).
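Of the four factors above, only encryption in transit is directly testable from the client side. As a minimal sketch using Python’s standard `ssl` module, the snippet below builds a client context that enforces a TLS 1.2 floor; the network-dependent helper is defined but not invoked here:

```python
import socket
import ssl

def tls_floor_context() -> ssl.SSLContext:
    """A client context that refuses anything older than TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to a host and report the TLS version actually negotiated.
    (Requires network access, so it is not called in this sketch.)"""
    ctx = tls_floor_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

print(tls_floor_context().minimum_version >= ssl.TLSVersion.TLSv1_2)  # → True
```

Encryption at rest (AES-256 on the provider’s side) cannot be verified from the client, which is why it belongs in the contractual review alongside the ZDR and no-training commitments.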

The critical distinction is between cloud storage (where the platform functions like a filing cabinet, holding your data for your later retrieval) and data processing (where the platform processes your data ephemerally and discards it).

For AI tools handling privileged information, you want processing without storage — and ZDR is how you get there.

And no, public AI platforms are not the same as cloud storage. The former is considered disclosure to a third party and thus risks waiving privilege, while the latter is not.

Finally, if the platform allows you to save conversation histories or upload documents for persistent use, confirm that (a) such data is deletable on demand, (b) deletion is actual deletion and not merely soft-deletion or archival, and (c) you understand the provider’s data retention schedule for any backups or logs.

My AI Privacy Guide was built for precisely this reason: to help you understand the various platforms’ Terms of Service and Privacy Policies so that you can make a better-informed decision.

Ethical Supervision of AI Tools

As AI tools — generative AI specifically — become more prevalent in legal practice, lawyers must remember that they are ultimately responsible for the work product generated using these tools.

ABA Rules 5.1 and 5.3 emphasize the ethical duties of lawyers to supervise subordinate lawyers and non-lawyer assistants, including AI tools. This involves understanding how the AI tool works, monitoring its output for accuracy and reliability, and ensuring that its use aligns with ethical and professional standards, especially in matters involving privacy and confidentiality.

Addressing Emerging AI Challenges

The rapid advancement of AI in the legal industry raises novel ethical and legal questions that lawyers must be prepared to address. ABA Resolution 112 (2019) urges courts and lawyers to address emerging issues related to AI usage, such as algorithmic bias, explainability, and transparency.

By staying informed about these issues and proactively implementing best practices for ethical AI usage, lawyers can harness the power of AI to improve efficiency and client service while upholding the highest standards of professionalism and integrity. This proactive approach includes engaging in continuous education about AI technologies and their ethical implications.

If you’re curious to try an interesting academic thought exercise, consider asking your favorite AI chatbot, be it ChatGPT, Claude, or GC.AI, to draft a new set of fictitious rules for the ABA or California Rules of Professional Conduct governing the use of AI. I used the prompt below, but you should play around and see what you come up with:

Can you please produce a fictitious rule for the California Rules of Professional Conduct, about the use of AI like LLMs like ChatGPT? Please give it appropriate headings and section numbers to fit into a logically coherent place in the existing rules, make sure it’s suitably detailed, and so on.

I don’t want to give anything away, but I think you’ll find this exercise as entertaining as it is awe-inspiring. And while it obviously offers no practical value, it certainly suggests a very plausible direction for where things are headed with generative AI and the legal community.

Spoiler alert: I’ll state for the record that I think it will soon be ethical malpractice NOT to use AI in the practice of law, or indeed in any profession; say, by 2030.

Closing Thoughts

Generative AI is a revolutionary, society-changing tool the likes of which humanity has never seen before: a fantastic magic heretofore relegated to the realm of fantasy and science fiction. It can significantly enhance efficiency, accuracy, and client service in legal practice, but it is not yet a replacement for human judgment and expertise.

Attorneys must remain ethically responsible for their work product, regardless of the tools used. By selecting appropriate AI tools, verifying AI-generated content, adhering to court rules, transparently disclosing AI usage, and addressing the ethical implications of AI, lawyers can effectively integrate AI into their practice while avoiding potential pitfalls.

And despite the risks, it is precisely because of the immense power and productivity gains offered by generative AI that the ABA and California Ethics Rules at least impliedly support and potentially encourage the thoughtful use of such technology to uphold the high standards of competence and ethical practice required in the practice of law, which at the end of the day, must always be in the client’s best interest.

For more insights on responsible AI usage in legal practice, feel free to reach out or explore our resources.
