Legal Ethics and Artificial Intelligence

By Timothy Casey

Modern artificial intelligence tools offer the promise of quick and efficient solutions to complex questions. General applications such as OpenAI’s ChatGPT, Microsoft’s Bing A.I., and Google’s Bard, as well as law-specific applications such as CaseText’s CoCounsel, are based on advanced, deep-learning, language-based artificial intelligence. These applications use generative artificial intelligence that analyzes massive data sets to provide natural language responses to questions submitted by human users. While these tools may offer advantages over existing computer-assisted research tools such as Lexis and Westlaw, there are unseen dangers as well.

A New York federal district court recently imposed sanctions on a lawyer who used ChatGPT to write a legal brief. The lawyer submitted the brief as his own work, and the brief contained material misstatements of law, including references to six non-existent case authorities. (Mata v. Avianca, Inc., Opinion and Order on Sanctions, No. 22-CV-1461 (PKC) (S.D.N.Y. June 22, 2023, Castel, J.) (hereafter, Sanctions Order).) Of note, the fictitious cases produced by ChatGPT included “indicia of reliability” such as “case captions, the names of the judges from the correct locations, and detailed fact patterns and legal analysis that sounded authentic.” (Sanctions Order, supra, at 20.) In response to a request for the text of the fake opinions, ChatGPT produced an entire fake opinion allegedly decided by the Eleventh Circuit. (Sanctions Order, Appendix A.)

What are the ethical issues surrounding the use of artificial intelligence, specifically programs such as ChatGPT, Microsoft’s Bing A.I. chatbot, Google’s Bard, Harvey, or CaseText CoCounsel?

Lawyers have several ethical obligations that intersect with the use of modern technologies such as artificial intelligence.

Lawyers have duties of competence and diligence. California Rule of Professional Conduct 1.1(b) defines competence as applying “the (i) learning and skill, and (ii) mental, emotional, and physical ability reasonably* necessary for the performance of such [legal] service.” (Cal. R. Prof. Conduct r. 1.1(b)). As part of the duty of competence, lawyers have an obligation to understand the “relevant technology” and the advantages and disadvantages of those technologies. “The duties set forth in this rule include the duty to keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology.” (Cal. R. Prof. Conduct, r. 1.1, cmt. 1).

Lawyers must be candid with the court and others related to the representation. “A lawyer shall not: (1) knowingly* make a false statement of fact or law to a tribunal* or fail to correct a false statement of material fact or law previously made to the tribunal* by the lawyer.” (Cal. R. Prof. Conduct r. 3.3(a)(1)).

Moreover, the State Bar Act mandates that a lawyer must “employ, for the purpose of maintaining the causes confided to him or her, those means only as are consistent with truth, and never to seek to mislead the judge or any judicial officer by an artifice or false statement of fact or law.” (Bus. & Prof. Code § 6068(d)). This obligation of honesty and candor extends beyond the practice of law. A lawyer may not “engage in conduct involving dishonesty, fraud,* deceit, or reckless or intentional misrepresentation.” (Cal. R. Prof. Conduct r. 8.4(c)).

In addition, California lawyers must be mindful of the duty of confidentiality. (Cal. R. Prof. Conduct r. 1.6; Bus. & Prof. Code § 6068(e)).

Lawyers are also responsible for supervising the work of subordinate lawyers and staff in the firm. (Cal. R. Prof. Conduct r. 5.1 and 5.3).

The risks of using A.I. tools extend to law students as well, and many law schools have enacted policies governing the use of such tools. (See, e.g., UCLA School of Law, Academic Standards and Related Procedures – J.D., § XIV.B. (prohibiting “submitting written work drafted or edited in any way by an artificial intelligence (AI) content generator (including but not limited to OpenAI’s ChatGPT, Microsoft’s Bing AI Chatbot, and Google’s Bard), without the prior and explicit approval by the instructor.”).)

Last, but certainly not least, lawyers have a statutory obligation to verify the accuracy of filings. Failure to do so may result in sanctions under both federal and California rules. (Fed. R. Civ. P. 11; Code Civ. Proc. § 128.5).

In conclusion, California lawyers bear the ultimate responsibility for their work. While ChatGPT and other technologies present the opportunity to increase efficiency, they may do so at the expense of accuracy. A lawyer who presents the output of a computer program as his or her own work product risks violating numerous ethical rules.


By Tim Casey, casey@law.ucla.edu