Ethically Managing The Shiny New Toy: Ethical Obligations And Generative AI

By Edward J. McIntyre

One would have to have been living on another planet not to recognize that generative AI is upon us — and all the rage. Whether ChatGPT, Bing A.I., Bard, or some other platform, it’s hard to miss the proliferation of information — and misinformation — about large-language-model platforms. They’ll either miraculously transform the way we work, recreate, and communicate — or destroy it. Rest assured, there’s only more to come.

Will generative AI make younger lawyers and paraprofessional personnel in larger firms obsolete? Will it give small firms and sole practitioners new tools to compete with “the big guys”? Those questions must wait for time to sort out. What does the advent of this shiny new toy mean for lawyers right now?

Rules 5.1 and 5.3 give some guidance and implicate many other rules. First, as rule 5.1 tells us, all lawyers with managerial authority — whether in a law firm or a government office — have the obligation to make reasonable efforts to ensure that the firm — including the office of a governmental organization[1] — has measures in effect that give reasonable assurance that all lawyers comply with the Rules of Professional Conduct and the State Bar Act.[2] This ethical responsibility extends to lawyers with intermediate managerial authority.[3]

The responsibilities do not, however, stop with managers. Any lawyer with supervisory authority over another lawyer must also make reasonable efforts to ensure that the supervised lawyer complies with the Rules and the Act.[4]

The responsibilities of managers and supervisors go further. Lawyers who have managerial authority, and lawyers with supervisory authority, over non-lawyer personnel have the ethical obligation to make reasonable efforts to ensure that the non-lawyer’s conduct is compatible with the professional obligations of the lawyer.[5]

Which obligations of lawyers does the advent of generative AI implicate? Among them, the duties of competence (rule 1.1)[6]; diligence (rule 1.3); client communication (rule 1.4); confidentiality (rule 1.6 and Business and Professions Code section 6068(e)(1)); protection of client property (rule 1.15); candor to the tribunal (rule 3.3); and more. But that’s a sufficiently long shopping list for now.

The issue for managers is this: does your firm or office have policies or procedures in place reasonably designed to ensure that all lawyers will comply with these rules and the Act as they begin to experiment with or use generative AI? For those who supervise other lawyers, what are you doing to ensure that your supervisees are complying with the rules and the Act as they become enamored of this new tool?

The same questions apply to non-lawyer personnel. Does the firm or office have policies or procedures, and sufficient supervision, to ensure that non-lawyers’ conduct is compatible with the rules and the Act when it comes to testing, experimenting with, or using generative AI?

Is it sufficient to have a policy that no one may use ChatGPT or any other platform at all — a head-in-the-sand approach? Perhaps too blunt a tool. ChatGPT, used intelligently, may be useful in serving clients’ needs — and may significantly reduce costs. Think, for example, about the hours that go into summarizing deposition or trial testimony, and the lightning speed at which generative AI can perform the task — at least in the first instance. Or the time spent on a preliminary draft of a letter. Given the correct prompts — and careful review and editing — the time, and cost, may come down significantly.

Can client information be used in generative AI prompts to get increasingly relevant responses? California’s strict confidentiality obligation — rule 1.6 and section 6068, subdivision (e)(1) — almost assuredly prohibits it; once client information is entered into such an AI platform, confidentiality is likely gone forever. The same is true of client proprietary information, whether trade secrets or otherwise.

Does client communication — rule 1.4 — require a lawyer to advise a client that the lawyer is using generative AI as one of the “means by which to accomplish the client’s objectives”? Given its recent advent, and the attendant risks, such communication is likely required. Client approval? Not necessarily ethically required, but good risk management may suggest otherwise.

All of which brings us back to where we started. Does your firm or office have policies or practices in place — for lawyers and non-lawyer personnel — that set standards for the use of generative AI, compatible with our ethical obligations under both the rules and the Act?


[1]  Rule 1.0.1(c).

[2]  Rule 5.1(a).

[3]  Rule 5.1(a) Comment [3].

[4]  Rule 5.1(b).

[5]  Rules 5.3(a) and 5.3(b).

[6]  Competence includes the obligation to stay abreast of the benefits and risks of technology (rule 1.1 Comment [1]).