By Brandon Kimura
Artificial intelligence has long been imagined by scientists and popular culture alike. With the public release of ChatGPT, AI is now a reality, and its potential applications in every industry are evolving daily. The law is no exception. From applying AI to discovery to allowing AI to argue in court, AI is here, as are the ethical issues that arise from its use. Thankfully, while the problems AI poses may be novel, at least some of the ethical answers appear to be comfortably traditional.
One of the first cases to deal with the ethical issues of AI in the law is Roberto Mata v. Avianca, Inc., 22-cv-1461 (PKC), filed in the United States District Court for the Southern District of New York. The plaintiff, Mr. Mata, asserted that in 2019 he was injured during a flight when a serving cart struck his knee. Avianca moved to dismiss the case, asserting that the claim was time-barred.
Mr. Mata’s counsel, Steven Schwartz, filed a response to Avianca’s motion to dismiss. In preparing that response, he turned to ChatGPT. Unlike many of its users, however, Mr. Schwartz did not ask the AI program to draft the document for him. Instead, he relied upon it for his legal research. Unfortunately, in a Skynet-like betrayal, ChatGPT turned on its user. ChatGPT’s treachery was audacious and, at the same time, almost childishly apparent. The AI fabricated at least six cases, complete with quotations. It also cited several other real cases with fictitious holdings and/or quotations. The AI cloaked these citations with authentic-looking case captions, court names, docket numbers, and the names of judges and lawyers. When challenged on these citations, the AI, with con-man confidence, confirmed each case was real and then provided what it said was a “brief excerpt” from each one. It also asserted that the cases could be found on Westlaw and Lexis. Without access to either database, Mr. Schwartz relied upon the fictional cases for substantive portions of his brief.
In its reply, Avianca easily saw through ChatGPT’s deception. It informed the court that it could not locate several cases cited in Mr. Schwartz’s brief. The court, finding the same, ordered Mr. Schwartz to provide the cases. Undeterred by Avianca’s and the court’s questions and suspicions, Mr. Schwartz returned to ChatGPT. After some delay, he filed an affidavit attaching the ChatGPT excerpts of the cases, noting that they “may not be inclusive of the entire opinions but only what is made available by [an] online database.” When Avianca informed the court that it still could not locate the cases, the court ordered Mr. Schwartz to show cause why he should not be sanctioned. Mr. Schwartz, now forced to realize that he had been deceived, admitted the error but argued that his ignorance was reasonable because he “did not understand [that ChatGPT] was not a search engine, but a generative language processing tool primarily designed to generate human-like text response…with little regard for whether those responses are factual.”
Predictably, the court was unimpressed. While the complete transcript is not yet publicly available online, the awkwardness of the sanctions hearing echoes through the court’s Opinion and Order on Sanctions. The opinion begins with the premise that lawyers have an ethical obligation to ensure the accuracy of their filings. It then finds that Mr. Schwartz and his firm “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by…ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”
The court also noted that if Mr. Schwartz had “come clean” after the initial request by Avianca and the court, “the record now would look quite different.” The court found that Mr. Schwartz ignored these “red flags”[1] and instead “doubled down … and did not begin to dribble out the truth until … after the Court issued an Order to Show Cause.”
The court found that Mr. Schwartz acted in bad faith when he represented in an affidavit that ChatGPT “supplemented” his research to create “the false impression that he had done other, meaningful research on the issue and did not rely exclusively” on ChatGPT, when, in fact, “it was the only source of his substantive arguments.” It also found that Mr. Schwartz’s actions went beyond being “objectively unreasonable” and became “bad faith” when he “consciously ignored” the numerous indications the cases he had cited were false.
In its legal conclusions, the court listed a litany of ethical rules that Mr. Schwartz and his firm violated when they filed their brief with fraudulent citations, slow-played their response, and then stalled both the court and opposing counsel long after they should have recognized, acknowledged, and corrected the problem.
The court considered a wide variety of sanctions but eventually settled on only two. It ordered Mr. Schwartz and his firm to inform their client, as well as the judges falsely identified as the authors of the fictitious cases, of the sanctions the court had imposed. It also imposed a fine of $5,000.
Mata gives us a peek into some of the novel pitfalls of using AI in the practice of law. The court’s conclusions, however, were not driven entirely by the use of AI, or even by the reliance on its fictional cases. Instead, its focus was on the attorneys’ actions after the error was made known to them. While AI will undoubtedly create new and unique problems, at least part of the ethical answer to those problems remains one of the most time-tested and honored of rituals … falling on one’s sword.
[1] The court also noted red flags within the fabricated cases themselves, observing of one that “Its legal analysis is gibberish” and that its “summary of the case’s procedural history is difficult to follow and borders on nonsensical.”