The submission of fake legal cases and arguments generated by ChatGPT has resulted in a $5,000 sanction against a lawyer and his firm. In the underlying personal injury case, the lawyers used AI to draft legal briefs opposing dismissal of the claim. The artificial intelligence performed the legal research and writing normally done by junior lawyers and paralegals, and the result was largely fabricated, yet convincing enough to fool the lawyers themselves. As one sanctioned lawyer stated in his June 6, 2023 declaration:

…I still could not fathom that ChatGPT could produce multiple fictitious cases, all of which had various indicia of reliability such as case captions, the names of the judges from the correct locations, and detailed fact patterns and legal analysis that sounded authentic. The First OSC caused me to have doubts. As a result, I asked ChatGPT directly whether one of the cases it cited, “Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2009),” was a real case. Based on what I was beginning to realize about ChatGPT, I highly suspected that it was not. However, ChatGPT again responded that Varghese “does indeed exist” and even told me that it was available on Westlaw and LexisNexis, contrary to what the Court and defendant’s counsel were saying. This confirmed my suspicion that ChatGPT was not providing accurate information and was instead simply responding to language prompts without regard for the truth of the answers it was providing. However, by this time the cases had already been cited in our opposition papers and provided to the Court.

The court found that the lawyer acted in bad faith by continuing to claim the fake cases were real. His subjective bad faith was further supported by the untruthful assertion that ChatGPT was merely a “supplement” to his research, his conflicting accounts of his queries to ChatGPT about whether the cases were real, and his failure to disclose his reliance on ChatGPT in his affidavit.

Under state law, the law firm was jointly and severally liable for its lawyers’ violations. The firm acknowledged responsibility and identified remedial measures it had taken, including an expanded Fastcase subscription and CLE programming. The fake cases were not submitted for financial gain, nor out of personal animus.

The Court concluded that a penalty of $5,000, paid into the Registry of the Court, was sufficient but not more than necessary to advance the goals of specific and general deterrence. The penalty was imposed to deter, not to punish or compensate.

When AI legal research and writing becomes as accurate as human research, these mistakes will no longer occur. That leaves the question of whether to regulate the use of AI to perform legal research and writing. The quick fix seems to be requiring lawyers to notify the court when a document has been created by AI and, if so, to certify that the research has been reviewed by a human.

As artificial intelligence begins to take over the work of humans, building protocols to verify authentic human content will be necessary if we value humanity. As AI begins to surpass human-based legal research, human-directed legal opinion may also become automated. Expect more mistakes, but this new and powerful intelligence will likely only proliferate, becoming more accurate, reliable, and consistent.
