A personal injury lawyer now faces potential sanctions for using the artificial intelligence ChatGPT to draft a legal brief in Mata v. Avianca. In response, federal judge Brantley Starr now requires lawyers in his cases to certify either that they did not use artificial intelligence to draft their filings or that a human checked any AI-drafted language for accuracy.

Lawyer Uses ChatGPT to Create Legal Brief

The claimant was suing a Colombian airline for injuries arising from a metal food cart. The defendant applied to have the claim dismissed. The claimant’s lawyer responded with a brief based on the citations and legal opinions of ChatGPT. The fabricated legal citations confused the airline’s lawyers: not only were the cases made up, but ChatGPT created fictitious citations for the made-up caselaw.

ChatGPT invented the cases, and the lawyer did not fact-check them. It is unclear whether he was using the free version or the paid, updated version of ChatGPT, or whether that would have made a difference. The lawyer was required to respond to the airline lawyers’ concerns in an affidavit. In it, he said he had consulted ChatGPT to supplement his research and had relied on its citations and legal opinions without verifying their accuracy (affidavit of lawyer Steven A. Schwartz).

Ruling on the Use of ChatGPT for Legal Opinions

Setting rules for his courtroom, Texas federal judge Brantley Starr said,

“All attorneys appearing before the court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,”

In the underlying Mata case, a legal submission replete with “bogus judicial decisions, with bogus quotes and bogus internal citations” had been filed with the court. Judge Castel ordered a hearing for June 8, 2023, to discuss potential sanctions.

Use of AI for Legal Briefs

Currently, AIs like ChatGPT are prone to hallucination and bias. They make things up, including quotes and citations. Furthermore, while lawyers swear an oath, AI and its designers do not.

“Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle,” read the notice put out by the court.

The misuse of AI is likely to have a chilling effect on its adoption by law firms globally. Providers of artificial intelligence designed for the legal community, such as Harvey.AI, need to pay attention to these court decisions. Thousands of lawyers in America are already using Harvey.AI, and more than 15,000 law firms are reportedly on its waiting list.

There are two key takeaways from this case: (1) all AI-generated content must be noted as such so the court is aware of its use; and (2) lawyers must confirm that the content was checked by an actual human being.

AI platforms are very powerful and will have many uses in the law, including form divorces, discovery requests, suggested errors in documents, and suggested questions for oral argument. However, legal briefs and legal arguments still need to be the work product of human lawyers, at least for now. Because current AIs are prone to hallucinating and making up legal citations, lawyers need to fact-check any drafts created by their AI helpers.

The Future of AI for Legal Citations and Opinions

Significant problems with reliability and bias will limit the use of AI in its current form. Lawyers swear an oath to uphold the rule of law and the rights and freedoms of all persons. Generative artificial intelligence is the product of programming created by humans who did not have to take this oath.

Legal AI will inevitably play important roles in the law firms of the future. Currently, however, it is difficult to weigh the biases built into today’s AIs, and we do not know whether an unknown bias will prove detrimental to humanity. The misuse of AI in Canadian law firms could put the administration of justice beyond the reach of humans. The legal community needs to take steps to ensure that paralegals, lawyers, and judges, when using AI, verify its citations and legal opinions.

 
