AI can now be used responsibly to increase the quality of legal services. However, in Zhang v. Chen, 2024 BCSC 285, a lawyer cited two fictitious cases in her notice of application; the citations were generated by ChatGPT and never verified for authenticity.

The family law case involved an application for costs following an unsuccessful motion for parenting time. Ms. Zhang sought party and party costs, arguing she was substantially successful in opposing the motion. She also sought special costs against Mr. Chen's counsel, Ms. Ke, for including the two fake cases in her application, which Ms. Ke later admitted were generated by ChatGPT and had not been verified. Ms. Zhang argued this conduct was reprehensible and warranted a rebuke, given the time and expense it caused opposing counsel.

As the Judge aptly stated, "Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice."

Ms. Ke's counsel argued that the special costs sought were a matter for the Law Society and emphasized that the fictitious cases were withdrawn before the hearing, and thus did not affect the case's merits.

The court, presided over by Justice D.M. Masuhara, found Ms. Zhang to be substantially successful, warranting an award of costs in her favour. However, regarding the special costs sought against Ms. Ke personally, the court noted that such an order would be unprecedented and requires a high threshold of reprehensible conduct or abuse of process. Despite the seriousness of Ms. Ke's mistake, the court found no intent to deceive, given the steps she took to correct the error and the context in which the fictitious cases were unlikely to influence the court's decision.

The court dismissed the request for special costs against Ms. Ke, noting the significant negative publicity and professional embarrassment she had already faced. However, it held Ms. Ke personally liable for the additional costs incurred because of her inclusion of the fake cases, emphasizing that lawyers must verify the accuracy of materials submitted to court, especially when using AI-generated content. The court also ordered Ms. Ke to review her other files for AI-generated case citations and to report back to the court and opposing parties if any were found.

This case highlights the legal profession’s challenges with integrating AI tools like ChatGPT, underscoring the importance of verifying AI-generated information and the ethical responsibilities of lawyers in ensuring the accuracy of their submissions to the court.
