Straight-Up Made-Up: the Ongoing Saga of Problematic AI-Generated Legal Submissions


Don’t include fake cases in court submissions. Don’t miscite cases for wrong propositions of law. Don’t refer to cases in court if you haven’t read them. This is the most basic of lawyering stuff. Probably too basic to even be included in an Advocacy 101 course. Yet, over the last two years, lawyers have made headlines for doing exactly these things. Their common partner in misadventure? Generative AI.

The landscape of AI-generated errors in court submissions

AI-related mishaps first came to the attention of the legal community (and the rest of the world) in May 2023 when an American lawyer misused ChatGPT for legal research and made the front page of the New York Times after he included references to non-existent cases in court filings. When asked by the court to explain himself, the lawyer stated, “I did not comprehend that ChatGPT could fabricate cases” and indicated he was “embarrassed, humiliated and deeply remorseful” about not taking any steps to verify the accuracy of the tool’s outputs. Given the wide coverage this case received, many assumed that the days of lawyers misusing AI were numbered – surely lawyers would have gotten the message not to uncritically trust the outputs of ChatGPT and other generative AI tools.

Unfortunately, more than two years later, we have many more examples of the same thing happening. French researcher Damien Charlotin has created a database to track legal decisions that reference “hallucinated” content. Over 150 instances are currently listed (this number includes cases involving self-represented litigants). Worryingly, the database suggests that the “fake case” trend is actually accelerating, with the vast majority of instances occurring in 2025. And, of course, these numbers no doubt represent an under-counting given that not every instance of problematic AI-generated legal content makes its way to a written decision and not every written legal decision gets publicly reported.

In Canada, I am aware of four reported cases of lawyers including problematic AI-generated content in court submissions:

  • In February 2024, a lawyer appearing before the Supreme Court of British Columbia in a family law matter had costs awarded against her personally after she included two non-existent cases, generated by ChatGPT, in a notice of application filed with the Court. Like the American lawyer mentioned above, this lawyer was not aware that the technology could produce fictitious authorities and was “remorseful” and “deeply embarrassed”. (Zhang v Chen, 2024 BCSC 285)
  • In April 2025, a lawyer appearing before the Federal Court was admonished after the Court was unable to locate two cases cited in filed materials. The Court observed that the AI tool used in this case (“a professional legal research platform designed specifically for Canadian immigration and refugee law practitioner”) also hallucinated a legal test and “cited, as authority, a case which had no bearing on the issue at all.” This use of generative AI was undeclared, notwithstanding a Federal Court notice requiring disclosure. The Court found that a costs award (in an amount to be determined) was appropriate given “that the use of generative artificial intelligence [was] not only undeclared but, frankly, concealed from the Court”. The Court further found that, with respect to any costs that are awarded, there should be consideration as to whether the lawyer should be required to pay any of those costs personally. (Hussein v. Canada (Immigration, Refugees and Citizenship), 2025 FC 1060)
  • In May 2025, a lawyer appearing before the Ontario Superior Court of Justice in a criminal law matter was required to show cause as to why she should not be held in contempt of court after her factum was found to include “references to several non-existent or fake precedent court cases.” Prior to a contempt hearing, the lawyer informed the Court that her staff, unbeknownst to her, had used ChatGPT when preparing the factum. The lawyer took full responsibility, expressed deep regret for what happened, committed to taking continuing education training in technology and legal ethics and said that she implemented new protocols in her office to prevent this from happening again. Finding that the purposes of a contempt hearing were already met, the Court dismissed that proceeding on the conditions that the lawyer: (1) fulfill her commitment to take continuing education training and (2) not bill her client for the research, factum writing and attendance at the underlying motion. (Ko v. Li, 2025 ONSC 2965)
  • In May 2025, a lawyer appearing before the Ontario Court of Justice in a criminal law matter was ordered to prepare a new set of defence submissions after the Court observed that the original submissions included: a case that appeared to be fictitious; several case citations to unrelated civil cases; and cases cited for legal propositions not found in the cases. There was no explicit finding in the reasons that generative AI was used to prepare these submissions, with the Court noting that there would be “a discussion at the conclusion of the trial about how the defence submissions were prepared.” That said, the Court’s suspicion that generative AI had been used was apparent in its specific requirement that “generative AI or commercial legal software that uses GenAI must not be used for legal research for [the new submissions]”. (R. v. Chand, 2025 ONCJ 282)

Why should we be concerned with problematic AI-generated content?

It goes without saying that fabricated or misleading legal submissions are bad. No one is applauding these lawyers. That said, it is worth unpacking, briefly, exactly what is at stake when faulty generative AI submissions make their way into Canadian courtrooms.

Most immediately, there are concerns about the proper administration of justice. As noted by Justice Myers in the Ko v. Li case, “a court decision that is based on fake laws would be an outrageous miscarriage of justice to the parties.” Legal disputes must be adjudicated on the basis of actual, not fabricated, legal authorities.
