
AI-generated ‘hallucinations’ in filings: why do lawyers keep using ChatGPT?
12/06/2025
Attorneys keep getting in trouble for submitting filings with AI-generated ‘hallucinations.’
- Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.”
- The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake.
- In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations.
So why haven’t they stopped?
- The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now.
- For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant.
- Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work.
- One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator, one that can give you either correct information or convincingly phrased nonsense.
Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers.
- “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.
In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly.
Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.”
The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor,’ not as a producer of documents,” one respondent said.
But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.
In a recent case against Florida journalist Timothy Burke, who was arrested for publishing unaired Fox News footage, US District Judge Kathryn Kimball Mizelle struck a motion riddled with fabricated citations from the record. Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion.
In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.
Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers.
That filing included a citation with an “inaccurate title and inaccurate authors.”
Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use.
Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.
These documents do, in fact, matter — at least in the eyes of judges.
In another case, a California judge presiding over litigation against State Farm was initially swayed by the arguments in a brief, only to find that the case law it cited was completely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.
Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views.
“I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said. But like anyone using AI tools, lawyers who rely on them for legal research and writing need to be careful to check the work those tools produce.
Part of the problem is that attorneys often find themselves short on time — an issue Perlman says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” he said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (That said, the cases do at least typically exist.)
Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces.
“I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.
Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate.
He’s also used ChatGPT to help write legislation. In 2024, he included AI-generated text in a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are; then, “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time.
Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor, but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.
Kolodin, who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election, has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real.
“You don’t just typically send out a junior associate’s work product without checking the citations,” Kolodin said. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”
Kolodin said he uses both ChatGPT Pro’s “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. In his experience, he said, LexisNexis has a higher hallucination rate than ChatGPT, whose hallucination rate has “gone down substantially over the past year.”
AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools.
- Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads.
- The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.”
- Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.
Perlman is bullish on lawyers’ use of AI.
- “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said.
- “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”
Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
Source: https://www.theverge.com/policy/677373/lawyers-chatgpt-hallucinations-ai