Lawyers Warned: ChatGPT Creates Nonexistent Cases

Lawyers face penalties for citing AI-generated nonexistent cases, underscoring the need for accuracy in AI-assisted legal research.

Another day, another lawyer getting burned by ChatGPT. The legal world is buzzing with yet another case of an attorney punished for citing nonexistent cases, all thanks to the uncritical use of generative AI. As of June 3, 2025, this isn't just a one-off embarrassment; it's a trend that keeps repeating itself, raising urgent questions about how lawyers, and frankly anyone relying on AI for research, should balance efficiency with accuracy[1][3][5].

Let’s face it: generative AI tools like ChatGPT are seductive. They promise to cut through the tedium of legal research, drafting briefs in minutes that might otherwise take hours. But as recent headlines show, there’s a real cost to trusting these systems blindly. The latest incident involves a lawyer who was reprimanded for including citations to cases that simply don’t exist—hallucinations, in AI parlance—in an official court filing[1]. This isn’t just a footnote in the annals of legal tech; it’s a cautionary tale that’s playing out in courtrooms across the country.

ChatGPT and similar large language models (LLMs) have flooded law offices, promising to streamline research, drafting, and even client communications. Legal tech startups and established firms alike have embraced these tools, integrating them into everyday workflows. The allure is obvious: AI can process vast amounts of text, summarize complex rulings, and suggest legal strategies—all in real time.

But here’s the rub: these models are designed to generate plausible-sounding text, not fact-checked legal advice. They “hallucinate”—a polite term for inventing facts or, in this case, entire court cases—when they don’t know the answer. And because the output is so convincing, even seasoned attorneys can be fooled.

Recent Cases and Sanctions

The most recent example, as reported by TechSpot and other outlets, involves a lawyer who incorporated ChatGPT-generated citations into a court filing, only to have the court discover that the cited cases were completely fabricated[1][3][4]. This isn’t an isolated incident. Over the past year, at least a half-dozen high-profile cases have emerged where lawyers faced sanctions or public embarrassment for similar mistakes.

One notable case involves attorneys from the firm Butler Snow, hired to defend Alabama’s prison system. According to the Associated Press, a federal judge found that lawyers had included five false citations in two separate filings, all sourced from ChatGPT. U.S. District Judge Anna Manasco considered a range of sanctions, including fines, and gave the firm ten days to respond. The lawyers apologized, admitting that a partner, Matt Reeves, had used ChatGPT for case research but failed to verify the results before signing off[3].

Similarly, Morgan & Morgan, ranked No. 42 in the U.S. by head count, was sanctioned after a motion cited eight non-existent cases—some hallucinated by ChatGPT—in a product liability lawsuit against Walmart[4][5]. The presiding judge, Kelly Rankin, issued an order to show cause, demanding an explanation for why sanctions shouldn’t be imposed. The lawyers, Rudwin Ayala, T. Michael Morgan, and Taly Goody, ultimately withdrew the motion[4][5].

The Broader Trend: Courts React

Judges are increasingly sounding the alarm. The Government Accountability Office (GAO) and a growing number of courts have noted a troubling trend: both attorneys and pro se litigants are submitting filings with citations to nonexistent authority, often generated by AI[2]. The GAO has explicitly warned that such submissions may result in sanctions.

“The submission of filings with citations to non-existent authority may result in the imposition of appropriate sanctions,” reads a recent GAO statement[2]. Courts are clearly signaling that they won’t tolerate sloppy or reckless use of AI in legal proceedings.

Why Does This Keep Happening?

It’s tempting to chalk this up to laziness or incompetence, but the reality is more nuanced. Legal professionals are under immense pressure to deliver results quickly and cost-effectively. AI tools offer a seductive shortcut. But as David Lat of Original Jurisdiction quipped, “Lawyers at large firms can misuse ChatGPT as well as anyone”[4].

There’s also a knowledge gap. Many lawyers aren’t AI experts and may not fully grasp how these systems work. They see a citation that looks legitimate and assume it’s real. After all, if it walks like a duck and quacks like a duck, it must be a duck—except when it’s a hallucination.

The AI Hallucination Problem

AI hallucinations—where models generate false or fabricated information—are a well-documented phenomenon. In legal contexts, these errors can have serious consequences. When a lawyer cites a fictional case, it undermines the integrity of the legal process and can result in sanctions, reputational damage, and even harm to clients.

Interestingly enough, some of the fake citations generated by ChatGPT are so detailed—including docket numbers, judges’ names, and even case summaries—that they can easily fool the untrained eye[5]. This makes manual verification essential, but in the rush to meet deadlines, that step is often skipped.
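That verification step does not have to be elaborate. Below is a minimal sketch, in Python, of the kind of first-pass screen a firm could run before filing: pull candidate reporter citations out of a draft and flag any that a case-law search service cannot find. Everything here is an assumption made for illustration. The endpoint and response shape are hypothetical placeholders (real services such as CourtListener exist, but their actual APIs differ), the regex only catches simple "volume reporter page" citations, and a search hit is never a substitute for reading the opinion itself.

```python
"""First-pass citation screen: flag citations a search service cannot find.

Illustrative sketch only. SEARCH_URL and the JSON response shape are
hypothetical placeholders, not a real vendor API.
"""
import re
import requests

# Matches simple "volume Reporter page" citations such as "123 F.3d 456".
# Multi-word reporters (e.g. "F. Supp. 2d") are deliberately out of scope.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.']*\s+\d{1,4}\b")

SEARCH_URL = "https://caselaw-search.example.com/api/search"  # hypothetical endpoint


def extract_citations(draft_text: str) -> list[str]:
    """Return the unique candidate citations found in a draft brief."""
    return sorted(set(CITATION_PATTERN.findall(draft_text)))


def appears_in_index(citation: str) -> bool:
    """True if the (assumed) search service reports at least one matching opinion."""
    resp = requests.get(SEARCH_URL, params={"q": citation}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0


if __name__ == "__main__":
    with open("draft_brief.txt", encoding="utf-8") as fh:
        draft = fh.read()
    for cite in extract_citations(draft):
        status = "found" if appears_in_index(cite) else "NOT FOUND: verify by hand"
        print(f"{cite}: {status}")
```

A screen like this can only tell you that something exists under a citation; it cannot confirm that the opinion actually says what the brief claims, which is exactly where hallucinated quotes and misdescribed holdings slip through. The human reading step remains the point.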

The Human Factor and Ethical Responsibility

At the end of the day, it’s the lawyer’s responsibility to ensure the accuracy of every citation and argument presented to the court. Courts have repeatedly emphasized that reliance on AI does not absolve attorneys of their ethical obligations. As one judge put it, “Experienced litigators like plaintiffs’ counsel should know that this court is a federal court, and therefore federal procedural law governs evidentiary issues”[4].

The American Bar Association and state bar associations are also weighing in, issuing guidance on the responsible use of AI in legal practice. The message is clear: AI can be a powerful tool, but it must be used with care and oversight.

The legal profession has always been slow to adopt new technology, but the pace of change is accelerating. From the introduction of word processors in the 1980s to the rise of e-discovery tools in the early 2000s, lawyers have had to adapt to new tools. Generative AI is just the latest—and perhaps most disruptive—innovation.

But unlike previous technologies, generative AI introduces a new risk: the potential for widespread misinformation. Early adopters are learning the hard way that these tools require rigorous verification and oversight.

Future Implications and Industry Reactions

Looking ahead, the legal industry is at a crossroads. On one hand, AI offers the promise of increased efficiency and access to justice. On the other, it introduces new risks that must be managed.

Some firms are responding by developing internal protocols for AI use, including mandatory verification of all AI-generated content. Legal tech vendors are also stepping up, offering tools that help detect AI hallucinations and verify citations.
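What such a protocol might look like in practice is easy to sketch. The snippet below is hypothetical rather than any firm's actual workflow: it simply refuses to mark a filing as ready until every citation carries the name of a human verifier and a link to the source that person actually checked.

```python
from dataclasses import dataclass


@dataclass
class CitationRecord:
    """One citation in a draft filing, plus who verified it and against what."""
    citation: str
    verified_by: str = ""  # initials of the attorney who read the source
    source_url: str = ""   # where the opinion was actually retrieved


def ready_to_file(records: list[CitationRecord]) -> bool:
    """A filing is ready only when every citation has a named verifier and a source."""
    unverified = [r.citation for r in records if not (r.verified_by and r.source_url)]
    if unverified:
        print("Blocked: unverified citations:", ", ".join(unverified))
        return False
    return True


# Example: one verified citation, one that blocks the filing.
records = [
    CitationRecord("539 U.S. 558", verified_by="TMM", source_url="https://example.com/opinion"),
    CitationRecord("123 F.4th 456"),
]
assert ready_to_file(records) is False
```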

The judiciary, meanwhile, is likely to continue cracking down on sloppy AI use. Expect more sanctions, more guidance, and more cautionary tales in the months and years ahead.

To help illustrate the landscape, here’s a quick comparison of some leading legal AI tools and their associated risks:

| Tool/Vendor | Strengths | Risks/Challenges |
|---|---|---|
| ChatGPT (OpenAI) | Fast drafting, research, Q&A | Hallucinations, false citations |
| Casetext (CoCounsel) | Legal research, case summaries | Less prone to hallucinations |
| Harvey AI | Contract analysis, research | Still requires verification |
| LexisNexis AI | Comprehensive legal database | Higher cost, steep learning curve |

The key takeaway? No tool is foolproof, and all require human oversight.

A Personal Perspective

As someone who’s followed AI for years, I can’t help but marvel at how quickly these tools have infiltrated the legal profession. But I’m also struck by how easily even smart, well-intentioned people can be burned by AI’s limitations. The legal world is built on precedent and precision—two things that AI, for all its brilliance, still struggles to deliver reliably.

By the way, if you’re a lawyer thinking about using ChatGPT for your next brief, maybe double-check those citations. Just a friendly suggestion.

Conclusion and Forward-Looking Insights

The legal profession is in the midst of a technological revolution, but with great power comes great responsibility. The recent spate of sanctions for AI-generated fake citations is a wake-up call: AI is a tool, not a replacement for human judgment and diligence.

As the courts, bar associations, and legal tech companies grapple with these challenges, one thing is certain: the days of blind trust in AI are over. The future will belong to those who use these tools wisely, critically, and ethically.

