Ghost Cases: Why Lawyers Keep Getting Sanctioned for AI-Hallucinated Citations — and What Courts Are Doing About It

Matthew M. Clarke

Partner & Chair of Litigation · Kelley Clarke Law

Securities & Complex Civil Litigation · CA, TX, NY


In an appeal before the California Court of Appeal, the briefs cited cases that did not exist. The attorney had used AI to help draft them. The court spent weeks trying to track down the fabricated authorities before concluding they were simply made up. The lawyer was sanctioned $10,000 and referred to the State Bar.

That case, Noland v. Land of the Free, L.P., decided in September 2025, would be unremarkable by now, except for one thing. The court also declined to award attorneys’ fees to opposing counsel, because that attorney had failed to detect the fake citations or report them to the court. It may be the first judicial decision to raise the question of whether lawyers have a professional duty to catch their opponent’s AI-generated hallucinations. Had opposing counsel in Noland reported the fake citations, I suspect the court would have awarded them their fees.

That is how far this problem has traveled.

From Novelty to Crisis

The phenomenon of AI hallucination in legal filings, where generative AI tools invent plausible-sounding but entirely fictitious case citations, first drew national attention in 2023 with Mata v. Avianca, in which a New York federal court sanctioned attorneys who submitted a ChatGPT-generated brief full of fabricated precedent. At the time, courts and the bar treated it as a teachable moment, an embarrassing novelty that surely would not persist once lawyers understood the risks.

It has persisted. Badly.

According to a database maintained by Paris-based law lecturer Damien Charlotin, who began tracking the problem as a research project and found himself unable to keep up, there have now been over 1,100 identified legal decisions worldwide involving hallucinated AI content in court filings. Roughly 90% of those decisions were issued in 2025 alone. By late 2025, Charlotin was logging five to six new cases per day.

Chief Justice John Roberts warned in his 2023 year-end report that AI use in legal practice “requires caution and humility” and that citing nonexistent cases is “always a bad idea.” Courts across the country have since issued standing orders, ethics opinions, and escalating sanctions. None of it has stopped the filings.

The Cases Piling Up in 2025

The pattern is consistent across dozens of recent cases. A lawyer uses an AI tool, sometimes a dedicated legal research product, sometimes a general-purpose chatbot, to draft or research a brief. The AI generates citations that look real: a case name, a reporter citation, a court, a year, sometimes even a fabricated holding. The lawyer files without independently verifying. Opposing counsel or the court catches the fraud.

What has changed in 2025 is the severity of consequences. Early sanctions ranged from $1,500 to $5,000. Courts are now ordering far more:

  • In the Central District of California, attorneys from Ellis George LLP and K&L Gates LLP, both major litigation firms, submitted a brief to a Special Master containing numerous hallucinated citations generated using CoCounsel, Westlaw Precision, and Google Gemini. The Special Master struck the entire brief, denied the discovery relief sought, and ordered the firms to jointly pay $31,100 in opposing counsel’s fees. He described it as “scary” to learn that cases he had found persuasive simply did not exist.
  • In the Eastern District of Louisiana, attorney Hamilton submitted a motion to transfer venue citing three fabricated cases, then misrepresented to the court that Westlaw had generated the citations. At a show cause hearing, she admitted she had not verified the citations and did not know Westlaw Precision incorporated AI. The court imposed a $1,000 personal sanction, ordered mandatory AI ethics CLE, and referred her to the disciplinary committee.
  • In Illinois, a law firm and one of its partners were ordered to pay a combined $59,500 to opposing counsel who discovered and contested the fake citations — the largest fee-shifting award yet reported in an AI hallucination case.
  • In the District of Oregon, a court fined an attorney $15,500 in December 2025, citing not only the fake citations but the lawyer’s failure to be “adequately forthcoming, candid, or apologetic” about it.
  • In Florida, one attorney was referred to the state bar and had her pro hac vice status revoked after submitting hallucinated citations in eight related cases. A Florida appellate court also vacated a trial court order that had relied in part on a hallucinated case cited as authority on attorneys’ fees.

The New Frontier: Your Duty to Catch the Other Side’s Hallucinations

The Noland decision from California is worth examining carefully, because it signals where this body of law may be heading. The court found opposing counsel’s failure to detect the fabricated citations relevant to whether they were entitled to fee-shifting. The implication is pointed: lawyers who spot AI hallucinations and alert the court may be rewarded; lawyers who miss them may forfeit remedies even when the other side is sanctioned.

No court has yet articulated a bright-line rule requiring lawyers to audit opposing briefs for AI-generated content. But the direction of travel is clear. As the Noland court put it tersely, and without defining the standard, opposing counsel “did not alert the court to the fabricated citations and appear to have become aware of the issue only when the court issued its order to show cause.” That observation cost them their fee award.

Why It Keeps Happening

The excuses courts have heard are remarkably consistent. Lawyers say they assumed the AI-assisted research tool had verified its own output. They say they delegated the work to a junior associate or paralegal. They say they filed the document without reading the cited cases. In one case, a 40-year veteran solo practitioner admitted he had included AI-generated citations “out of haste and a naïve understanding of the technology.”

Courts have consistently rejected these explanations. As one court put it bluntly: “It should go without saying that it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities. At its barest minimum, it is the lawyer’s duty not to submit case authorities that do not exist.”

The seniority of the offending attorneys is striking. These are not just overwhelmed solo practitioners. Major AmLaw firms have appeared in these sanctions orders. The problem is structural: AI tools are fast, they sound authoritative, and the verification step is easy to skip under deadline pressure.

What Every Practicing Attorney Needs to Understand

The legal framework is now well established. Rule 11 of the Federal Rules of Civil Procedure requires an attorney signing a pleading to certify that its legal contentions are “warranted by existing law.” That obligation applies regardless of who drafted the document, whether a supervisor, a junior associate, or an AI tool. Several state bar ethics opinions, including guidance tied to the Texas Rules of Professional Conduct, have confirmed that failure to verify AI outputs can breach multiple professional duties simultaneously: competence, candor toward the tribunal, and diligence. See Texas Ethics Opinion 705.

Practically speaking, every attorney using AI for legal research or drafting should treat AI-generated citations as unverified leads, not authorities. Each citation must be independently pulled, read, and confirmed before it appears in a filing. Legal research platforms that incorporate AI (Westlaw Precision, CoCounsel, and others) do not eliminate this obligation; the Hamilton case is a direct warning on that point.
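For readers who want a technical backstop, part of this screening can be automated. The Free Law Project’s CourtListener service offers a free Citation Lookup API built specifically to flag citations that do not match any known case. The Python sketch below is a minimal illustration of that kind of pre-filing check, not a definitive tool; the endpoint URL and response field names reflect my reading of CourtListener’s public documentation and should be confirmed against the current docs before any reliance on them.

```python
# Minimal sketch of an automated pre-filing citation screen using the
# Free Law Project's CourtListener Citation Lookup API. The endpoint and
# response fields ("citation", "status", "clusters") follow my reading of
# CourtListener's public docs; verify them against the current docs.
# A "found" result only means a real case matches the citation. It does
# NOT verify the holding: you must still pull and read every authority.
import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def screen_citations(draft_text: str) -> None:
    """POST the draft's text; the API extracts and checks each citation."""
    resp = requests.post(LOOKUP_URL, data={"text": draft_text}, timeout=30)
    resp.raise_for_status()
    for hit in resp.json():
        cite = hit.get("citation", "?")
        if hit.get("status") == 200:  # per docs: 200 = matched a real case
            names = "; ".join(c.get("case_name", "?")
                              for c in hit.get("clusters", []))
            print(f"FOUND    {cite}  ->  {names}")
        else:  # e.g. 404 = no match: treat as a possible hallucination
            print(f"SUSPECT  {cite}  (status {hit.get('status')})")

if __name__ == "__main__":
    # Mata v. Avianca's sanctions opinion is real; the second citation is
    # deliberately fabricated for the demo.
    screen_citations(
        "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023); "
        "but see Doe v. Nowhere Corp., 123 F.4th 9876 (1st Cir. 2021)."
    )
```

Treat a tool like this as a smoke detector for outright fabrications, nothing more. Even a citation that checks out as real may have its holding misstated by the AI, and the Noland and Hamilton orders make clear that reading the cases remains the lawyer’s non-delegable job.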

The Bottom Line

AI hallucination in legal filings is no longer a novelty story. It is a disciplinary and malpractice risk that is generating bar referrals, five-figure sanctions, and vacated orders in courts at every level. The sanctions are increasing. The judiciary’s patience is exhausted. And courts are beginning to ask whether the duty of professional competence now includes catching hallucinations on the other side.

Used responsibly, AI is a powerful tool for legal practice. Used carelessly, it is a fast path to a sanctions order, a bar referral, and a very bad call to your client.

At Kelley Clarke Law, we stay current on the legal and ethical developments shaping the practice of law. If you have questions about professional responsibility, litigation strategy, or complex civil matters, we invite you to reach out.

Schedule a consultation →
