Lionel Hutz would be ashamed —

Lawyers have real bad day in court after citing fake cases made up by ChatGPT

Lawyers fined $5K and lose case after using AI chatbot "gibberish" in filings.


A federal judge tossed a lawsuit and fined the plaintiff's lawyers $5,000 after they used ChatGPT, the artificial intelligence tool made by OpenAI, to research court filings that cited six fake cases invented by the chatbot.

Lawyers Steven Schwartz and Peter LoDuca of the firm Levidow, Levidow & Oberman "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question," US District Judge Kevin Castel wrote in an order yesterday. The lawyers, Castel wrote, "advocated for the fake cases and legal arguments" even "after being informed by their adversary's submission that their citations were non-existent and could not be found."

The judge issued one fine of $5,000 to be paid by the two lawyers and their firm under joint and several liability. More embarrassingly for the lawyers, they are required to send letters to six real judges who were "falsely identified as the author of the fake" opinions cited in their legal filings. Castel described the legal analysis in one of the fake cases as "gibberish."

"The Court will require Respondents to inform their client and the judges whose names were wrongfully invoked of the sanctions imposed," Castel wrote. "The Court will not require an apology from Respondents because a compelled apology is not a sincere apology. Any decision to apologize is left to Respondents."

Submitting fake opinions to a court harms the lawyers' client, wastes the court's time, forces the opposing party to waste "time and money in exposing the deception," and causes "potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct," Castel wrote. "It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity."

Case dismissed

As we wrote last month, Schwartz admitted using ChatGPT for research and did not verify whether the "legal opinions" provided by the AI chatbot were accurate. Schwartz wrote in an affidavit that he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."

The real case, Roberto Mata v. Avianca, was originally filed in a New York state court but was moved to US District Court for the Southern District of New York. Schwartz had been representing Mata in state court but wasn't admitted to practice in the federal court, so he continued to write the legal briefs while LoDuca filed them under his own name.

Mata sought damages for injuries suffered during an Avianca flight from El Salvador to New York in August 2019 when a metal snack and drink cart struck his knee. Mata's lawyers used the phony citations from ChatGPT to argue that the case should be moved back to the New York state court where a three-year statute of limitations would apply.

Unsurprisingly, their argument citing phony cases wasn't persuasive to the judge. In addition to punishing the lawyers, Castel yesterday granted Avianca's motion to dismiss the case. The judge agreed with the defendant that a two-year statute of limitations under the Montreal Convention applies and that the plaintiff's lawsuit was filed too late.

“I just never thought it could be made up”

The dispute over fake precedents played out over a few months. On March 1, Mata's lawyers cited the fake cases in a brief that opposed Avianca's motion to dismiss the case.

"But if the matter had ended with Respondents coming clean about their actions shortly after they received the defendant's March 15 brief questioning the existence of the cases, or after they reviewed the Court's Orders of April 11 and 12 requiring production of the cases, the record now would look quite different," Castel wrote. "Instead, the individual Respondents doubled down and did not begin to dribble out the truth until May 25, after the Court issued an Order to Show Cause why one of the individual Respondents ought not be sanctioned."

Castel found that the lawyers acted in "bad faith" and committed "acts of conscious avoidance and false and misleading statements to the Court." Schwartz wrote the bogus legal filings, and LoDuca filed them without checking their accuracy.

"Mr. LoDuca simply relied on a belief that work produced by Mr. Schwartz, a colleague of more than twenty-five years, would be reliable," Castel wrote. But Schwartz's practice was exclusively in state court. The lawyers admitted in a memorandum of law that Schwartz attempted "to research a federal bankruptcy issue with which he was completely unfamiliar."

At a June 8 hearing on potential sanctions, Schwartz testified that he was "operating under the false perception that this website [ChatGPT] could not possibly be fabricating cases on its own." Schwartz stated, "I just was not thinking that the case could be fabricated, so I was not looking at it from that point of view... My reaction was, ChatGPT is finding that case somewhere. Maybe it's unpublished. Maybe it was appealed. Maybe access is difficult to get. I just never thought it could be made up."

The Levidow firm did not have Westlaw or LexisNexis accounts, instead using a Fastcase account that had limited access to federal cases. Schwartz testified that he "heard about this new site which I assumed—I falsely assumed was like a super search engine called ChatGPT, and that's what I used."
