Category: Artificial Intelligence

Even Pro Se Litigants Are Using Artificial Intelligence And Citing Hallucinated Cases

In Kent Malone-Bey v. Lauderdale County School Board, 3:25-cv-380, the plaintiff, an instructor, was terminated by the defendant school board. He filed suit. He later moved to disqualify the law firm defending the case, and the motion was denied. The opinion offers another chapter in the AI hallucination saga.

The Court sua sponte addresses a separate issue: Plaintiff’s continued use of fake, nonexistent legal opinions in his filings before the Court. This practice unfortunately is not unique to the instant matter as the number of filings that include fictitious legal opinions is rising. This trend, perhaps, correlates to the frequency at which our society is embracing and utilizing generative artificial intelligence (“AI”) programs in everyday life, including legal matters. AI is certainly a tool capable of providing immense benefits and efficiency in the legal profession or in legal matters. Those benefits, however, are not without risk.

Starting with the obvious, it is well-known that AI can generate fake sources of information, commonly referred to as “AI Hallucinations.” Wadsworth v. Walmart, Inc., 348 F.R.D. 489, 493 (D. Wy. 2025). And individuals who use AI for legal research find that they are not immune from these hallucinations. Indeed, AI programs are known to hallucinate nonexistent cases. Sanders v. United States, 176 Fed. Cl. 163, 169 (2025). As AI programs advance, they only increase the difficulty in which users (and the courts) may determine whether a case provided by an AI program is in fact “hallucinated.”

That is because hallucinated cases look like real cases. They are identified by a case name, a citation to a reporter, the name of a district or appellate court, and the year of the decision. United States v. Hayes, 763 F. Supp. 3d 1054, 1065 (E.D. Cal. 2025). But, they are not real cases. These hallucinated cases are instead inaccurate depictions of information from AI models that suffer from incomplete, biased, or otherwise flawed training data. Wadsworth, 348 F.R.D. at 493. Thus, “[w]hen used carelessly, [AI] produces frustratingly realistic legal fiction that takes inordinately longer to respond to than to create. While one party can create a fake legal brief at the click of a button, the opposing party and court must parse through the case names, citations, and points of law to determine which parts, if any, are true.” Ferris v. Amazon.com Servs., LLC, 2025 WL 1122235, at *1 (N.D. Miss. Apr. 16, 2025).

Here, Plaintiff’s Brief [18] overflows with either citations to fake, nonexistent cases or existing cases with incorrect legal propositions, i.e., the hallmarks of AI hallucinations. It therefore appears that Plaintiff may have used generative AI to assist in legal research and drafting his motion briefs.

As just one example, Plaintiff cites Coleman v. Ret. Plan for Dist. Managers, 969 F.3d 142, 149 (5th Cir. 2020) for the proposition that courts may disqualify counsel to “preserve the integrity of the adversary process.” [18] at 2. But the Coleman case does not exist.[5] By citing (and quoting) a fake opinion,[6] Plaintiff undermines the very integrity that he urges this Court to uphold. Indeed, submitting fictitious cases and quotations to the Court degrades or impugns the integrity of the Court. Hayes, 763 F. Supp. 3d at 1054. Moreover, Plaintiff’s attempt to persuade the Court “by relying on [a] fake opinion[] is an abuse of the adversary system.” Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 461 (S.D.N.Y. 2023) (emphasis added).[7]

Even if Plaintiff did not use AI to assist with citing legal opinions and holdings, he still must comply with Federal Rule of Civil Procedure 11. See Yazdchi v. Am. Honda Fin. Corp., 217 F. App’x 299, 304 (5th Cir. 2007). Rule 11 provides that “[b]y presenting to the court a pleading, written motion, or other paper — whether by signing, filing, submitting, or later advocating it — an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances… the claims, defenses, and other legal contentions are warranted by existing law.” Fed. R. Civ. P. 11(b)(2).

By citing to fake legal opinions (or hallucinated legal holdings therein), Plaintiff not only wasted the Court’s time and judicial resources, but he also violated his Rule 11 obligations. See Benjamin v. Costco Wholesale Corp., 2025 WL 1195925, at *2 (E.D.N.Y. Apr. 24, 2025) (“an attorney [or pro se party] who submits fake cases clearly has not read those nonexistent cases, which is a violation of Rule 11 of the Federal Rules of Civil Procedure”) (emphasis in original). Plaintiff is therefore reminded of his obligations under Rule 11 as stated herein and advised that future filings before this Court must contain only accurate representations and citations. Additionally, Plaintiff is warned that future filings with citations to fictitious legal opinions and holdings or which do not otherwise comply with Rule 11 may result in sanctions, including but not limited to the striking of filings and the imposition of monetary penalties. Sanders, 176 Fed. Cl. at 170; Ferris, 2025 WL 1122235, at *2-3.

IT IS, THEREFORE, ORDERED that:

1. Plaintiff’s Motion to Disqualify Butler Snow LLP from Representing Individual Capacity Defendants [17] is DENIED; and

2. Plaintiff is reminded of his Rule 11 obligations as stated herein and warned that future filings with citations to fictitious legal opinions and holdings in filings before the Court, or which do not otherwise comply with Rule 11, may result in sanctions, including but not limited to striking Plaintiff’s filings and monetary penalties.

SO ORDERED.

Second Circuit Affirms Rule 37 Dismissal

This case, Park v. Kim, 91 F.4th 610 (2d Cir. 2024), was decided in January 2024. The district court had dismissed the case under Rules 37 and 41(b) for willful failure to comply with discovery orders. Worse still, counsel for Park used artificial intelligence to draft the appellate reply brief and cited a case that does not exist.

The reply brief cited only two court decisions. We were unable to locate the one cited as “Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).” Appellant’s Reply Br. at 6. Accordingly, on November 20, 2023, we ordered Park to submit a copy of that decision to the Court by November 27, 2023. On November 29, 2023, Attorney Lee filed a Response with the Court explaining that she was “unable to furnish a copy of the decision.” Response to November 20, 2023, Order of the Court, at 1, Park v. Kim, No. 22-2057-cv (2d Cir. Nov. 29, 2023), ECF No. 172 (hereinafter, “Response”). Although Attorney Lee did not expressly indicate as much in her Response, the reason she could not provide a copy of the case is that it does not exist — and indeed, Attorney Lee refers to the case at one point as “this non-existent case.” Id. at 2.

Attorney Lee’s Response states:

I encountered difficulties in locating a relevant case to establish a minimum wage for an injured worker lacking prior year income records for compensation determination…. Believing that applying the minimum wage to in injured worker in such circumstances under workers’ compensation law was uncontroversial, I invested considerable time searching for a case to support this position but was unsuccessful.

Consequently, I utilized the ChatGPT service, to which I am a subscribed and paying member, for assistance in case identification. ChatGPT was previously provided reliable information, such as locating sources for finding an antic furniture key. The case mentioned above was suggested by ChatGPT, I wish to clarify that I did not cite any specific reasoning or decision from this case.

Id. at 1-2 (sic).

All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure. Among other obligations, Rule 11 provides that by presenting a submission to the court, an attorney “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances… the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.” Fed. R. Civ. P. 11(b)(2); see also N.Y. R. Pro. Conduct 3.3(a) (McKinney 2023) (“A lawyer shall not knowingly: (1) make a false statement of … law to a tribunal.”). “Rule 11 imposes a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are well grounded in fact, [and] legally tenable.” Cooter & Gell v. Hartmarx Corp., 496 U.S. 384, 393 (1990). “Under Rule 11, a court may sanction an attorney for, among other things, misrepresenting facts or making frivolous legal arguments.” Muhammad v. Walmart Stores E., L.P., 732 F.3d 104, 108 (2d Cir. 2013) (per curiam).

At the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely. Indeed, we can think of no other way to ensure that the arguments made based on those authorities are “warranted by existing law,” Fed. R. Civ. P. 11(b)(2), or otherwise “legally tenable.” Cooter & Gell, 496 U.S. at 393. As a District Judge of this Circuit recently held when presented with non-existent precedent generated by ChatGPT: “A fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.” Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 460-61 (S.D.N.Y. 2023).

Attorney Lee states that “it is important to recognize that ChatGPT represents a significant technological advancement,” and argues that “[i]t would be prudent for the court to advise legal professionals to exercise caution when utilizing this new technology.” Response at 2. Indeed, several courts have recently proposed or enacted local rules or orders specifically addressing the use of artificial intelligence tools before the court.[3] But such a rule is not necessary to inform a licensed attorney, who is a member of the bar of this Court, that she must ensure that her submissions to the Court are accurate.

Attorney Lee’s submission of a brief relying on non-existent authority reveals that she failed to determine that the argument she made was “legally tenable.” Cooter & Gell, 496 U.S. at 393. The brief presents a false statement of law to this Court, and it appears that Attorney Lee made no inquiry, much less the reasonable inquiry required by Rule 11 and long-standing precedent, into the validity of the arguments she presented. We therefore REFER Attorney Lee to the Court’s Grievance Panel pursuant to Local Rule 46.2 for further investigation, and for consideration of a referral to the Committee on Admissions and Grievances. See 2d Cir. R. 46.2.