Tag: Artificial Intelligence

Bankruptcy Judge Finds AI Citations Violate Rule 9011

The bankruptcy analogue of Rule 11 is known as Rule 9011. In re Martin, 670 B.R. 636 (Bankr. N.D. Ill. 2025), sanctions two lawyers for using artificial intelligence to prepare portions of their briefs. One lawyer claimed he was unaware of the dangers of using artificial intelligence and did not know that it might fabricate citations to nonexistent cases. The court's response is thoughtful:

[Lawyer 1] and [Lawyer 2] ask me not to sanction them at all given that they have already voluntarily: (1) admitted their misconduct and promised not to do it again; (2) withdrawn any application for compensation in this case; and (3) watched an online CLE video. But while I appreciate their candor and efforts, “[t]here must be consequences.” Ferris v. Amazon.com Servs., LLC, No. 24-cv-304, 778 F.Supp.3d 879, 882 (N.D. Miss. Apr. 16, 2025). While I believe this mistake was unintentional, a “citation to fake, AI-generated sources … shatters [] credibility” and “imposes many harms.” Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *4-5 (D. Minn. Jan. 10, 2025). So the consequences “are steep.” Id. at *5.

The first reason I issue sanctions stems from [Lawyer 1’s] claim of ignorance—he asserts he didn’t know the use of AI in general and ChatGPT in particular could result in citations to fake cases. (Dkt. No. 71 at 3) [Lawyer 1] disputes the court’s statement in Wadsworth that it is “well-known in the legal community that AI resources generate fake cases.” Wadsworth v. Walmart Inc., 348 F.R.D. 489, 497 (D.Wyo. 2025). Indeed, [Lawyer 1] aggressively chides that assertion, positing that “in making that statement, the Wadsworth court cited no study, law school journal article, survey of attorneys, or any source to support this blanket conclusion.” (Dkt. No. 71 at 3-4, emphasis in [Lawyer 1’s] brief as filed)

I find [Lawyer 1’s] position troubling. At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud. This has been a hot topic in the legal profession since at least 2023, exemplified by the fact that Chief Justice John G. Roberts, Jr. devoted his 2023 annual Year-End Report on the Federal Judiciary (in which he “speak[s] to a major issue relevant to the whole federal court system,” Report at 2) to the risks of using AI in the legal profession, including hallucinated case citations.[6] To put it mildly, “[t]he use of non-existent case citations and fake legal authority generated by artificial intelligence programs has been the topic of many published legal opinions and scholarly articles as of late.”[7] At this point there are many published cases on the issue—while only a sampling are cited in this opinion, all but one were issued before June 2, 2025, when [Lawyer 1] filed the offending reply. See, e.g., Jaclyn Diaz, A Recent High-Profile Case of AI Hallucination Serves as a Stark Warning, NPR ILLINOIS (July 10, 2025, 12:49 PM), https://www.nprillinois.org/XXXX-XX-XX/a-recent-high-profile-case-of-ai-hallucination-serves-as-a-stark-warning (“There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases…. It has become a familiar trend in courtrooms across the U.S.”). The Sedona Conference wrote on the topic in 2023.[8] Newspapers, magazines, and other well-known online sources have been publicizing the problem for at least two years.[9] And on January 1, 2025, the Illinois Supreme Court issued a “Supreme Court Policy on Artificial Intelligence” requiring practitioners in this state to “thoroughly review” any content generated by AI.[10]

Counsel’s professed ignorance of the dangers of using ChatGPT for legal research without checking the results is in some sense irrelevant. Lawyers have ethical obligations not only to review whatever cases they cite (regardless of where they pulled them from), but to understand developments in technology germane to their practice.[11] And there are plenty of opportunities to learn—indeed, the Illinois State Bar Association chose “Generative Artificial Intelligence — Fact or Fiction” as the theme of its biennial two-day Allerton Conference earlier this year, calling the topic “one that every legal professional should have on their radar.”[12] Similar CLE opportunities have been offered across the nation for at least the past two years.

The bottom line is this: at this point, no lawyer should be using ChatGPT or any other generative AI product to perform research without verifying the results. Period. See, e.g., Lacey v. State Farm Gen. Ins. Co., No. CV 24-5205, 2025 WL 1363069, at *3 (C.D. Cal. May 5, 2025) (“Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology— particularly without any attempt to verify the accuracy of that material.”); Mid Cent. Operating Eng’rs, 2025 WL 574234, at *2 (“It is one thing to use AI to assist with initial research, and even non-legal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented.”). In fact, given the nature of generative AI tools, I seriously doubt their utility to assist in performing accurate research (for now). “Generative” AI, unlike the older “predictive” AI, is “a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.” Adam Zewe, Explained: Generative AI, MIT NEWS (Nov. 9, 2023), https://news.mit.edu/2023/explained-generative-ai-1109 (emphasis added). Platforms like ChatGPT are powered by “large language models” that teach the platform to create realistic-looking output. They can write a story that reads like it was written by Stephen King (but wasn’t) or pen a song that sounds like it was written by Taylor Swift (but wasn’t). But they can’t do your legal research for you.
ChatGPT does not access legal databases like Westlaw or Lexis, draft and input a query, review and analyze each of the results, determine which results are on point, and then compose an accurate, Bluebook-conforming citation to the right cases—all of which it would have to do to be a useful research assistant. Instead, these AI platforms look at legal briefs in their training model and then create output that looks like a legal brief by “placing one most-likely word after another” consistent with the prompt it received. Brian Barrett, “You Can’t Lick a Badger Twice”: Google Failures Highlight a Fundamental AI Flaw, WIRED (Apr. 23, 2025, 7:44 PM), https://www.wired.com/story/google-ai-overviews-meaning/.

If anything, [Lawyer 1’s] alleged lack of knowledge of ChatGPT’s shortcomings leads me to do what courts have been doing with increasing frequency: announce loudly and clearly (so that everyone hears and understands) that lawyers blindly relying on generative AI and citing fake cases are violating Bankruptcy Rule 9011 and will be sanctioned. [Lawyer 1’s] “professed ignorance of the propensity of the AI tools he was using to ‘hallucinate’ citations is evidence that [the] lesser sanctions [imposed in prior cases] have been insufficient to deter the conduct.” Mid Cent. Operating Eng’rs, 2025 WL 574234, at *3.

The second reason I issue sanctions is that, as described above, I also have concerns about the way this particular case was handled. I understand that Debtor’s counsel has a massive docket of cases. But every debtor deserves care and attention. Chapter 13 cases can be challenging to file and manage—especially when they involve complexities like those in this case. If a law firm does not have the resources to devote the time and energy necessary to shepherd hundreds of Chapter 13 cases at the same time, it should refer matters it cannot handle to other attorneys who can—lest a search for time-saving devices lead to these kinds of missteps. What I mean to convey here is that while everyone makes mistakes, I expect— as I think all judges do—attorneys to be more diligent and careful than has been shown here.[13]

Comment: This is an excellent opinion that should be carefully read and considered by all lawyers.

Ed Clinton, Jr.

Second Circuit Affirms Rule 37 Dismissal

This case, Park v. Kim, 91 F.4th 610 (2d Cir. 2024), was decided in January 2024. The district court had dismissed the case under Rules 37 and 41(b) for willful failure to comply with discovery orders. Worse still, counsel for Park used artificial intelligence to draft the appellate reply brief and cited a case that does not exist.

The reply brief cited only two court decisions. We were unable to locate the one cited as “Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).” Appellant’s Reply Br. at 6. Accordingly, on November 20, 2023, we ordered Park to submit a copy of that decision to the Court by November 27, 2023. On November 29, 2023, Attorney Lee filed a Response with the Court explaining that she was “unable to furnish a copy of the decision.” Response to November 20, 2023, Order of the Court, at 1, Park v. Kim, No. 22-2057-cv (2d Cir. Nov. 29, 2023), ECF No. 172 (hereinafter, “Response”). Although Attorney Lee did not expressly indicate as much in her Response, the reason she could not provide a copy of the case is that it does not exist — and indeed, Attorney Lee refers to the case at one point as “this non-existent case.” Id. at 2.

Attorney Lee’s Response states:

I encountered difficulties in locating a relevant case to establish a minimum wage for an injured worker lacking prior year income records for compensation determination…. Believing that applying the minimum wage to in injured worker in such circumstances under workers’ compensation law was uncontroversial, I invested considerable time searching for a case to support this position but was unsuccessful.

Consequently, I utilized the ChatGPT service, to which I am a subscribed and paying member, for assistance in case identification. ChatGPT was previously provided reliable information, such as locating sources for finding an antic furniture key. The case mentioned above was suggested by ChatGPT, I wish to clarify that I did not cite any specific reasoning or decision from this case.

Id. at 1-2 (sic).

All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure. Among other obligations, Rule 11 provides that by presenting a submission to the court, an attorney “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances… the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.” Fed. R. Civ. P. 11(b)(2); see also N.Y. R. Pro. Conduct 3.3(a) (McKinney 2023) (“A lawyer shall not knowingly: (1) make a false statement of … law to a tribunal.”). “Rule 11 imposes a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are well grounded in fact, [and] legally tenable.” Cooter & Gell v. Hartmarx Corp., 496 U.S. 384, 393, 110 S.Ct. 2447, 110 L.Ed.2d 359 (1990). “Under Rule 11, a court may sanction an attorney for, among other things, misrepresenting facts or making frivolous legal arguments.” Muhammad v. Walmart Stores E., L.P., 732 F.3d 104, 108 (2d Cir. 2013) (per curiam).

At the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely. Indeed, we can think of no other way to ensure that the arguments made based on those authorities are “warranted by existing law,” Fed. R. Civ. P. 11(b)(2), or otherwise “legally tenable.” Cooter & Gell, 496 U.S. at 393, 110 S.Ct. 2447. As a District Judge of this Circuit recently held when presented with non-existent precedent generated by ChatGPT: “A fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.” Mata v. Avianca, Inc., No. 22CV01461(PKC), 678 F.Supp.3d 443, 460-61 (S.D.N.Y. June 22, 2023).

Attorney Lee states that “it is important to recognize that ChatGPT represents a significant technological advancement,” and argues that “[i]t would be prudent for the court to advise legal professionals to exercise caution when utilizing this new technology.” Response at 2. Indeed, several courts have recently proposed or enacted local rules or orders specifically addressing the use of artificial intelligence tools before the court.[3] But such a rule is not necessary to inform a licensed attorney, who is a member of the bar of this Court, that she must ensure that her submissions to the Court are accurate.

Attorney Lee’s submission of a brief relying on non-existent authority reveals that she failed to determine that the argument she made was “legally tenable.” Cooter & Gell, 496 U.S. at 393, 110 S.Ct. 2447. The brief presents a false statement of law to this Court, and it appears that Attorney Lee made no inquiry, much less the reasonable inquiry required by Rule 11 and long-standing precedent, into the validity of the arguments she presented. We therefore REFER Attorney Lee to the Court’s Grievance Panel pursuant to Local Rule 46.2 for further investigation, and for consideration of a referral to the Committee on Admissions and Grievances. See 2d Cir. R. 46.2.