The bankruptcy analogue of Rule 11 is Rule 9011. In re Martin, 670 B.R. 636 (2025), issued in the Northern District of Illinois, sanctions two lawyers for using artificial intelligence to prepare portions of their briefs. One lawyer claimed he was unaware of any prohibition on using artificial intelligence and did not know that it might fabricate citations to cases. The court’s response is thoughtful:
[Lawyer 1] and [Lawyer 2] ask me not to sanction them at all given that they have already voluntarily: (1) admitted their misconduct and promised not to do it again; (2) withdrawn any application for compensation in this case; and (3) watched an online CLE video. But while I appreciate their candor and efforts, “[t]here must be consequences.” Ferris v. Amazon.com Servs., LLC, No. 24-cv-304, 778 F.Supp.3d 879, 882 (N.D. Miss. Apr. 16, 2025). While I believe this mistake was unintentional, a “citation to fake, AI-generated sources … shatters [] credibility” and “imposes many harms.” Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *4-5 (D. Minn. Jan. 10, 2025). So the consequences “are steep.” Id. at *5.
The first reason I issue sanctions stems from [Lawyer 1’s] claim of ignorance—he asserts he didn’t know the use of AI in general and ChatGPT in particular could result in citations to fake cases. (Dkt. No. 71 at 3) [Lawyer 1] disputes the court’s statement in Wadsworth that it is “well-known in the legal community that AI resources generate fake cases.” Wadsworth v. Walmart Inc., 348 F.R.D. 489, 497 (D. Wyo. 2025). Indeed, [Lawyer 1] aggressively chides that assertion, positing that “in making that statement, the Wadsworth court cited no study, law school journal article, survey of attorneys, or any source to support this blanket conclusion.” (Dkt. No. 71 at 3-4, emphasis in [Lawyer 1’s] brief as filed)
I find [Lawyer 1’s] position troubling. At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud. This has been a hot topic in the legal profession since at least 2023, exemplified by the fact that Chief Justice John G. Roberts, Jr. devoted his 2023 annual Year-End Report on the Federal Judiciary (in which he “speak[s] to a major issue relevant to the whole federal court system,” Report at 2) to the risks of using AI in the legal profession, including hallucinated case citations.[6] To put it mildly, “[t]he use of non-existent case citations and fake legal authority generated by artificial intelligence programs has been the topic of many published legal opinions and scholarly articles as of late.”[7] At this point there are many published cases on the issue—while only a sampling are cited in this opinion, all but one were issued before June 2, 2025, when [Lawyer 1] filed the offending reply. See, e.g., Jaclyn Diaz, A Recent High-Profile Case of AI Hallucination Serves as a Stark Warning, NPR ILLINOIS (July 10, 2025, 12:49 PM), https://www.nprillinois.org/XXXX-XX-XX/a-recent-high-profile-case-of-ai-hallucination-serves-as-a-stark-warning (“There have been a host of high-profile cases where the use of generative AI has gone wrong for lawyers and others filing legal cases…. It has become a familiar trend in courtrooms across the U.S.”). The Sedona Conference wrote on the topic in 2023.[8] Newspapers, magazines, and other well-known online sources have been publicizing the problem for at least two years.[9] And on January 1, 2025, the Illinois Supreme Court issued a “Supreme Court Policy on Artificial Intelligence” requiring practitioners in this state to “thoroughly review” any content generated by AI.[10]
Counsel’s professed ignorance of the dangers of using ChatGPT for legal research without checking the results is in some sense irrelevant. Lawyers have ethical obligations not only to review whatever cases they cite (regardless of where they pulled them from), but to understand developments in technology germane to their practice.[11] And there are plenty of opportunities to learn—indeed, the Illinois State Bar Association chose “Generative Artificial Intelligence — Fact or Fiction” as the theme of its biennial two-day Allerton Conference earlier this year, calling the topic “one that every legal professional should have on their radar.”[12] Similar CLE opportunities have been offered across the nation for at least the past two years.
The bottom line is this: at this point, no lawyer should be using ChatGPT or any other generative AI product to perform research without verifying the results. Period. See, e.g., Lacey v. State Farm Gen. Ins. Co., No. CV 24-5205, 2025 WL 1363069, at *3 (C.D. Cal. May 5, 2025) (“Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology—particularly without any attempt to verify the accuracy of that material.”); Mid Cent. Operating Eng’rs, 2025 WL 574234, at *2 (“It is one thing to use AI to assist with initial research, and even non-legal AI programs may provide a helpful 30,000-foot view. It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented.”). In fact, given the nature of generative AI tools, I seriously doubt their utility to assist in performing accurate research (for now). “Generative” AI, unlike the older “predictive” AI, is “a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.” Adam Zewe, Explained: Generative AI, MIT NEWS (Nov. 9, 2023), https://news.mit.edu/2023/explained-generative-ai-1109 (emphasis added). Platforms like ChatGPT are powered by “large language models” that teach the platform to create realistic-looking output. They can write a story that reads like it was written by Stephen King (but wasn’t) or pen a song that sounds like it was written by Taylor Swift (but wasn’t). But they can’t do your legal research for you. ChatGPT does not access legal databases like Westlaw or Lexis, draft and input a query, review and analyze each of the results, determine which results are on point, and then compose an accurate, Bluebook-conforming citation to the right cases—all of which it would have to do to be a useful research assistant. Instead, these AI platforms look at legal briefs in their training model and then create output that looks like a legal brief by “placing one most-likely word after another” consistent with the prompt it received. Brian Barrett, “You Can’t Lick a Badger Twice”: Google Failures Highlight a Fundamental AI Flaw, WIRED (Apr. 23, 2025, 7:44 PM), https://www.wired.com/story/google-ai-overviews-meaning/.
If anything, [Lawyer 1’s] alleged lack of knowledge of ChatGPT’s shortcomings leads me to do what courts have been doing with increasing frequency: announce loudly and clearly (so that everyone hears and understands) that lawyers blindly relying on generative AI and citing fake cases are violating Bankruptcy Rule 9011 and will be sanctioned. [Lawyer 1’s] “professed ignorance of the propensity of the AI tools he was using to ‘hallucinate’ citations is evidence that [the] lesser sanctions [imposed in prior cases] have been insufficient to deter the conduct.” Mid Cent. Operating Eng’rs, 2025 WL 574234, at *3.
The second reason I issue sanctions is that, as described above, I also have concerns about the way this particular case was handled. I understand that Debtor’s counsel has a massive docket of cases. But every debtor deserves care and attention. Chapter 13 cases can be challenging to file and manage—especially when they involve complexities like those in this case. If a law firm does not have the resources to devote the time and energy necessary to shepherd hundreds of Chapter 13 cases at the same time, it should refer matters it cannot handle to other attorneys who can—lest a search for time-saving devices lead to these kinds of missteps. What I mean to convey here is that while everyone makes mistakes, I expect—as I think all judges do—attorneys to be more diligent and careful than has been shown here.[13]
Comment: This is an excellent opinion, and it should be read carefully and considered by all lawyers.
Ed Clinton, Jr.


