On March 10, 2026, a federal prosecutor with thirty years of experience stood before a magistrate judge in the United States District Court for the Eastern District of North Carolina and announced his resignation.
His offense? He had used artificial intelligence to draft a brief and filed it without adequate review; the brief contained fabricated quotations and misrepresented case holdings. U.S. Magistrate Judge Robert Numbers was not moved by his explanation and told the prosecutor that taking shortcuts on basic work was outrageous. The time-saving allure of AI ended a long career – to say nothing of the damage it may have done to the case and the Government's credibility.
It’s Wednesday. There Must Be Another Lawyer AI Blunder.
If you think that story sounds like an isolated incident, you have not been privy to the court orders that seem to appear daily in lawyers’ inboxes, which some of us read with both horror and a small measure of schadenfreude. Courts across the country have been issuing sanctions orders at a pace suggesting that the seductive charms of generative AI are too much for some harried attorneys. Here are a few reported examples from the last few months:
In In re Richburg, a South Carolina case, a solo practitioner with forty years of experience — hurriedly drafting a motion over the weekend — prompted Microsoft Copilot to find supporting caselaw and included the results without checking whether the cases existed. They did not, and the legal propositions for which they were cited were not supported by applicable law. Although the parties ended up resolving their issues, the bankruptcy court in South Carolina noticed the errant citations and holdings and scheduled a hearing on a rule to show cause.
At the hearing, the lawyer acknowledged his mistakes and expressed remorse “for his blind reliance on a technological tool which he did not seem to fully understand.” No monetary sanctions were ordered, though the bankruptcy court noted that under different circumstances they might have been. Instead, the court required the offending lawyer to attend continuing legal education courses on the ethical use of AI in legal practice and framed its order as a “lesson learned for the bar in general.”
In In re Martin, a bankruptcy attorney used ChatGPT to draft a brief that cited four cases for a key proposition of law — none of which existed. The Illinois bankruptcy court imposed a $5,500 joint and several sanction and required in-person attendance at a national conference session on AI dangers, noting bluntly that “no lawyer should be using ChatGPT or any other generative AI product to perform research without verifying the results.”
In In re Jackson Hospital & Clinic, an attorney at one of the largest law firms in the U.S. filed multiple motions riddled with fabricated quotations and misrepresented case holdings. After the bankruptcy court noted that the cases and regulations she cited did not stand for the propositions she asserted, it ruled against her and denied the motion.
Undeterred, the lawyer then doubled down and — remarkably — submitted a motion for reconsideration that contained even more hallucinated citations. The motion for reconsideration was withdrawn at the hearing, but the court soon scheduled a hearing on a rule to show cause for the offending lawyer and her firm. Ultimately, the Alabama bankruptcy court revoked the lawyer’s pro hac vice admission, and her firm paid over $55,000 in opposing counsel’s fees.
Finally, in Flycatcher Corp. v. Affable Avenue LLC, a New York federal court took the extraordinary step of entering default judgment against a party — ending the case entirely — because its attorney kept filing briefs “peppered with false citations” despite multiple warnings from the court, ultimately submitting fabricated cites even in his response to the show cause order about fabricated cites.
What This Means for You
If you are a bank officer, a special-assets professional, or an in-house attorney working with outside counsel on bankruptcy, collection, or enforcement matters, you have a legitimate stake in how your attorneys are using AI. You pay the bills. The cases described above involved motions, briefs, and responses filed in active litigation — the same filings your counsel may be preparing to advocate your interests.
Consider what can happen when a motion for relief from the automatic stay cites a case that does not exist. Or when an objection to a reorganization plan misquotes the applicable standard. The opposing party catches it and brings it to the court’s attention. At best, the error is embarrassing for your counsel and correctable. At worst, it undermines your credibility (in many respects, you and your attorney are one and the same to the judge), invites sanctions, and wastes time and money addressing a problem that should never have existed.
Claude, ChatGPT, Grok, Copilot, and other generative AI tools are large language models trained on vast amounts of text data to predict and generate human-like text one word at a time. They’re sophisticated pattern-matching machines, not thinking entities. They don’t actually understand what they’re saying or verify that their outputs are true. Ask them and they’ll tell you as much. They did not go to law school, and most don’t have access to legal databases like Westlaw or Lexis.
Generative AI can produce a plausible-sounding pleading that appears perfect except for one thing – all the cases are bunk. An attorney who simply prints and files one of these briefs without verifying the citations and understanding generative AI’s limitations does so at his or her peril – and potentially, if you are the client, yours.
AI Questions For Your Lawyers
None of this means AI has no place in legal practice. Used thoughtfully and with appropriate verification, AI tools can improve efficiency and provide you with better service. Our colleagues in the firm’s privacy and AI practice have written extensively about how to implement AI responsibly, including how to assess vendors, build governance frameworks, and ensure human oversight of AI-generated work product.
Oversight is key. And as a client, you have both the right and the practical interest to ask about AI.
Here are some reasonable questions for your outside counsel:
Does your firm have a written AI policy? A thoughtful policy creates guidelines on which AI tools may be used, how outputs are to be reviewed, and who is responsible for ensuring accuracy. Firms that have faced sanctions often had no policy — or had one that was not followed.
What training has counsel received on AI use? The courts have been clear that claiming ignorance of AI’s tendency to hallucinate (i.e., make stuff up) is no longer a viable defense. Ongoing training signals that a firm is taking the issue seriously.
How does your firm verify AI-generated legal research? The answer should involve human review of every citation — confirming that cases exist, that they say what the brief claims they say, and that they remain good law. That’s the job of a lawyer – one that can’t be outsourced to ChatGPT.
What data security and privacy protections govern how client information is handled when AI tools are used? Some AI platforms store inputs, use them for model training, or route data through third parties – all of which raise privacy concerns. Information you share with your lawyer should remain confidential.
Are you comfortable with AI being used on your matter? This is not an unreasonable question, and a good attorney will welcome the opportunity to discuss how, or whether, he or she uses AI in practice.
The Bottom Line
The AI genie can’t be put back in the bottle. It is here to stay, and it will be largely beneficial if handled responsibly (at least until the machines take over the world and turn us all into human batteries). But right now too many lawyers are treating Claude and ChatGPT like tenured law professors, to the chagrin of judges and at the expense of their reputations and their clients’ cases. Your attorney is making, or will make, choices about how to use AI tools to represent you. You have the right to know what those choices are. The conversation need not be adversarial — it is the kind of informed engagement that good attorney-client relationships are built on.