By Staff Reporter
July 11, 2025
In a high-profile defamation case involving MyPillow CEO Mike Lindell, a federal judge in Colorado has issued a stern warning about the dangers of artificial intelligence (AI) in legal practice. Lindell’s attorneys, Christopher Kachouroff and Jennifer DeMaster, were fined $3,000 each for submitting a court filing riddled with AI-generated errors, including citations to nonexistent cases. The sanctions, ordered by U.S. District Judge Nina Y. Wang on July 7, 2025, underscore the growing challenge of balancing AI’s potential with the need for responsible use in the legal system.
The problematic filing was part of a defamation lawsuit brought by Eric Coomer, a former Dominion Voting Systems executive, against Lindell and his media company, Frankspeech. Coomer accused Lindell of spreading baseless conspiracy theories about the 2020 election, including claims that Coomer and Dominion manipulated voting equipment to favor Joe Biden. Last month, a jury found Lindell personally liable for $440,500 and Frankspeech for $1,865,500 in damages, totaling over $2.3 million. MyPillow itself was not found liable.
The issue arose from a February 2025 brief filed by Lindell’s attorneys opposing Coomer’s motion to exclude certain evidence. The document contained over two dozen errors, including “hallucinated” cases—fictitious legal precedents generated by AI tools. Judge Wang, unimpressed by the attorneys’ explanation, noted that Kachouroff initially failed to disclose the use of generative AI, only admitting it when directly questioned. Wang called his claim that DeMaster “mistakenly filed” an unedited draft “puzzlingly defiant” and rejected the notion that the errors were inadvertent.
“This case is a microcosm of the broader dilemma,” said Maura Grossman, a professor at the University of Waterloo and an adjunct law professor at Osgoode Hall Law School. She highlighted three common AI-related issues in legal filings: fake cases, fabricated quotes from real cases, and correct citations misused to support invalid arguments. Grossman emphasized that attorneys must verify AI-generated content, as courts rely on them for accurate legal representation.
The fines, described by Wang as the “least severe sanction” to deter future misconduct, were ordered to be paid by August 4, 2025. Lindell has announced plans to appeal the defamation verdict, but the sanctions on his attorneys have sparked broader discussion about AI’s role in the legal field.
Damien Charlotin, who tracks AI-related legal errors, noted that such incidents have surged since spring 2025, with 206 documented cases of AI-generated hallucinations in court filings worldwide. “Many more likely go unreported due to embarrassment,” he told NPR. The rapid adoption of AI tools, outpacing regulatory frameworks, has led to other missteps, such as a New York plaintiff’s attempt to use an AI-generated avatar to argue a case, which drew judicial ire.
Grossman advised lawyers to admit AI-related errors promptly, warning, “You are likely to get a harsher penalty if you don’t come clean.” The Lindell case serves as a cautionary tale for the legal profession, highlighting the need for rigorous oversight whenever attorneys rely on AI tools.
The incident has also drawn attention on platforms like X, where users criticized the attorneys’ reliance on AI; one post called their explanation “troubling,” and another noted the filing’s “nearly 30” fake citations. As AI’s presence in legal practice grows, the Lindell case stands as a stark reminder: technology can enhance efficiency, but only if wielded with diligence and transparency.