Trending Topic: AI-Generated Fake Citations Spark Ethical and Legal Concerns in Court
Phoenix, AZ – August 22, 2025 – In a recent development in the OnlyFans litigation, attorneys from Hagens Berman Sobol Shapiro LLP have been accused of submitting opposition briefs containing AI-generated fake case citations, misquoted real cases, and language not found in prior judicial orders, according to a Daily Journal report published on August 19, 2025. The allegations, raised by opposing counsel in a case filed in Arizona, highlight a growing concern in the legal profession: the misuse of generative AI tools leading to “hallucinations”—fabricated or inaccurate content presented as factual. This incident adds to a string of similar cases across the U.S., raising questions about ethical obligations and the need for AI literacy among lawyers.
Details of the OnlyFans Litigation Incident
The accusations center on multiple opposition briefs filed in July 2025 by Hagens Berman partner Robert B. Carey in an Arizona federal court. Opposing counsel claimed the briefs cited a nonexistent case, misquoted existing cases, and included language falsely attributed to a previous court order. Carey, in a statement to the Daily Journal, attributed the errors to an attorney outside the firm who was undergoing a family health crisis and used AI to draft the documents. He emphasized that the mistakes were unintentional and apologized, noting efforts to correct the filings. The report did not describe the nature of the underlying OnlyFans litigation, and the full Daily Journal article is behind a subscription paywall.
This incident is part of a broader pattern of AI-related errors in legal practice. The Daily Journal report underscores the risks of relying on generative AI tools like ChatGPT or Claude without rigorous verification, particularly in high-stakes litigation involving platforms like OnlyFans, where intellectual property, privacy, and commercial disputes are common.
Broader Context of AI Hallucinations in Legal Practice
The use of AI in legal research has led to a surge in documented “hallucination” cases, with a database by French lawyer and data scientist Damien Charlotin tracking 120 instances since June 2023, including 48 in 2025 alone. These hallucinations occur when AI models generate plausible but fictitious legal citations or facts, often due to their reliance on statistical patterns rather than verified data. Notable examples include:
- Mata v. Avianca (2023): New York lawyers were fined $5,000 for citing fake cases generated by ChatGPT in a personal injury lawsuit against an airline, with the judge calling it an “unprecedented circumstance”.
- Lacey v. State Farm (2025): Attorneys from Ellis George LLP and K&L Gates were sanctioned $31,100 for submitting a brief with nine incorrect citations, two of which were nonexistent, in a California federal court. Special Master Michael Wilner described the incident as a “collective debacle,” nearly misleading him into including fake citations in a judicial order.
- Anthropic Copyright Case (2025): Latham & Watkins apologized for an AI-generated false citation in an expert report defending Anthropic in a music lyrics copyright lawsuit, highlighting the risks even at major firms.
These cases illustrate a recurring issue: lawyers failing to verify AI-generated outputs, violating ethical duties under rules like Federal Rule of Civil Procedure 11(b)(2), which requires reasonable inquiry into the accuracy of legal filings. The American Bar Association has emphasized that these obligations apply to AI-generated content, with unintentional misstatements still constituting incompetence.
Ethical and Professional Implications
The OnlyFans litigation incident underscores the ethical perils of AI in legal practice. Special Master Wilner, in the Lacey case, criticized attorneys for “bad faith” conduct tantamount to misleading the court, emphasizing that lawyers must verify citations before submission. Similarly, Judge Marcia Crone in a 2024 Texas case imposed a $2,000 penalty and mandated AI training for a lawyer who cited fake cases, reinforcing the duty of competence.
Legal experts like Andrew Perlman, dean of Suffolk University Law School, argue that relying on AI without verification is “pure and simple” incompetence. Harry Surden, a University of Colorado law professor, attributes these errors to a “lack of AI literacy,” urging lawyers to learn the strengths and weaknesses of these tools. The Ontario Superior Court’s 2025 ruling in Ko v. Li further warned that submitting AI-hallucinated cases could constitute contempt of court, emphasizing the lawyer’s duty not to mislead.
Industry Response and Mitigation Strategies
The legal profession is grappling with how to integrate AI responsibly. Recommendations include:
- Verification Protocols: Lawyers should cross-check AI-generated citations using trusted databases like Westlaw or LexisNexis, as advised by Kay Pang and Nicola Shaver on LinkedIn.
- AI as a Tool, Not a Substitute: A Baker Botts article suggests treating AI outputs like work from a junior associate, requiring oversight and independent judgment.
- Training and Education: Firms are urged to implement AI training programs, as mandated in some court sanctions, to enhance digital literacy.
- Court Oversight: Some federal courts, like the Northern District of Texas, now require attorneys to disclose AI use in filings, with automated quality control programs proposed to flag hallucinations.
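To make the first and last of these recommendations concrete, here is a minimal, hypothetical Python sketch of the kind of automated quality-control check courts and firms have proposed: it extracts simple “X v. Y” case names with a regular expression and flags any that do not appear in a verified set. The regex, the case names, and the VERIFIED_CASES set are illustrative assumptions only; a production checker would match the far more varied formats of real citations and query an authoritative database such as Westlaw or LexisNexis rather than a hard-coded list.

```python
import re

# Toy pattern: matches one-word party names on each side of "v."
# (real case citations are far more varied; this is illustrative only).
CITATION_PATTERN = re.compile(r"\b([A-Z][A-Za-z]+ v\. [A-Z][A-Za-z]+)\b")

# Stand-in for a trusted citation database (hypothetical).
VERIFIED_CASES = {
    "Mata v. Avianca",
    "Ko v. Li",
}

def flag_unverified_citations(brief_text: str) -> list[str]:
    """Return case names found in the brief that are absent from the verified set."""
    found = CITATION_PATTERN.findall(brief_text)
    return [case for case in found if case not in VERIFIED_CASES]

brief = (
    "As held in Mata v. Avianca, courts may sanction counsel; "
    "Smith v. Nonexistent appears fabricated."
)
print(flag_unverified_citations(brief))  # → ['Smith v. Nonexistent']
```

A checker like this cannot confirm that a citation is accurate, only that the case name exists in a trusted source, which is why the commentators above still insist on human review of every AI-assisted filing.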
The OnlyFans case highlights the need for these measures, as even reputable firms like Hagens Berman face scrutiny for AI missteps. Posts on X reflect public concern, with users like @RandomTheGuy_ describing a 74-year-old man who was scolded in court for using an AI “lawyer” without disclosure, underscoring the judiciary’s growing intolerance for unchecked AI use.
Looking Ahead
The accusations in the OnlyFans litigation add to a mounting tally of AI-related errors, with Charlotin’s database indicating over 20 cases in the past month alone. As generative AI becomes more prevalent, with 63% of lawyers using it for work per a 2023 Thomson Reuters survey, the profession must balance the technology’s efficiency gains with ethical rigor. Courts are responding with sanctions, fines, and warnings, but penalties remain “mild,” placing the onus on attorneys to self-regulate.
For Hagens Berman and the broader legal community, the incident serves as a wake-up call to prioritize verification and training. As the OnlyFans case progresses, it may prompt further judicial guidance on AI use, potentially influencing how firms navigate technology in litigation. The challenge lies in harnessing AI’s potential while upholding the profession’s core duty: to serve justice with accuracy and integrity.
Sources: DailyJournal.com, LawSites, Reuters.com, BakerBotts.com, McLane.com, AboveTheLaw.com, Jdsupra.com, Mashable.com, Forbes.com, LegalDive.com, NPR.org, DamienCharlotin.com, ArsTechnica.com, WorldLawyersForum.org, Law360.com, McMillan.ca, KSL.com, LeidenLawBlog.nl, Reason.com