Tech Trade Group’s Expert Witness Accused of Submitting AI-Generated Fabrications

Trending Topic: AI Misuse in Legal Proceedings Sparks Ethical and Procedural Concerns

Baton Rouge, LA – August 21, 2025 – NetChoice, a prominent tech trade group representing companies such as X and Meta, has withdrawn its sole expert witness, Dr. Anthony Bean, from a lawsuit challenging Louisiana’s age-verification law after allegations surfaced that his testimony contained fabricated quotes and citations generated by artificial intelligence. Louisiana Attorney General Liz Murrill detailed the accusations in an August 15, 2025, court filing, prompting NetChoice to retract Bean’s declaration rather than let it further undermine the group’s case.

The lawsuit, filed in the U.S. District Court for the Middle District of Louisiana, contests the constitutionality of Louisiana’s Senate Bill 162, which mandates age verification on social media platforms to protect minors. Dr. Bean, a clinical psychologist, was NetChoice’s key expert, but Louisiana’s legal team alleged that his report cited non-existent sources and invented quotations, flaws that even a cursory comparison with the original documents exposed. “Dr. Bean’s reliability is shot, and the proof is overwhelming,” the state’s attorneys wrote, urging the court to exclude his testimony and to bar NetChoice from amending it.

AI-Generated Errors Undermine Credibility

The allegations against Bean highlight a growing concern in the legal field: the misuse of AI tools like ChatGPT, which can produce “hallucinated” citations that look legitimate but are entirely fictitious. Some of the source articles Bean cited were real and broadly consistent with his claims, but the quotations attributed to them were fabricated, raising questions about whether he relied on AI to draft his report. Bean did not respond to requests for comment. NetChoice acknowledged the problem, stating, “Had we known of Dr. Bean’s misattributions, we would have withdrawn his declaration because it did not meet our standards.” The group expressed disappointment but said it remains confident in its challenge to the law.

This incident follows a pattern of AI-related errors in legal filings. In an earlier case, a Stanford professor’s expert testimony was excluded after AI-generated citations in it were found to be fabrications, and in 2023, New York lawyers were fined $5,000 for citing fake cases produced by ChatGPT. These episodes underscore the risks of unverified AI use and have prompted judges, including Chief Judge Randy Crane of the Southern District of Texas, to issue orders emphasizing attorneys’ responsibility for the accuracy of AI-assisted filings.

Broader Implications for Legal Practice

The controversy has fueled discussions about the ethical use of AI in law, with posts on X reflecting public concern over its potential to erode trust in judicial proceedings. Users have noted that while AI can save time, its outputs require rigorous human verification, akin to reviewing an intern’s work. The incident has also intensified scrutiny of NetChoice’s litigation strategy: Louisiana’s attorneys are seeking sanctions, including reasonable fees and costs, arguing that the fabricated report derailed the litigation process.

NetChoice’s withdrawal of Bean’s testimony does not end its challenge to the age-verification law, but it weakens the group’s position as the case progresses. NetChoice is now barred from submitting new expert testimony, a significant setback in a high-stakes lawsuit that could influence similar regulations nationwide. As courts grapple with the integration of AI tools, the case serves as a cautionary tale, underscoring the need for robust verification protocols to preserve credibility in legal proceedings.

Sources: Law.com, News.BloombergLaw.com, Law360.com, Reuters.com