AI and the Fair Use Defense: Lessons from Two Recent Summary Judgment Rulings

Two federal judges in California just handed AI developers a major win, ruling that training generative models on copyrighted books constitutes fair use—a decision that could unleash innovation while igniting fierce debates over creators’ rights.

The AI fair use rulings from June 2025, covering Anthropic and Meta's LLaMA, have thrust AI training copyright cases and generative AI lawsuits into the spotlight, dominating legal and tech discussions. In a pivotal moment for the $200 billion U.S. AI sector, these summary judgments from the Northern District of California signal a green light for data-hungry algorithms, but they also expose fractures in copyright law amid rapid tech evolution.

The Anthropic Ruling: Training Claude on Books Gets the Green Light

On June 23, 2025, U.S. District Judge William Alsup granted Anthropic’s motion for partial summary judgment in Bartz v. Anthropic PBC. The case stemmed from authors like Andrea Bartz accusing the Claude AI maker of scraping over 100,000 copyrighted books without permission to train its models.

Alsup ruled decisively: Anthropic’s ingestion of texts for training fell squarely under fair use. He separated the process into two phases—copying for training data and generating outputs—finding the former transformative and non-expressive. “The copying here is not the end; it’s a means to create something new,” Alsup wrote, emphasizing AI’s public benefit in advancing knowledge.

Key Factors in Alsup’s Analysis

Alsup weighed the classic four fair use factors from 17 U.S.C. § 107:

  • Purpose and Character: Highly transformative, as training builds general intelligence, not direct copies.
  • Nature of Work: Creative works like novels weigh against fair use, but AI’s systemic use tipped the scale.
  • Amount Used: Entire works copied, yet deemed necessary for effective training.
  • Market Effect: No harm to book sales; AI outputs don’t supplant originals.

This ruling, hailed as the first on AI training's fair use, drew from precedents like Authors Guild v. Google (2d Cir. 2015), where scanning entire libraries for Google Books was deemed fair.

Meta’s LLaMA Case: A Nuanced Nod to Open-Source AI

Just two days later, on June 25, 2025, Judge Vince Chhabria sided with Meta in Kadrey v. Meta Platforms, Inc., another suit by authors over the LLaMA model’s training on pirated books from datasets like Books3.

Chhabria granted partial summary judgment for Meta, ruling the training process fair use. Unlike Alsup, he focused on the “black box” nature of AI: Inputs vanish into weights and parameters, producing novel outputs. “Plaintiffs fail to show how training displaces the market for their works,” he noted, rejecting claims of verbatim regurgitation without evidence.

Contrasts and Caveats in Chhabria’s Decision

Chhabria’s approach diverged subtly:

  • He stressed plaintiffs’ burden to prove market harm, finding none in training alone.
  • Outputs remained open: Future regurgitation claims could proceed to trial.
  • Open-source LLaMA’s accessibility weighed in favor, promoting broader innovation.

Both rulings underscore that fair use hinges on transformation, not mere copying—echoing the U.S. Copyright Office’s 2024 AI guidelines.

Expert Opinions: Cheers from Tech, Cries from Creators

Legal eagles are buzzing. Skadden Arps’ Ethan Friedman called the decisions “a seismic shift, validating AI’s data needs while leaving room for output scrutiny.” Debevoise & Plimpton’s Jessica Perry added, “These aren’t blanket wins; judges carved out space for infringement suits on generated content.”

Public reactions split sharply. On X, #AIFairUse trended with 50,000 posts: tech advocates like @EFF cheered "Victory for progress!" while authors' groups like @AuthorsGuild lamented "Theft disguised as innovation." An Authors Guild survey showed 70% of writers fear lost royalties, fueling calls for legislative fixes.

Broader Impacts: Reshaping U.S. Tech, Economy, and Policy

For American innovators and consumers, these AI fair use rulings turbocharge the economy: AI firms like OpenAI and Google can scale without licensing every pixel, potentially adding $15.7 trillion to global GDP by 2030 per PwC. U.S. startups gain an edge, fostering jobs in Silicon Valley and beyond.

Lifestyle perks? Everyday tools like ChatGPT evolve faster, aiding remote workers and students—think instant essay brainstorming without ethical qualms. Politically, it pressures Congress: Bipartisan bills like the NO FAKES Act (2024) aim to codify protections, while FTC scrutiny on monopolies looms.

Technologically, expect a data arms race, with ethical datasets rising to preempt lawsuits. In sports? AI-driven analytics for NBA drafts or NFL plays could democratize scouting, but only if fair use holds.

Conclusion: A Fair Use Frontier with Unresolved Edges

These twin rulings in the Anthropic and Meta LLaMA cases affirm AI training as fair use, offering developers breathing room while creators demand balance. As cases like NYT v. OpenAI head to trial, the doctrine's adaptability shines, but so do its limits.

Looking ahead, 2026 could bring appeals or new laws, urging stakeholders to negotiate. For now, U.S. tech surges forward, but at what cost to culture? The gavel’s echo reminds us: Innovation thrives on fair play.
