Copyright Battles and the EU AI Act: A Complex Intersection
Brussels, Belgium – August 22, 2025 – The European Union’s Artificial Intelligence Act (AI Act), published in the Official Journal on July 12, 2024, and in force since August 1, 2024, aimed to establish the world’s first comprehensive legal framework for AI, promoting trustworthy and human-centric technology. However, ongoing copyright battles have exposed significant flaws in the Act, particularly its handling of generative AI and text and data mining (TDM) exceptions, fueling debates that threaten its effectiveness and prompting calls for reform. Critics, including legal scholars and creative industry stakeholders, argue that the Act’s ambiguous provisions and “devastating” copyright loopholes undermine protections for artists, writers, and other rights holders, potentially contributing to its perceived downfall.
The EU AI Act and Copyright Provisions
The AI Act, designed to regulate AI systems based on risk categories, includes specific obligations for providers of general-purpose AI (GPAI) models, such as those powering generative AI tools like ChatGPT and DALL-E. While the Act does not directly regulate copyright, it imposes transparency and compliance requirements under Article 53, mandating that AI providers:
- Respect EU copyright law, particularly the TDM opt-out mechanism under Article 4(3) of the Copyright in the Digital Single Market Directive (CDSM, Directive (EU) 2019/790).
- Provide a “sufficiently detailed summary” of training data used, per a template from the European AI Office, to enable rights holders to enforce their rights.
These provisions aim to balance AI innovation with intellectual property protections. However, as noted in a February 19, 2025, Guardian article, Axel Voss, a key architect of the 2019 Copyright Directive, criticized the AI Act for creating a “legal gap” that leaves creatives vulnerable. The Act’s reliance on the CDSM’s TDM exception, originally intended for limited private use, has been contentious, as it allows tech companies to harvest vast amounts of copyrighted material for AI training unless rights holders explicitly opt out.
Copyright Battles Fueling Criticism
The AI Act’s copyright framework has sparked significant backlash from the creative industries, which argue that it fails to adequately protect their rights. Key issues include:
- TDM Exception Misapplication: A 2024 study by legal scholar Tim Dornis and computer scientist Sebastian Stober concluded that training generative AI models on copyrighted works constitutes “copyright infringement” rather than permissible TDM, challenging the AI Act’s framework. The TDM exception, conceived with limited, largely non-commercial uses in mind, is seen as misaligned with the commercial scale of AI training; Voss called its application here a “misunderstanding” that benefits Big Tech.
- Lack of Transparency: The AI Act requires providers to disclose training data summaries, but the draft rules, as critiqued by Voss and 15 cultural organizations in a February 2025 letter to the European Commission, lack sufficient detail to ensure accountability. Artists such as Dutch electropop musician Aafke Romeijn have highlighted the practical impossibility of verifying whether their works were used, as companies are not obligated to report specific content.
- Enforcement Challenges: The Act’s reliance on rights holders to opt out via machine-readable protocols is impractical for individual creators, who lack the resources to monitor or enforce their rights. Romeijn, in the Guardian article, questioned the feasibility of suing tech giants, citing high costs and reputational risks. The withdrawal of the proposed AI Liability Directive in February 2025 further complicates enforcement, as it would have provided a framework for addressing AI-related copyright infringements.
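To make the opt-out mechanism concrete: neither the CDSM Directive nor the AI Act mandates a single machine-readable protocol, but one channel rights holders use in practice is a site’s robots.txt file, disallowing known AI training crawlers. The sketch below, using Python’s standard-library robots.txt parser, shows how such a reservation can be expressed and checked. The crawler name “GPTBot” is an illustrative example, not an exhaustive or legally authoritative list, and the example domain is hypothetical.

```python
# Illustrative sketch of a robots.txt-based TDM opt-out, one of several
# machine-readable reservation channels in practice. Crawler names are
# examples only; the CDSM Directive does not prescribe any one protocol.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def is_blocked(crawler: str, url: str, robots_txt: str = ROBOTS_TXT) -> bool:
    """Return True if robots.txt disallows `crawler` from fetching `url`."""
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch(crawler, url)

if __name__ == "__main__":
    # The AI training crawler is refused; ordinary crawlers are not.
    print(is_blocked("GPTBot", "https://example.com/article"))
    print(is_blocked("SomeSearchBot", "https://example.com/article"))
```

The critics’ point survives even in this simple form: the burden sits entirely with the rights holder, who must know each crawler’s name in advance and trust that operators honor the file, since nothing in the protocol itself enforces compliance.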
These issues have led to accusations that the AI Act prioritizes tech innovation over creative protections, with Nina George, president of the European Writers Council, labeling the lack of enforcement tools a “scandal.”
Impact on the AI Act’s Perceived Downfall
The copyright battles have not directly caused the AI Act’s “downfall” but have significantly undermined its credibility and implementation. A March 25, 2025, Bruegel report warned that the Act’s restrictive copyright measures, such as requiring developers to respect opt-outs, could reduce the quality of AI models in the EU, stifle innovation, and make the region less attractive for AI development compared to jurisdictions with more lenient copyright regimes, like the U.S. or Japan. This has led to concerns that the Act fails to achieve its goal of fostering competitive, trustworthy AI.
The extraterritorial reach of the Act, outlined in Recital 106, attempts to level the playing field by applying EU copyright standards to all GPAI providers operating in the EU, regardless of where training occurs. However, a November 28, 2024, Kluwer Copyright Blog post by João Pedro Quintais questions the legal feasibility and effectiveness of this approach, given the global nature of AI training and jurisdictional complexities.
Industry and Public Sentiment
The creative community has mobilized against the AI Act’s perceived shortcomings. The European Council of Literary Translators’ Associations, representing 10,000 translators, emphasized in February 2025 that authors should control whether their works are used for AI training and receive fair remuneration. Meanwhile, tech companies argue that restrictive copyright rules hinder innovation, with some, like those cited in a January 29, 2025, ScienceDirect paper, advocating for clearer TDM exceptions to support AI development.
Sentiment on X reflects this divide, with official accounts such as @EU_Commission promoting the Act’s transparency goals, while others, including creatives, criticize its failure to protect artists. The debate underscores a broader tension between fostering AI innovation and safeguarding intellectual property, with no clear resolution as the bulk of the Act’s obligations take full effect in August 2026.
Looking Ahead
The AI Act’s copyright provisions, while well-intentioned, have exposed significant gaps that threaten its effectiveness. Proposals for reform, as suggested in a May 12, 2025, EUIPO study, include enhancing transparency through standardized reporting, developing licensing agreements for AI training, and strengthening opt-out mechanisms. Legal scholars like Eleonora Rosati, cited in a 2024 Cambridge Core article, call for a balanced approach to address liability for AI-generated outputs, which could inform future amendments.
As litigation and advocacy grow—evidenced by U.S. cases such as Andersen v. Stability AI and The New York Times v. OpenAI—the EU faces pressure to refine its framework to protect creators without stifling innovation. The AI Act’s success hinges on addressing these copyright battles, ensuring it remains a model for global AI regulation rather than a cautionary tale of regulatory overreach.
Sources: TheGuardian.com, Europarl.europa.eu, RAND.org, Cambridge.org, ScienceDirect.com, Kluwer Copyright Blog, Bruegel.org, EUIPO.europa.eu