The EU AI Law Is Ready, but Regulators Face Challenges in Enforcement
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689), the world’s first comprehensive AI regulatory framework, entered into force on August 1, 2024, marking a significant milestone in global tech governance. Under a phased implementation schedule, the Act’s first obligations took effect on February 2, 2025, banning certain AI practices, and a critical wave of governance and general-purpose AI (GPAI) requirements followed on August 2, 2025. However, as the EU pushes to set a global standard for AI safety and ethics, concerns are mounting that regulators are unprepared to enforce the ambitious law, potentially undermining its impact.
The EU AI Act: A Groundbreaking Framework
The EU AI Act adopts a risk-based approach, categorizing AI systems into four levels: unacceptable risk (banned), high risk (strictly regulated), limited risk (transparency requirements), and minimal risk (unregulated). Key provisions include:
- Prohibited Practices: As of February 2, 2025, manipulative AI techniques, exploitative systems targeting vulnerable groups, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions) are banned.
- GPAI Obligations: Since August 2, 2025, providers of general-purpose AI models (e.g., GPT-4, DALL-E) must maintain technical documentation, ensure copyright compliance, and publish training data summaries. Models with systemic risks face additional requirements like risk management and incident reporting.
- High-Risk Systems: By August 2, 2026, systems used in employment, education, or critical infrastructure must meet stringent safety, transparency, and oversight standards.
- Penalties: Non-compliance with the prohibitions can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher; most other violations carry fines of up to €15 million or 3%.
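The penalty ceilings above follow a "greater of a fixed cap or a share of turnover" pattern. A minimal sketch of that arithmetic, using the figures from the provisions listed here; the function name and dictionary structure are illustrative, not part of any official tooling:

```python
# Illustrative only: tier names and fine ceilings are taken from the Act's
# penalty provisions as summarized above; max_fine_eur is a hypothetical helper.

def max_fine_eur(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the theoretical fine ceiling: the greater of the fixed cap
    and the turnover-based cap for the given violation type."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),  # banned practices: €35M or 7%
        "other_violation": (15_000_000, 0.03),      # most other breaches: €15M or 3%
    }
    fixed_cap, turnover_share = caps[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# For a firm with €1bn global turnover, 7% (€70M) exceeds the €35M floor:
print(max_fine_eur("prohibited_practice", 1_000_000_000))  # → 70000000.0
```

Note that for large companies the turnover-based figure, not the fixed sum, typically sets the effective ceiling.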
The Act aims to foster trustworthy, human-centric AI while protecting fundamental rights, with the AI Office and national authorities overseeing enforcement.
Regulatory Readiness: A Growing Concern
Despite the Act’s robust framework, regulators across the EU’s 27 member states are struggling to keep pace, raising doubts about effective enforcement. Key challenges include:
- Delayed Technical Standards: The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) have not finalized technical standards critical for compliance, leaving companies guessing how to meet requirements. The European Commission has acknowledged this delay and is considering “alternative solutions” to bridge the gap.
- Under-Resourced Authorities: By August 2, 2025, member states were required to designate national competent authorities (market surveillance and notifying bodies) and report their resources to the Commission. However, many countries lag in staffing and funding these bodies, with some still clarifying their roles. This resource gap could hinder consistent enforcement across the EU.
- Incomplete Governance Structures: The AI Office and AI Board, operational since August 2, 2025, are central to coordinating enforcement, particularly for GPAI models. Yet, the scientific panel of independent experts, meant to support the AI Office, faces delays in becoming fully functional, with applications for experts open until September 14, 2025.
- Enforcement Uncertainty: While penalties are now in effect, many investigatory and enforcement powers, especially for GPAI models, don’t apply until August 2, 2026. This creates ambiguity about how regulators will address non-compliance in the interim, particularly with complex GPAI requirements.
- Industry Pushback: Over 45 leading European companies, including Airbus and Siemens, urged a two-year pause on high-risk and GPAI rules in a July 3, 2025, letter, citing regulatory complexity and unclear guidelines. The Commission rejected a broad delay but signaled flexibility for specific provisions if standards aren’t ready.
Industry and Public Sentiment
The lack of regulatory readiness has sparked varied reactions. On X, users like @orfonline noted that “Implementation delays and a lack of cohesive economic strategy could hinder the EU’s competitiveness in the global AI race,” reflecting concerns about Europe falling behind. Meanwhile, @lawdotcom’s post, “The EU AI Law Is Ready. Regulators? Not So Much,” captured the sentiment that the Act’s ambition outpaces practical enforcement.
Businesses, particularly in high-risk sectors like health and pharma, face a “guessing game” without clear standards, risking costly compliance missteps. Startups, however, may benefit from proposed regulatory relief, such as deferred timelines and simplified requirements, as the EU seeks to balance innovation with oversight. Critics argue that inconsistent enforcement could weaken the Act’s global influence, allowing other regions to shape AI standards.
Critical Perspective
While the EU AI Act is hailed as a landmark, its enforcement challenges reveal a disconnect between policy ambition and operational reality. The staggered timeline was meant to ease implementation, but delays in standards and under-resourced regulators risk creating a patchwork of compliance across member states. This could disproportionately burden smaller firms while letting larger tech giants, with deeper legal resources, navigate loopholes. The Commission’s openness to “alternative solutions” suggests pragmatism but also highlights the Act’s vulnerability to dilution if enforcement falters. Moreover, the uneven enforcement record of earlier EU regulation (e.g., the GDPR) raises questions about whether the AI Act’s penalties will be applied consistently or become symbolic without robust oversight.
What’s Next?
Businesses are advised to act now despite uncertainties:
- Conduct AI Audits: Identify and assess AI systems for risk categories, especially high-risk applications like hiring algorithms or biometric tools.
- Engage with the AI Pact: This voluntary framework offers a way to align with regulators and build trust, particularly for high-risk sectors.
- Monitor Guidance: Stay updated on AI Office and national authority announcements, as evolving guidelines will shape compliance.
For regulators, the focus is on accelerating standard development and bolstering resources. The EU’s global influence hinges on proving the Act is enforceable, not just aspirational. With the next major milestone—high-risk system compliance—set for August 2, 2026, the coming year will be critical for closing the readiness gap.
Sources: Information drawn from DLA Piper, Legal Newsfeed, Fisher Phillips, PwC, and posts on X. Always verify with trusted sources, as X posts may contain unverified claims.