
EU's Digital Omnibus Postpones AI Act Compliance for High-Risk Uses

The EU proposes to delay key AI Act rules for high-risk applications until December 2027 amid Big Tech pushback, as part of a broader effort to simplify its digital regulations.


By Marcus Bell


Image for illustrative purpose.

The European Union has proposed delaying the strictest provisions of its AI Act for high-risk applications until December 2027, pushing back the original deadline from August 2026.

The delay is part of the EU's new Digital Omnibus package, which aims to simplify the bloc's fast-evolving digital rulebook amid pressure from major technology companies.

The Digital Omnibus covers a broad range of digital laws, including the AI Act, GDPR, e-Privacy Directive, and others.

Officials emphasize that simplification does not equate to deregulation, but rather a critical review intended to reduce red tape and help Europe remain competitive globally.

Why is the EU delaying the high-risk AI rules until 2027?

The Commission cites delays in member states' implementation and the need for businesses to adapt to complex new regulations as core reasons for postponement.

Many EU countries have not yet established the enforcement bodies necessary to oversee compliance, causing operational bottlenecks.

Furthermore, the digital ecosystem remains highly dynamic, and the delay enables the EU to work on harmonizing standards and clarifying obligations.

This approach aims to balance innovation support with robust safety and rights protections.

Did you know?
The EU AI Act is the world's first comprehensive legal framework to regulate AI systems based on their risk levels.

What changes does the Digital Omnibus package introduce?

The most notable change is the postponement of AI regulations for higher-risk applications, including biometric ID, job recruitment algorithms, health tools, creditworthiness scoring, and law enforcement AI.

The package also proposes simplifying rules for cookie consent pop-ups and easing aspects of GDPR related to AI training data.

The deadline for sector-specific high-risk AI, such as medical devices and aviation systems, may be pushed further to August 2028.

Besides delays, the package aims to integrate cross-cutting digital legislation to reduce administrative burdens, potentially saving billions for EU businesses.

How does Big Tech influence the EU's AI regulation plans?

Lobbying from major technology companies, including Alphabet, Meta, and OpenAI, has played a significant role. These companies favor the delay as it grants them extended timeframes to adjust AI practices and use broader personal data pools for AI training.

The Cloud Computing Industry Association, which represents giants like Amazon and Apple, welcomed the delay but is calling for greater clarity and broader modernization of the AI rules.

Critics argue that the changes disproportionately benefit Big Tech at the expense of consumer protection.


What are the implications for consumer rights and data privacy?

Experts warn that the delay could undermine safeguards against biased AI decisions affecting loans, insurance, and employment without users' knowledge or consent. Consumer advocates stress that simpler rules should not result in weaker protections.

The proposed amendments to GDPR within the package would allow tech firms more freedom to utilize European personal data for AI development, raising privacy concerns.

European consumer organizations urge legislators to prioritize enforcement structures to uphold rights in this evolving landscape.

What uncertainties remain around AI Act enforcement?

Despite the proposals, the Digital Omnibus is subject to negotiation by EU member states and the European Parliament. Some lawmakers oppose amending recently adopted laws so soon and object to bypassing traditional legislative impact assessments.

There is also potential for the Commission to accelerate compliance deadlines if standards mature faster than expected.

Consequently, businesses face an unpredictable regulatory timeline, complicating AI deployment planning.

Looking forward, the EU aims to strike a careful balance between fostering innovation and protecting fundamental rights as artificial intelligence technologies rapidly advance.

The final shape of AI regulation will likely hinge on ongoing political negotiations and real-world readiness to enforce high-risk AI safeguards.



© 2025 Wordwise Media.
All rights reserved.