Three Months from 2 August 2026: How the EU AI Act Is Reshaping European Business as the Omnibus Stalls

Three months ahead of the 2 August 2026 deadline for the entry into application of the EU AI Act’s high-risk obligations, organisations across the European Union face a complex landscape. The Omnibus regulation that would postpone the deadline to 2 December 2027 remains unadopted, with the third trilogue, scheduled for 13 May 2026, the last realistic opportunity for closure before summer. For companies operating high-risk AI systems, the legal position is unambiguous: plan against the existing deadline.

What ‘high-risk’ actually means

The AI Act defines high-risk AI systems through two routes. Annex III covers stand-alone systems in eight defined use cases: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes. Annex I covers AI as a safety component in products already regulated under EU sectoral safety law — machinery, medical devices, toys, vehicles, aviation, in-vitro diagnostics. Together these two routes capture the great majority of commercial AI deployment in regulated economic sectors.

What the obligations actually require

Providers of high-risk AI systems must comply with a structured set of obligations. They must establish a risk management system covering the entire AI lifecycle. They must document training, validation and testing data and ensure data governance. They must produce technical documentation sufficient for conformity assessment. They must implement human oversight measures. They must achieve appropriate levels of accuracy, robustness, and cybersecurity. They must register the system in the EU AI database before placing it on the market. Deployers face their own obligations on monitoring, log preservation and incident reporting.

The compliance reality on the ground

The Omnibus was not proposed in a vacuum. It was the Commission’s response to industry concerns that harmonised standards were not yet finalised, that conformity assessment infrastructure was not in place, and that national competent authorities had not been designated in many member states. Those concerns remain valid. CEN-CENELEC’s Joint Technical Committee 21 has delivered initial drafts of the harmonised standards, but final publication is staggered through 2026 and into 2027. National authority designations are progressing but remain uneven across the 27 member states.

What companies are actually doing

Three patterns have emerged across European operators. Large multinationals with sophisticated compliance functions have built dedicated AI governance teams, typically anchored in the Chief Risk Officer or Chief Information Security Officer reporting line, and are running parallel compliance pipelines against both the original and the postponed deadlines. Mid-size European firms are in a more uncertain position, often dependent on industry association guidance and on conformity assessment bodies that have not yet completed their own designation. SMEs face the most acute uncertainty; they are the constituency for whom the Omnibus simplification provisions were specifically designed.

The asymmetric outcomes

What 2 August 2026 actually delivers depends on whether the Omnibus passes. If yes, the high-risk deadline moves to 2 December 2027, organisations get an additional sixteen months, and the standards-and-authorities ecosystem catches up. If no, the original deadline applies — and the Commission is expected to issue forbearance guidance acknowledging the standards gap, with enforcement realistically calibrated against availability of the necessary harmonised infrastructure. In either case, the underlying compliance architecture is fixed: high-risk AI in Europe will be a regulated activity, and 2026 is the year that European business has to decide how it builds for that reality.
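The sixteen-month figure above is plain calendar arithmetic between the two candidate deadlines. As a minimal check, using only the Python standard library (variable names are illustrative, not from any official source):

```python
from datetime import date

# Entry-into-application date for the high-risk obligations under the AI Act
original_deadline = date(2026, 8, 2)
# Postponed date proposed in the Omnibus regulation
postponed_deadline = date(2027, 12, 2)

# Whole-month difference between the two deadlines
months_gained = (
    (postponed_deadline.year - original_deadline.year) * 12
    + (postponed_deadline.month - original_deadline.month)
)
print(months_gained)  # 16
```

Since both dates fall on the second of the month, the year/month difference alone gives the exact gap: sixteen months.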
