The AI Act, three months out: guidelines, sandboxes, and the cliff edge of August 2026
With less than three months remaining until the 2 August 2026 enforcement deadline for the bulk of the EU AI Act, May 2026 is shaping up as a defining moment for European technology regulation. The legal text was adopted in 2024. The implementation is happening now — and not without difficulty.
The Commission’s guidelines: the missing pieces
The European Commission was expected to publish, by 2 February 2026, guidelines specifying the practical implementation of Article 6 — the provision that classifies AI systems as high-risk and triggers the heaviest obligations. That deadline has slipped. Industry has been lobbying hard for clarity, particularly on borderline cases: when does a workforce productivity tool become a high-risk employment system, and when does an automated decision-support tool cross into the high-risk category?
Regulatory sandboxes: live in some, planned in others
Each member state must establish at least one AI regulatory sandbox by 2 August 2026. Spain led the way, with its sandbox already accepting applications and producing case studies. France, Germany, the Netherlands, and Italy have followed. Smaller member states are still in the setup phase. Sandboxes are critical because they offer a controlled environment in which providers can test high-risk systems with regulator engagement before going to market.
The General-Purpose AI Code of Practice
For providers of general-purpose AI models — the foundation models that power most modern AI applications — the Commission published the General-Purpose AI Code of Practice in 2025, establishing voluntary compliance commitments around safety, transparency, and copyright. Most major model providers have signed on, though some have flagged interpretation challenges, particularly around copyright disclosures and systemic risk assessments.
The Digital Omnibus negotiation
The Commission’s Digital Omnibus proposal, adopted on 19 November 2025, contains targeted amendments to the AI Act intended to ease implementation — including a possible postponement of certain high-risk obligations until December 2027. The European Parliament and the Council are negotiating the package now. A failure to agree before the August 2026 deadline would mean the original timeline applies as written.
The compliance reality
For most companies, May 2026 is the moment of truth on compliance readiness. Mature programmes have classified their AI systems, set up governance, drafted technical documentation, and engaged with notified bodies. Less prepared organisations are now in catch-up mode, with consultancies reporting that capacity is fully booked through the summer. The risk of last-minute compliance failures — and of regulator scrutiny in the autumn — is rising.
Beyond the deadline
August 2026 is not the end of AI Act implementation. 2 August 2027 brings the next major phase: the rules for high-risk AI systems embedded in regulated products (medical devices, machinery, vehicles), the latest deadline for legacy GPAI models to comply, and a broader build-out of conformity assessment infrastructure. The regulation will keep producing new obligations, new guidance, and new enforcement decisions for years. May 2026 is, in many ways, only the start.
