The 2 August 2026 deadline: how the EU AI Act is reshaping European business

For European businesses deploying or developing artificial intelligence, the most consequential date on the 2026 calendar is 2 August 2026. That is when the bulk of the EU AI Act’s substantive obligations become directly applicable across the bloc — including requirements for high-risk AI systems listed in Annex III of the regulation.

What changes on 2 August 2026

From that date, AI systems used in employment decisions, credit scoring, education, essential public services, law enforcement, migration, and the administration of justice fall under enforceable high-risk obligations. Providers must conduct conformity assessments, prepare technical documentation, implement risk management systems, ensure human oversight, and register systems in an EU database. Deployers — companies that purchase and use AI tools — also carry obligations around monitoring, logging, and reporting.

The fine structure

Penalties under the AI Act are among the steepest in EU regulatory history. Non-compliance with prohibited practices can trigger fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. Other infringements range up to EUR 15 million or 3% of turnover. National authorities will lead enforcement on most matters; the European AI Office, established within the Commission, oversees general-purpose AI models.
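The "whichever is higher" rule is a simple calculation worth making concrete. The sketch below is illustrative only: the function name is invented, and the turnover figure is a hypothetical example, while the caps and percentages come from the tiers cited above.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the applicable ceiling: the higher of the fixed cap
    or the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Prohibited-practice tier: EUR 35 million or 7% of turnover, whichever is higher.
# For a hypothetical company with EUR 1 billion in turnover, 7% is EUR 70 million,
# so the turnover-based figure sets the ceiling.
cap = max_fine(1_000_000_000, 35_000_000, 0.07)
print(cap)  # 70000000.0
```

For smaller firms the fixed cap dominates: at EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million figure applies.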

The Digital Omnibus question

The European Commission’s Digital Omnibus proposal, adopted on 19 November 2025, is now under negotiation between the Parliament and the Council. It contemplates postponing some high-risk obligations until December 2027 to ease implementation. Industry has welcomed the prospect of relief, but legal advisers broadly agree on one point: companies should not assume the extension will materialise. The August 2026 deadline remains the binding date in law.

What companies are doing now

Pragmatic compliance is taking shape across European industry. Multinationals have set up cross-functional AI Act steering committees, mapped their AI systems against the four risk categories (prohibited, high-risk, limited, minimal), and started drafting technical documentation. Smaller companies often discover that they are deployers of high-risk systems they bought without realising it: a recruitment screening tool, a credit decision engine, a fraud detection model.
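The mapping exercise described above amounts to an inventory: every deployed system tagged with one of the four tiers. A minimal sketch, assuming a hypothetical inventory (the system names and their classifications below are illustrative, not legal determinations):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory of deployed tools. Recruitment screening and
# credit decisioning are Annex III use cases, hence tagged high-risk here;
# the other entries are common illustrative examples of lower tiers.
inventory = {
    "recruitment_screening_tool": RiskTier.HIGH_RISK,
    "credit_decision_engine": RiskTier.HIGH_RISK,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

# Systems in this list carry the deployer obligations discussed above:
# monitoring, logging, and reporting.
high_risk = [name for name, tier in inventory.items() if tier is RiskTier.HIGH_RISK]
print(high_risk)  # ['recruitment_screening_tool', 'credit_decision_engine']
```

Even a spreadsheet version of this mapping surfaces the surprise the paragraph above describes: purchased tools that quietly place a company in the deployer role for a high-risk system.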

The regulatory sandboxes

Each member state must establish at least one AI regulatory sandbox by 2 August 2026 — a controlled environment in which firms can test high-risk AI systems under regulatory supervision before market launch. Spain was the first to operationalise its sandbox; France, Germany, the Netherlands, and Italy followed. The sandboxes are intended to balance innovation with compliance, and to generate early supervisory practice on edge cases.

The wider picture

The AI Act is rarely a standalone compliance project. It overlaps with the GDPR (especially for biometric and emotion-recognition systems), with the Digital Services Act (for online platforms), with sectoral rules (medical devices, machinery, aviation), and with national employment law. Companies that treat August 2026 as a checklist deadline will be ill-prepared. Those that treat it as the start of a long-term governance programme will be better positioned — both legally and competitively — when the rules tighten further from 2027 onwards.
