EU Code of Practice on Synthetic Content to Finalise in May-June 2026 as AI Watermarking Becomes Standard

The European Union’s Code of Practice on synthetic content — the operational instrument that defines how providers of general-purpose AI systems must comply with the watermarking and disclosure obligations under the AI Act — is expected to be finalised in May or June 2026. The Code is the most concrete piece of EU AI governance currently in motion that does not depend on the Omnibus regulation, and its content will set the de facto global standard for AI-generated content disclosure.

What the Code does

Article 50 of the AI Act requires providers of AI systems that generate synthetic audio, image, video or text content to ensure the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. The obligation is binding from 2 August 2026 for new GPAI systems placed on the market after that date, with a transitional period running to 2 August 2027 for systems already on the market. The Code of Practice operationalises this obligation: it defines which watermarking technologies are considered adequate, which testing protocols providers must run, and how the EU AI Office will monitor compliance.

The technology landscape

Two families of content-marking technology are competing for de facto adoption. Invisible watermarking embeds an imperceptible signal in the generated content itself — the approach taken by Google DeepMind’s SynthID and OpenAI’s pixel-level marking. The signal survives common transformations (resizing, light editing) but can be removed by sophisticated adversaries. Provenance metadata instead attaches a verifiable signed manifest to the file, which social media platforms and content distribution networks can query. The C2PA (Coalition for Content Provenance and Authenticity) standard, on which Adobe’s Content Credentials is built, has emerged as the leading provenance specification, backed by Microsoft, Adobe, the BBC and the New York Times.
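To make the provenance-metadata approach concrete, the sketch below shows the core idea in miniature: bind a signed claim to a hash of the file, so any verifier can check both that the claim is authentic and that it describes this exact content. This is a toy illustration only — a real C2PA manifest is a signed CBOR/JUMBF structure carrying X.509 certificate chains, not an HMAC-signed JSON blob, and the key, generator name, and field names here are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; C2PA uses public-key
# certificates, not a shared secret.
SIGNING_KEY = b"demo-key"

def attach_manifest(content: bytes, generator: str) -> dict:
    """Build a toy provenance manifest bound to the content's hash."""
    claim = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the claim matches this exact file."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was tampered with or forged
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"\x89PNG...fake image bytes"
manifest = attach_manifest(image, "example-gpai-model")
print(verify_manifest(image, manifest))            # True: intact file, valid claim
print(verify_manifest(image + b"edit", manifest))  # False: content no longer matches
```

The design also illustrates the approach's main weakness relative to invisible watermarking: because the manifest travels alongside the content rather than inside it, stripping the metadata (or re-encoding the file) silently removes the disclosure.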

The Omnibus interaction

The Omnibus regulation introduces a transitional period for systems already on the market before 2 August 2026, and the institutions disagree on its length. The Commission and the Council favour six months (compliance by 2 February 2027); the Parliament prefers three months (compliance by 2 November 2026). This is one of the few Omnibus issues that has not been politically settled — and it directly affects the synthetic content obligation. The Code of Practice will need to handle both possible legal timelines, providing implementation guidance valid in either scenario.

The enforcement dimension

The EU AI Office, established within the European Commission’s DG CNECT, is the new regulator responsible for synthetic content compliance. Its tools include the power to request technical documentation, run testing on commercially available systems, and impose financial sanctions of up to 3% of global annual turnover. The first compliance assessments are expected in autumn 2026 and will focus on the largest GPAI providers — OpenAI, Anthropic, Google, Meta, Microsoft, Mistral, and a handful of EU-based providers. The findings will set precedent for how strict the enforcement architecture proves to be.

The wider information ecosystem

The Code of Practice arrives at a moment when synthetic content has become a measurable share of the global information ecosystem. Estimates suggest that AI-generated content already represents over 15% of new internet content by volume in 2026, up from negligible shares two years ago. Whether that figure stabilises, grows further, or is bounded by reliable detection — the EU’s bet — depends substantially on the technical adequacy of the watermarking standards that the May-June 2026 Code will define. For governments, platforms and users alike, the operational answer to “how do I know if this was made by AI?” is being decided this spring.
