AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement

This article is authored by Luca Bertuzzi for Euractiv.

After 22 hours of intense negotiations, EU policymakers reached a provisional agreement on the rules for the most powerful AI models, but strong disagreement on the law enforcement chapter forced the exhausted officials to call a recess.

The AI Act is a landmark bill to regulate Artificial Intelligence based on its capacity to cause harm. The file is at the last stage of the legislative process as the EU Commission, Council, and Parliament meet in so-called trilogues to hash out the final provisions.

The final trilogue started on Wednesday (6 December) and ran almost uninterrupted for an entire day before a recess was called until Friday morning. In this first part of the negotiations, an agreement was reached on regulating powerful AI models.


The regulation’s definition of AI takes all the main elements of the OECD’s definition, although it does not repeat it word for word.

As part of the provisional agreement, free and open-source software will be excluded from the regulation’s scope unless it constitutes a high-risk system, a prohibited application, or an AI solution at risk of causing manipulation.

On the negotiators’ table after the recess will be the issue of the national security exemption, since EU countries, led by France, asked for a broad exemption for any AI system used for military or defence purposes, including for external contractors.

Another point to discuss is whether the regulation will cover AI systems that were already on the market before its entry into application if they subsequently undergo a significant change.

Foundation models

According to a compromise document seen by Euractiv, the tiered approach was maintained with an automatic categorisation as ‘systemic’ for models that were trained with computing power above 10^25 floating point operations.

A new annexe will provide criteria for the AI Office to make qualitative designation decisions ex officio or based on a qualified alert from the scientific panel. Criteria include the number of business users and the model’s parameters, and can be updated based on technological developments.

Transparency obligations will apply to all models, including publishing a sufficiently detailed summary of the training data “without prejudice of trade secrets”. AI-generated content will have to be immediately recognisable.

Importantly, the AI Act will not apply to free and open source models whose parameters are made publicly available, except for what concerns implementing a policy to comply with copyright law, publishing the detailed summary, obligations for systemic models, and the responsibilities along the AI value chain.

For the top-tier models, the obligations include model evaluation, assessing and keeping track of systemic risks, cybersecurity protection, and reporting on the model’s energy consumption.

The codes of practice are only meant to complement the binding obligations until harmonised technical standards are put in place, and the Commission will be able to intervene via delegated acts if the process is taking too long.


