AI Act: EU Commission attempts to revive tiered approach shifting to General Purpose AI

This article was authored by Luca Bertuzzi for Euractiv.

The European Commission circulated on Sunday (19 November) a possible compromise on the AI law to break the deadlock on foundation models, applying the tiered approach to General Purpose AI and introducing codes of practice for models with systemic risks. 

The AI Act is a landmark bill to regulate Artificial Intelligence based on its potential risks. The legislative proposal is currently in the final phase of the legislative process, the so-called trilogues, between the EU Commission, Council and Parliament.

In recent weeks, the EU policymakers involved have been butting heads over how to regulate powerful foundation models like GPT-4, the model behind the world's most famous chatbot, ChatGPT. Such versatile systems are known as General Purpose AI.

On 10 November, Euractiv reported on how the whole legislation risked derailing after the clash, with Europe's three largest economies speaking out against the tiered approach initially envisaged for foundation models and pushing back against any regulation other than codes of conduct.

However, leaving foundation models without any obligations is not an option for the European Parliament. The MEPs involved in the file are meeting on Tuesday (21 November) to discuss foundation models, governance, and law enforcement.

On Sunday, the EU executive shared a compromise with the European Parliament’s co-rapporteurs, who circulated it to their colleagues on Monday. The text maintains the tiered approach but focuses it on General Purpose AI, tones down the obligations and introduces codes of practice.

GPAI models and systems

The text is a significant rework of the version circulated by the Spanish presidency, on which the leading MEPs provided feedback earlier this month. At its core, there is now a distinction between General Purpose AI (GPAI) models and systems.

“‘General-purpose AI model’ means an AI model, including when trained with a large amount of data using self-supervision at scale, that is capable to [competently] perform a wide range of distinctive tasks regardless of the way the model is released on the market,” reads the new definition.

By contrast, a GPAI system would be “based on an AI model that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

The idea is that GPAI models can entail systemic risks related to ‘frontier capabilities’, to be assessed based on ‘appropriate’ technical tools and methodologies. In their notes, the co-rapporteurs question the text’s terminology and vagueness.
