France, Germany, Italy push for ‘mandatory self-regulation’ for foundation models in EU’s AI law

This article is authored by Luca Bertuzzi for Euractiv.

The three biggest EU countries are pushing for codes of conduct without an initial sanction regime for foundation models rather than prescriptive obligations in the AI rulebook, according to a non-paper seen by Euractiv.

The AI Act is a flagship piece of EU legislation to regulate Artificial Intelligence based on its capacity to cause harm. The file is currently in the last phase of the legislative process, where the EU Commission, Council, and Parliament gather in ‘trilogues’ to hash out the law’s final provisions.

The negotiations on the world’s first comprehensive AI law have been disrupted by the rise of ChatGPT, a versatile type of AI system known as General Purpose AI, built on OpenAI’s powerful foundation model GPT-4.

On 10 November, Euractiv reported that the entire legislation was at risk following mounting opposition from France, which gained support from Germany and Italy in its push against any regulation on foundation models.

The EU heavyweights – France, Germany, and Italy – asked the Spanish presidency of the EU Council, which negotiates on behalf of member states, to retreat from the tiered approach on which there seemed to be a consensus at the last political trilogue in mid-October.

In response, European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable. In recent weeks, the Spanish presidency attempted to mediate a solution between the EU parliamentarians and the most reluctant European governments.

However, the three countries circulated a non-paper on Sunday (19 November) that shows little room for compromise, arguing that horizontal rules on foundation models would go against the technology-neutral and risk-based approach of the AI Act, which is meant to preserve innovation and safety at the same time.

“The inherent risks lie in the application of AI systems rather than in the technology itself. European standards can support this approach following the new legislative framework,” the document said, adding that the signatories are “opposed to a two-tier approach for foundation models”.

“When it comes to foundation models, we oppose instoring un-tested norms and suggest to build in the meantime on mandatory self-regulation through codes of conduct,” the non-paper further said, noting that these follow the principles defined at the G7 under the Hiroshima process.


