AI Act Trilogue Deal: Comments by Philipp Hacker

Philipp Hacker, Professor of Law and Technology at the European New School of Digital Studies (ENS), has written a comprehensive analysis of the AI Act deal on his LinkedIn page. He writes:

The political agreement on the AI Act is extremely important in a dual sense. First, it sends a strong signal that the EU still functions as a major force in international technology regulation.

Second, in many respects, the contents strike a sensible balance between enabling innovation and protecting fundamental rights and public safety. But some gaps remain. The rules for foundation models (FMs) are a step in the right direction, but they do not go far enough. The minimum standards are far too weak (mere transparency and copyright obligations) – a toothless tiger, in my view. Even 10^24 FLOP models exhibit AI safety and cybersecurity risks that cannot be left to self-regulation. If you want to play in the Champions League, you have to stick to the Champions League rules.

This is why FM regulation is necessary: if you exclude the FMs, the regulatory burden shifts to the downstream providers. Fixing an error a thousand times at deployment is worse than tackling the problem once at its source (the FM) – a clear least-cost-avoider argument from standard (and very economically liberal) law and economics. FM regulation is efficient; self-regulation is inefficient and dangerous in this domain.

Does sensible FM regulation deter innovation? No. A new study finds that even for quite advanced, though not top-tier, 10^24 FLOP models such as Bard and ChatGPT (i.e., below GPT-4 and Gemini), expected compliance costs add up to only roughly 1% of total development costs (https://lnkd.in/ecZTE9RF). This is a sum that everyone, including Mistral, Aleph Alpha etc., can and should invest in basic industry best practices for AI safety.

Third, however, the attractiveness of the EU as a future hub for AI innovation and deployment should have been strengthened: the AI Act deal should have been paired with an announcement of massive funding – in the dimension of billions of euros – from the EU and the Member States collectively for AI research and deployment: in compute, chip infrastructure, and talent retention. Only in this way can we secure strategic independence in a key technology of the 21st century and prevent the same geostrategic dependencies that brought Europe to the brink of chaos in the field of oil and gas supply. Europe is lagging far behind when it comes to cutting-edge AI model production – with only very few exceptions – and this is clearly becoming a geostrategic problem in the current international environment. Inter alia, we need a well-funded European DARPA.

To read the full analysis, see Philipp Hacker's LinkedIn post.

Image credit: Freepik