An AI Law for Europe
Kai Zenner, Axel Voss, Monish Darda and Rolf Schwartmann write (in German) for the Frankfurter Allgemeine Zeitung about the AI Act, newly approved by the EU Parliament. They write (translated from German):
14 June 2023 will be an important day for artificial intelligence (AI) in Europe. After 18 months of negotiations, the European Parliament’s response to the proposed AI regulation is being put to the vote. The EU Commission’s draft was a legislative novelty: for the first time, rules on product safety and the protection of human rights were combined in order to better address the particular challenges of developing and deploying AI systems. The starting point was the realization that the growing unpredictability, opacity and complexity of current machine learning and deep learning systems are creating more and more legal gaps. At the same time, to ensure that the new regulations do not stifle European innovation in AI, the Commission also adopted a risk-based approach. The legislative proposal thus distinguishes AI technologies along a scale from low-risk to prohibited. The former, such as an AI-controlled toy car, are subject only to the general rules of the law, be it the EU Toy Directive or data protection and youth protection law. If an AI system is high-risk, the draft AI regulation subjects its providers and users to special obligations; whether this is the case is determined on a sector-by-sector basis, from education to justice. A few AI systems, such as social scoring, which classifies and evaluates people according to their social behavior, are banned outright.
– Kai Zenner
To read the full article, please click on this link.
Barry Scannell also offers an intriguing commentary on the AI Act in his LinkedIn post. He says:
It’s a big day tomorrow in the world of AI regulation! The EU Parliament is preparing to vote on its proposed text of the EU’s AI Act, which seeks to regulate high-risk AI systems and foundation models like the ones on which ChatGPT is based. An excellent article in Frankfurter Allgemeine Zeitung (unfortunately, all in German) has Axel Voss, Kai Zenner and others providing fantastic insights into the state of play of the AI Act.
Many people are wondering about timelines. If the vote passes tomorrow, the trilogue process begins, and it is hoped that the negotiations will be completed by the end of 2023. Kai Zenner notes that if an agreement is reached by Christmas 2023, the Act could come into effect on June 9, 2024, just before the EU Parliament elections. With the two-year transition period extending to mid-2026, developers and users of foundation models and high-risk AI systems need to get cracking on getting their regulatory ducks in a row.
Three years is not a long time. Many of the requirements have to be built into AI systems at the design stage, and compliance must be in place before a system enters the EU market.
Kai also noted that the EU Parliament needed 43 technical meetings and 12 meetings at the political level to discuss the 89 recitals, 85 articles and nine annexes of the AI Act. The framework is the result of 18 months of negotiation and is designed to combine rules on product safety with protections for human rights in the context of AI systems. MEP Voss points out that of the 85 articles, 82 focus on the risks of AI.
Kai notes that the compromise text secured by Parliament addresses both the context in which high-risk AI systems are deployed and the novel issues raised by ChatGPT. In addition, the scope of the AI regulation has been clarified and aligned with the more widely accepted OECD definition of AI.
MEP Voss suggests that the proposed regulation could instigate a “Brussels effect”, whereby other nations adopt the EU’s stringent legal standards for AI.
The AI Act has extra-territorial effect: no matter where you are in the world, if you want to supply the EU market with high-risk AI systems or foundation models, it will apply to you. However, Voss also notes the risk that American tech giants could exploit any legal uncertainties to their advantage.
MEP Voss suggests improvements in four key areas so that the AI regulation protects civil rights while fostering innovation: better regulation, EU-wide harmonisation, promotion of innovation, and a focus on SMEs and start-ups.
On start-ups, Voss is of the view that their innovative power is Europe’s greatest strength, but that they suffer most from onerous and expensive compliance with EU digital laws. He suggests that only with the necessary knowledge about built-in external components will European AI start-ups ultimately be able to meet the AI Act’s requirements for their products or services.
Big day tomorrow!
– Barry Scannell
Image credit: wirestock on Freepik.