AI Act deal: Key safeguards and dangerous loopholes
Angela Müller and Matthias Spielkamp have written this press release for AlgorithmWatch.
After a negotiation marathon in the global spotlight, EU lawmakers struck a deal late Friday night on the AI Act, the EU’s law regulating the development and use of Artificial Intelligence systems. The EU Parliament managed to introduce a number of key safeguards to improve the original draft text, but Member States still decided to leave people without protection in the situations where they would need it most.
Compared to the original draft by the European Commission, which dates back to spring 2021, EU lawmakers have introduced crucial safeguards to protect fundamental rights in the context of AI. Thanks to the intense advocacy efforts of civil society organizations, the Act now foresees a mandatory fundamental rights impact assessment and public transparency duties for deployments of high-risk AI systems – key demands that AlgorithmWatch has been fighting for over the last three years. People will also have the right to an explanation when a decision based on a high-risk AI system affects their rights, and will be able to lodge complaints about such systems.
At the same time, these big wins are weakened by major loopholes, such as the fact that AI developers themselves have a say in whether their systems count as high-risk. Also, there are various exceptions for high-risk systems used in the contexts of national security, law enforcement, and migration, where authorities can often avoid the reach of the Act’s core provisions.
Among the most contested issues in the 36 hours of final negotiations were the prohibitions of certain AI systems. While the EU Parliament had taken a clear stance on systems that are incompatible with one of the main purposes of the AI Act – the protection of fundamental rights – Member States pursued a different agenda. The final list of bans is considerably longer than in the Commission’s original proposal: it also contains a partial ban on predictive policing systems, a ban on systems that categorize people based on sensitive data (such as their political opinion or sexual orientation), and a ban on emotion recognition systems used in workplaces and in education. All these safeguards are important in protecting people from the most misguided uses of AI.
That said, EU lawmakers still decided to introduce major loopholes that let such misguided uses back in through the backdoor. AI systems used to ‘recognize’ the emotions of asylum seekers, or AI used to identify people’s faces in real time in public spaces in order to search for a crime suspect, are legalized through the loopholes and exceptions that the list of bans apparently foresees. Thus, the level of protection these bans actually provide can only be assessed once the final text is available.
The lawmakers’ deal also includes provisions regulating so-called general purpose AI systems (GPAI) and the models they are based on. Through a two-tiered approach, these obligations target mostly high-impact systems. Providers will have to assess and mitigate the systemic risks that come with them, evaluate and test their models, and report serious incidents as well as their energy efficiency.
While a deal on the AI Act has now been announced, the more technical drafts that will be written over the coming weeks will be decisive. Not only will key provisions be clarified, but many important details may still have to be agreed upon. The announced deal is mostly a political one, reached under high pressure, which suggests that some issues will still have to be resolved at what the EU calls the «technical level» – that is, in consultations among experts from the Commission, the Parliament, and the Member States. After that, both the Parliament and the Council will have to formally approve the law’s text.
Together with its civil society partners, AlgorithmWatch will keep up the pressure on lawmakers during this phase in order to avoid even further erosion of fundamental rights protections.