📣 Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond
Author: Sandra Wachter.
Abstract: Predictive and generative artificial intelligence (AI) have become integral parts of our lives through their use in highly impactful decisions. AI systems are already widely deployed, for example in employment, healthcare, insurance, finance, education, public administration, and criminal justice. Yet the severe ethical issues these systems raise, such as bias and discrimination, privacy invasiveness, opacity, and environmental costs, are well known. Generative AI (GAI) produces hallucinations and inaccurate or harmful content, which can lead to misinformation, disinformation, and the erosion of scientific knowledge. The Artificial Intelligence Act (AIA), the Product Liability Directive, and the Artificial Intelligence Liability Directive reflect Europe’s attempt to curb some of these issues. Because the legal reach of these policies extends far beyond Europe, their impact on the United States and the rest of the world cannot be overstated.
In this Essay, I show how strong lobbying efforts by big tech companies and member states were unfortunately able to water down much of the AIA. An overreliance on self-regulation, self-certification, weak oversight and investigatory mechanisms, and far-reaching exceptions for both the public and private sectors are the product of this lobbying. Next, I reveal similar enforcement limitations in the liability frameworks, which focus on material harm while ignoring immaterial, monetary, and societal harms, such as bias, hallucinations, and financial losses caused by faulty AI products. Lastly, I explore how these loopholes can be closed to create a framework that effectively guards against the novel risks AI poses in the European Union, the United States, and beyond.