What are the risks from Artificial Intelligence?

Authors: Peter Slattery and colleagues. The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper […]

Top 10 operational impacts of the EU AI Act

Authors: Uzma Chaudhry, Ashley Casovan, and Joe Jones for the IAPP. On 12 July 2024, the final text of the EU AI Act was published in the Official Journal of the European Union. The next step for the AI Act is its entry into force 20 days after publication, […]

The AI Act: responsibilities of the European Commission (AI Office)

This article was written by Kai Zenner on his webpage. Since the technical negotiations on the AI Act were concluded in January 2024, I have heard very different numbers and deadlines when it comes to secondary legislation as well as other implementing and enforcement tasks for […]

Article Alert: Risk thresholds for frontier AI

Authors: Leonie Koessler, Jonas Schuett, and Markus Anderljung. Abstract: Frontier artificial intelligence (AI) systems could pose increasing risks to public safety and security. But what level of risk is acceptable? One increasingly popular approach is to define capability thresholds, which describe AI capabilities beyond which an AI system […]

Article Alert: Do large language models have a legal duty to tell the truth?

Authors: Sandra Wachter, Brent Mittelstadt, and Chris Russell. Abstract: Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and shared social truth in democratic societies. LLMs produce responses […]

AI Accountability Framework

The Information Technology Industry Council (ITI) has released its AI Accountability Framework, the third product of its AI Futures Initiative. The risk-based Framework represents a consensus set of practices that actors across the tech ecosystem believe are critical to advancing responsible AI development and deployment, and it reinforces the notion that responsibility […]

Research Alert: A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities

Authors: Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal, and Luciano Floridi. Abstract: Regulation is nothing without enforcement. This holds particularly for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains […]

Global AI Regulation Tracker: Interactive Map

Raymond Sun, a technology lawyer, developer, and content creator, has developed an interactive world map that tracks AI law, regulation, and policy developments around the world. Simply click on a region (or use the search bar) to view its profile. Other features are also available to support your research […]

A vision for the AI Office: Rethinking digital governance in the EU

This article was co-authored by Kai Zenner, Philipp Hacker, and Sebastian Hallensleben for Euractiv. Spearheading the implementation of the world’s first comprehensive legislation on Artificial Intelligence (AI), the AI Office requires robust leadership and an innovative structure that mirrors the dynamism of […]