Do Foundation Model Providers Comply with the Draft EU AI Act?

Authors: Rishi Bommasani, Kevin Klyman, Daniel Zhang, and Percy Liang (Stanford University). In their analysis, the Stanford researchers evaluate foundation model providers such as OpenAI and Google for their compliance with the draft EU AI Act. To read the full analysis, please click on this link.

An AI Law for Europe

Kai Zenner, Axel Voss, Monish Darda, and Rolf Schwartman write (in German) for the Frankfurter Allgemeine Zeitung about the AI Act newly approved by the European Parliament. They write (translated from German): 14 June 2023 will be an important day for artificial intelligence (AI) in Europe. After 18 months of negotiations, […]

Research Article: The European Parliament’s AI Act – Should we call it progress?

In their new journal article, Meeri Haataja and Joanna Bryson write: The European Union (EU) has been leading the world with its influential digital regulation. However, the EU’s legislative process is sufficiently complex and careful that some national legislation clearly influenced by the […]

Book Alert: Artificial Intelligence Law – Between Sectoral Rules and Comprehensive Regime

Authors: Céline Castets-Renard and Jessica Eynard. Artificial intelligence technologies are spreading across all aspects of social life: from automated decision-making tools used by administrations to facial recognition, personal assistants, recruitment software, and medical diagnostic aids, no sector of activity escapes their deployment. While […]

UN backs global AI watchdog: Urgent calls for governance

Ilkhan Ozsevin writes for aimagazine.com about the UN proposal on “join[ing] forces with PM’s and cross-industry experts in calling for the establishment of an international AI watchdog“. This proposal is in line with the AI Transparency Institute’s advocacy over the past year for regulating AI. To […]

Sandboxing the AI Act

Testing the AI Act Proposal with Europe’s Future Unicorns: DIGITALEUROPE is delighted to present this report from our pre-regulatory sandboxing initiative, which aimed to evaluate the proposed AI Act and its potential implications for European start-ups and SMEs. In our manifesto for the current legislative term, A stronger digital Europe, we emphasised the need for agile […]

Trust & Tech Governance

Trust & Tech Governance: Towards a more engaged, collaborative, communicative approach. Society Inside and the Fraunhofer Institute have compiled a framework for facilitating trust and technology governance. Please click on this link to read it in full.

Framework for Meaningful Stakeholder Involvement

Framework for Meaningful Stakeholder Involvement in the Design and Delivery of Regulation and Governance. Society Inside and the European Center for Not-for-profit Law have compiled a framework for meaningful stakeholder involvement in policymaking. Click on this link to read it in full.

Two models of AI oversight – and how things could go deeply wrong

Gary Marcus writes on his blog: The Senate hearing that I participated in a few weeks ago was in many ways the highlight of my career. I was thrilled by what I saw of the Senate that day: genuine interest, and genuine humility. […]

What should the regulation of generative AI look like?

This commentary, published by Brookings and co-authored by Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler, envisions the different ways generative AI could be regulated. They write: We are living in a time of unprecedented advancements in generative artificial intelligence (AI), which are AI systems that […]