Rethinking how to build guardrails for AI

A recent report jointly published by authors from Carnegie Mellon University and the Center for AI Safety reveals ways in which LLM safety measures can be bypassed, allowing the generation of harmful information at scale. Abstract: Large language models (LLMs) like ChatGPT, Bard, or Claude undergo extensive fine-tuning […]
Exploring Institutions for Global AI Governance

A new white paper by Google DeepMind investigates models and functions of international institutions that could help manage the opportunities and mitigate the risks of advanced AI. To read the full white paper, please click on this link.
FTC is investigating whether ChatGPT harms consumers

Following a complaint by the Center for AI and Digital Policy (CAIDP), the Federal Trade Commission has launched an investigation into whether OpenAI's products, primarily ChatGPT, are causing harm to consumers. CAIDP has escalated its case against OpenAI, creator of the ChatGPT AI […]
Harmonized Rules on AI and the EU AI Act

The European Parliament and the European Council have put forward a proposal laying down harmonized rules on AI (the EU AI Act) and amending certain Union legislative acts. Please click on this link to read the full proposal.
The Race to Regulate AI

The Race to Regulate AI: Why Europe Has an Edge Over America and China Author: Anu Bradford (Foreign Affairs) Anu Bradford writes for foreignaffairs.com: Artificial intelligence is taking the world by storm. ChatGPT and other new generative AI technologies have the potential to revolutionize the way people work and interact with information and each other. […]
Do Foundation Model Providers Comply with the Draft EU AI Act?

Authors: Rishi Bommasani, Kevin Klyman, Daniel Zhang, and Percy Liang (Stanford University). In their analysis, the Stanford researchers evaluate foundation model providers such as OpenAI and Google for compliance with the draft EU AI Act. To read the full analysis, please click on this link.
An AI Law for Europe

Kai Zenner, Axel Voss, Monish Darda and Rolf Schwartman write (in German) for the Frankfurter Allgemeine Zeitung about the AI Act newly approved by the European Parliament. They write (translated from German): June 14, 2023, will be an important day for artificial intelligence (AI) in Europe. After 18 months of negotiations, […]
Research Article: The European Parliament’s AI Act – Should we call it progress?

In their new journal article, Meeri Haataja and Joanna Bryson write: The European Union (EU) has been leading the world with its influential digital regulation. However, the EU’s legislative process is sufficiently complex and careful that some national legislation clearly influenced by the […]
Book Alert: Artificial Intelligence Law – Between Sectoral Rules and Comprehensive Regime

Authors: Céline Castets-Renard and Jessica Eynard. Artificial intelligence technologies are spreading across all aspects of social life: from automated decision-making tools used by administrations to facial recognition, personal assistants, recruitment software, and medical diagnostic aids, no sector of activity escapes their deployment. While […]
Sandboxing the AI Act

Testing the AI Act Proposal with Europe’s Future Unicorns DIGITALEUROPE is delighted to present this report from our pre-regulatory sandboxing initiative, which aimed to evaluate the proposed AI Act and its potential implications for European start-ups and SMEs. In our manifesto for the current legislative term, A stronger digital Europe, we emphasised the need for agile […]