A vision for the AI Office: Rethinking digital governance in the EU

This article has been co-authored by Kai Zenner, Philipp Hacker and Sebastian Hallensleben for Euractiv.

Spearheading the implementation of the world’s first comprehensive legislation on Artificial Intelligence (AI), the AI Office requires robust leadership and an innovative structure that mirrors the dynamism of AI, write Philipp Hacker, Sebastian Hallensleben, and Kai Zenner.

Dr Philipp Hacker holds the chair for law and ethics of the digital society at the European New School of Digital Studies at European University Viadrina Frankfurt. Sebastian Hallensleben is a director at German standards association VDE and chair of JTC 21 at EU standardisation body CEN/CENELEC. Kai Zenner is head of office and digital policy adviser of MEP Axel Voss (European People’s Party Group) and was a technical negotiator on the AI Act.

The office faces a difficult mission, an almost endless list of responsibilities, harsh deadlines, and a limited budget. But its vast potential makes overcoming these obstacles worthwhile.

Starting as an internal Commission structure, it could become the model or nucleus of a fully independent “Digital Implementation & Enforcement Agency,” bundling competencies outside of the Commission and across a variety of digital regulations.

Such an entity would not only streamline digital governance across platforms, data, and AI, but also enhance the EU’s capacity to manage the complexities of the digital age, strengthening its international leadership.

To set the course towards this vision, the AI Office should already be designed with a clear and strategic structure under firm leadership, allowing its excellent staff to perform their tasks in an agile, efficient and collaborative manner.

A comprehensive structure

The structure of the AI Office should reflect the multi-dimensional nature of AI itself, comprising five specialised units, each targeting a different aspect of AI governance, supplemented by input from external experts and advisors.

A “Trust and Safety Unit” would focus on funding and developing mechanisms for trust and safety in the development and use of AI, as well as on the broader task of curbing the misuse of AI, particularly for disinformation.

An “Innovation Excellence Unit” would ensure the coherent application of the EU AI Act across various sectors and set up incentives to promote the development and deployment of AI.

An “International Cooperation Unit” would position the EU as a leader in global AI policy discussions and work towards alignment with international partners and governmental organizations such as the OECD and the UN.

A “Research & Foresight Unit” would act as a think tank, providing insights into AI market trends and emerging risks as well as technological capabilities.

Finally, a “Technical Support Unit” could horizontally underpin the other four Units with relevant advice as well as technical measures, including by commissioning external technical research and supplementing projects of the other Units with scientific advisors.

The right people

Attracting and retaining top talent is critical for the AI Office’s success. The Commission should focus on creating an appealing work environment that prioritizes employee agency and development, while valuing skills and diverse experiences over formal qualifications.

Demographic and cultural diversity play an important role. In order to attract a significant number of technical AI experts from outside the EU institutions, it is paramount to offer flexible and remote work arrangements, competitive salaries, and significant freedoms in how to achieve certain tasks.

A well-respected leadership duo that combines deep institutional knowledge with cutting-edge AI expertise could equip the AI Office to steer this structure and navigate the complexities of global digital policy. This could be supplemented with external scientific and technical AI governance advisors to ensure that the EU AI Office continuously benefits from state-of-the-art expertise.

Operational pillars

To maintain that talent and perform at its best, the AI Office needs to break out of the traditional institutional mold in operational values.

The office should emphasise operational agility and an experimental workspace. Like a successful start-up, it must encourage a culture of minimal bureaucracy and high autonomy, allowing for creative solutions and innovations. Small, self-organised, agile teams that work in a project-based manner and define the principles and structures of their own work, combined with collaboration tools and AI testing spaces, are best suited.

Obtaining the required cutting-edge hardware and software, and giving AI Office employees access to sufficient computing power and state-of-the-art AI components, will likely require new approaches to public procurement.

A new kind of openness

Finally, transparency and accessibility must be pillars of the AI Office’s philosophy. Regular interactions should not only happen with other departments of the Commission and other EU institutions, but with EU citizens as well as representatives from civil society, academia and SMEs.

Staff hired specifically for stakeholder engagement appear necessary. Town hall meetings and collaboration rooms can also ensure that the AI Office remains open and responsive to public input and scrutiny, beyond the convening of the advisory forum or the scientific panel foreseen in the AI Act.

A whistleblower mechanism could complement these efforts as an additional channel for reporting incidents to the AI Office. Finally, the new structure can draw on existing and future infrastructures, such as the Union testing facilities, as well as on start-ups and SMEs providing relevant services to other AI Safety Institutes and AI companies.

The AI Office is not just a regulatory body. It could become a blueprint for digital governance in the EU and internationally. Combining visionary structure, excellent talent, agility and transparency, it could lead the EU through the green and digital transitions, ensuring that AI technologies are harnessed in a manner that is safe and equitable.

The eventual establishment of a “Digital Implementation & Enforcement Agency,” funded through the 2028 EU budget, would further solidify this ambition and create the necessary expertise and independence to navigate the complex landscape of digital markets.


