AI Act Trilogue Topics: Open Source

Authors: Catelijne Muller and Maria Rebrean

On June 14 of this year, the legislative process of the AIA entered its final phase, the Trilogue. During this phase, the co-legislators (EP and Council) will negotiate the final text of the AIA under the brokerage of the European Commission. In this series of blog posts, we reflect on crucial, decisive and divisive topics for this process. After our earlier post on the ‘extra layer for high risk AI’, we now delve into the topic of ‘open source AI components’.

A blanket exemption for Open Source AI components?

In its negotiating position on the AIA, the EP proposes to exempt open-source AI components from the scope of the AIA, as long as these are not placed on the market or put into service as part of a prohibited AI practice, a high risk AI system or a medium risk AI system. This exemption would not apply to foundation models.

We note that a decision to add a blanket exemption for open source AI (components) should not be taken lightly, for several reasons.

No clear definition of ‘open source’ or ‘open source AI components’ exists

First of all, no clear definition of ‘open source’ exists. Open source is in fact an umbrella term for a set of licences that allow users to run, copy, distribute, study, change and improve software and data, including models. The Open Source Initiative (OSI), founded in 1998, took it upon itself to draft an Open Source Definition (OSD) and used it to create a list of OSI-approved licences. In the years that followed, as the number of these licences grew, OSI began campaigning to curb the proliferation of open source licences.

In the meantime, the variety and number of activities labelled open source grew, making it ever more difficult to pinpoint what exactly open source is. This matters even more when talking about AI, or AI components. What exactly would count as ‘open source AI components’ that would, under the EP’s proposal, be exempted from the scope of the AIA remains equally unclear.

On June 7th of this year, the OSI initiated “a multi-stakeholder process to define ‘Open Source AI’”, proving that a proper definition of ‘open source AI’ is needed but does not yet exist. Considering the complex development path of an AI system, an AI component could encompass several things, including, for example, source code, data sets, models, or training processes.[1] Without a suitable definition of open source AI components, blanketly excluding them from the scope of the AIA would create legal uncertainty and even an undesired loophole that could lead to illegitimate claims of open source status.

As the open source community expands and AI technologies increasingly rely on open source materials for their development, ensuring long-lasting and effective accountability is key to protecting the market as well as health, safety and fundamental rights. Since these are precisely the objectives of the AIA, applying it (or elements of it) to open source AI components might be the right path forward.

To read the full article, please click on this link.

Image credit: Image by Freepik
