What does the EU AI Act mean for Affective Computing and Emotion AI?

This article is written by Michel Valstar on his LinkedIn blog.

The EU AI Act will heavily affect the innovation and commercialisation of affective computing in the EU. Despite talk of measures to support innovation, the Act will seriously stifle innovation in this area, hitting SMEs and startups particularly hard. The Act has been adopted by the EU and is now being implemented by its member states, which means that very soon providers of Affective Computing and Emotion AI systems will have to comply with its stipulations in the EU. The Act defines prohibited, high-risk and low-risk AI systems, with pretty onerous obligations for providers of high-risk systems and relatively few obligations for low-risk systems.

The AI Act is very comprehensive. Its definition of AI is very broad, covering not only machine learning systems but also expert systems and indeed any system that uses statistics to make a prediction. Interestingly, and relevantly for you, reader, it singles out two areas that matter greatly to practitioners of Affective Computing. The first is biometric identification, i.e. the recognition of a natural person based on data from their body, including the face and the voice. This is unsurprising given the EU’s long history of protecting individual rights and protecting individuals from the state. The second, a bit more surprisingly, is emotion recognition: emotion recognition systems are singled out and mentioned frequently throughout the document (ten times in the AI Act and twice in the annexes; ‘gender’, by comparison, is mentioned only once).

It is significant that emotion recognition is singled out by the AI Act. In particular, the Act prohibits the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer the emotions of a natural person in the areas of the workplace and education institutions, except where the use of the AI system is intended for medical or safety reasons. Also, ANY emotion recognition system will be high risk.

An emotion recognition system is defined as an “AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. Biometric data, in turn, is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data”. Note that an earlier draft of the Act added that biometric data had to “allow or confirm the unique identification of that natural person”, which is the usual definition of biometric data. In the latest (final?) version of the Act, biometric data is no longer biometric in the usual sense, in that it does not have to be data that can be used to identify a natural person. I see why this was done, but it will be very confusing for practitioners.

Now, that being said, if you’re an affective computing practitioner, you may take a slightly different view of what an emotion recognition system is. The definition in the Act is:

The notion of emotion recognition system for the purpose of this regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. This refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue. It refers for example to systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. It does also not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. These expressions can be basic facial expressions such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, for example a raised voice or whispering.

Whenever you need this much text to define a legal concept, and you feel the need to include examples of things that are included or excluded because you worry the definition isn’t clear otherwise, you know you’re in trouble. Excluding physical states appears unproblematic, until you realise that emotion is itself a physical state. And excluding what we call Behaviour Primitives, such as frowns or smiles, unless they are used to infer emotions, just creates more loopholes. At any rate, not every affective computing system will be an emotion recognition system, and you’ll have to apply some judgment to determine whether yours is one or not.
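The definition and its carve-outs can be read as a small decision rule. As a purely illustrative sketch of that reading (the field names and the helper below are my own inventions, not terms from the Act, and this is emphatically not legal advice), a practitioner’s first-pass self-check might look like:

```python
from dataclasses import dataclass


@dataclass
class SystemProfile:
    """Hypothetical checklist for a system under assessment.

    All field names are illustrative, not terms taken from the Act.
    """
    infers_emotions_or_intentions: bool  # e.g. anger, contempt, amusement
    uses_biometric_data: bool            # face, voice, gait, physiology
    physical_states_only: bool           # only pain/fatigue-style states
    raw_expressions_only: bool           # detects smiles/frowns without mapping them to emotions


def looks_like_emotion_recognition(profile: SystemProfile) -> bool:
    """First-pass reading of the Act's definition; judgment (and a lawyer) still required."""
    if not (profile.infers_emotions_or_intentions and profile.uses_biometric_data):
        return False  # outside the basic scope of the definition
    if profile.physical_states_only:
        return False  # physical states such as pain or fatigue are excluded
    if profile.raw_expressions_only:
        return False  # mere detection of readily apparent expressions is excluded
    return True


# A pilot-fatigue monitor falls under the physical-state exclusion.
fatigue_monitor = SystemProfile(True, True, True, False)
# A smile detector that never infers an emotion is also excluded.
smile_counter = SystemProfile(False, True, False, True)
# A system mapping facial expressions to, say, contempt is in scope.
mood_inferrer = SystemProfile(True, True, False, False)

print(looks_like_emotion_recognition(fatigue_monitor))  # False
print(looks_like_emotion_recognition(smile_counter))    # False
print(looks_like_emotion_recognition(mood_inferrer))    # True
```

Even this toy version shows where the judgment calls live: deciding whether a system “only” targets physical states, or whether its expression detection is ever “used for identifying or inferring emotions”, is exactly where the loopholes open up.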

Ultimately, I think what this update to the EU AI Act tried to do is to protect people from discriminatory outcomes of emotion recognition systems in the workplace and in education. In my reading, emotion recognition system applications that are allowed include:

  • Recognising when a driver is emotionally distracted and taking a safety measure based on that – regardless of whether this is a private driver or an employee
  • Recognising a medical condition using an emotion recognition system in a driver or vehicle occupant
  • Detecting pain, fatigue, or depression in a remote operator of automated vehicles

What I think is not allowed would be to:

  • Train a doctor to be more empathetic or to have a better bedside manner using emotion recognition systems
  • Assess the performance of an employee using an emotion recognition system
  • Assess the performance of a job candidate using an emotion recognition system

Again, the examples above still need to comply with GDPR and other relevant regulations and laws. From my reading of the objections to emotion recognition systems (definition 26c), I have an inkling that prohibiting the use of emotion recognition systems to train your workforce is an unintended consequence – time will tell.

So, ALL affective computing systems now come with specific transparency obligations (set out in Title IV), which basically means that any system that uses affective computing must inform the user of that fact, so you cannot put an emotion recognition system in a product without telling the user that it’s there. And THE VAST MAJORITY of affective computing systems will be high risk.

I’m an academic researcher. Surely this doesn’t affect me?

The bad news for academic researchers is that the EU AI Act does appear to cover open-source software that they make available for other researchers to use, even if that is done for free. The crucial aspect here is that you have to comply if you are a provider, which is defined as “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge”. Slightly confusingly, “placing on the market” is a concept that comes from something called exhaustion of rights and is particularly relevant to IP rights.

The idea is that once goods have been “placed on the market” (in this case in the EEA), the purchaser is free to deal with them as they like: the IP rights in the product are exhausted and cannot then be enforced against a later purchaser. This is particularly important for parallel imports around the EU and is fundamental to the concept of freedom of movement of goods.

In the 1990s there were a number of cases involving Levi’s jeans sold in Tesco, perfume, and sunglasses, which confirmed these principles for trade marks (in the EU, when the UK was still a member). In fact, most of the case law in this area relates to parallel imports of pharmaceutical products. It is the same principle that means you can resell your old iPhone without Apple suing you for patent infringement.

The position is slightly more complicated for software (and by extension AI systems) because there is no tangible object at all to which the IP rights belong. In UsedSoft v Oracle (Case C-128/11), the Court of Justice of the European Union held that granting an indefinite non-exclusive licence to software for a fee also amounted to placing on the market, and therefore exhausted Oracle’s rights in that copy of its software.

A number of US cases, including LifeScan Scotland v Shasta Technologies, state that the fact that you have given something away instead of selling it is not an argument for saying your IP rights should not now be exhausted. The US also recognises that open-source software can confer non-monetary benefits on the distributor.

This is just an opinion, and there does not appear to be settled law on this point in the EU. It may be clarified as the Act’s implementation proceeds over the next few months.

However, “placing on the market” would not include sharing a system between two academic research groups for research purposes, where the system is kept confidential. So collaborations between academic groups would remain possible, but you would be liable to uphold the obligations of the AI Act if you make your source code or AI systems publicly available.

I might be wrong here, and if so, I would really welcome it if you could write to me to explain why I’m wrong, so I can update this article.

To an extent, it makes sense that open-source systems are included. You may not intend to cause harm, but if you are a researcher who builds, say, an emotion recognition system to help people practise for a job interview, and the general public starts to actually use it to practise for job interviews, then you have put the general public at much the same risk as a commercial provider would. This may be a problem if you are seeking to create impact with your research – doing so will make you a provider, and providers must comply with a lot of onerous obligations.


Image credit: Image by gstudioimagen on Freepik
