A judgment of the Court of Justice of the EU endangers “scoring” algorithms

This article was originally published in French on Euractiv France by Théophane Hartmann. The following text is a translation of the original and may contain errors.

The Court of Justice of the European Union (CJEU) ruled on Thursday, December 7, that decision-making based on rating systems that use personal data is illegal. This judgment could have significant repercussions for social security funds and credit institutions.

Years after the entry into force of the General Data Protection Regulation (GDPR), the Court of Justice of the European Union (CJEU) has issued its first judgment on the regulation’s article on automated individual decision-making (Article 22).

“This decision of the CJEU clarifies the fact that the GDPR contains a ban on subjecting people to automated decision-making that has a significant impact on them,” Gabriela Zanfir-Fortuna, Vice-President of Privacy Protection at the Future of Privacy Forum, told Euractiv.

Between 2018 and 2021, the Netherlands was gripped by a scandal – one that led to the resignation of Mark Rutte’s third government – over a faulty rating algorithm that led tax authorities to wrongly accuse thousands of people of defrauding a childcare benefit scheme.

On Thursday, the Court ruled that any type of automated rating is prohibited if it has a significant impact on people’s lives. The verdict concerns SCHUFA, Germany’s largest private credit agency, which assigns people a score based on their creditworthiness.

According to the judgment, SCHUFA’s “scoring” violates the GDPR if SCHUFA’s customers – such as banks – give it a “decisive” role in their contractual decisions.

This decision could have important consequences. In France, the Caisse nationale des allocations familiales (CNAF) has been using an automated risk “scoring” algorithm since 2010, on the basis of which home checks are triggered when fraud is suspected.

Le Monde and Lighthouse Reports reported that the CNAF’s “data mining” algorithm analyzes and scores 13.8 million households each month in order to prioritize checks.

The CNAF algorithm draws on about forty criteria based on personal data, each assigned a risk coefficient, and scores every beneficiary between 0 and 1 each month. The closer a beneficiary’s final score is to 1, the more likely they are to receive a home inspection.
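
CNAF’s actual model is not public, but the mechanism described above – weighted criteria combined into a score between 0 and 1 – can be illustrated with a minimal sketch. The criteria, coefficients, and the logistic normalization below are hypothetical assumptions for illustration, not CNAF’s real method.

```python
import math

# Hypothetical criteria and risk coefficients for illustration only;
# CNAF's real ~40 criteria and their weights are not public.
RISK_COEFFICIENTS = {
    "single_parent": 0.8,
    "income_below_threshold": 0.6,
    "recent_address_change": 0.4,
}

def risk_score(beneficiary: dict[str, float]) -> float:
    """Weight each criterion and squash the sum into the (0, 1) range."""
    weighted_sum = sum(
        coeff * beneficiary.get(criterion, 0.0)
        for criterion, coeff in RISK_COEFFICIENTS.items()
    )
    # Logistic normalization is an assumption; how the real model
    # maps its weighted criteria to the 0-1 range is unknown.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Beneficiaries whose score is closest to 1 would be prioritized
# for a home check.
print(round(risk_score({"single_parent": 1.0, "income_below_threshold": 1.0}), 2))
```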

Bastien Le Querrec, a lawyer at La Quadrature du Net, told Euractiv: “Since the CNAF uses an automated score for all its beneficiaries, and given the predominant weight of this score in the rest of the process, this score has, in La Quadrature du Net’s view, significant implications for people’s lives. It should therefore fall within the scope of the CJEU decision, that is, be prohibited, unless a French law allows it in strict compliance with the GDPR.”

In other words, the “scoring” system would be illegal unless it were specifically authorized by French law and strictly complied with EU data protection rules.

Philippe Latombe, a French centrist MP (MoDem) and member of the CNIL, told Euractiv that he sees the CNAF algorithm as a risk assessment system that filters people on the basis of their data – which happens to be personal data – because of the organization’s objective: issuing allowances to people in need.

“While each criterion taken separately may seem logical for combating fraud, the sum of the criteria can be discriminatory if they are correlated,” continued Mr. Latombe.

Environmentalist MP Aurélien Taché commented: “As usual, [the government] fights the poor rather than poverty, and with social ‘scoring’ it no longer even respects the most basic principles of the defense of freedoms and the right to privacy.”

Restrictions on “scoring” algorithms

The GDPR allows public and private organizations to use “data mining” algorithms in only three cases: the explicit consent of the individuals concerned, contractual necessity, or a legal obligation.

Ms. Zanfir-Fortuna explained that the Court’s decision rules out organizations’ “legitimate interest” – such as companies’ commercial interests – as a sufficient legal basis for “scoring” that uses personal data.

In addition, if a government wishes to give a legal basis to law enforcement authorities for the use of “scoring” algorithms, national laws will have to base their legitimacy on EU laws and the EU Charter of Fundamental Rights.

These algorithms must be “necessary in a democratic society and meet the criterion of proportionality,” said Ms. Zanfir-Fortuna. Feeding scoring algorithms with personal data is therefore now much more restricted in the EU.

Implications

Mr. Latombe said that the CNAF’s situation “raises the question of the algorithmic transparency of Parcoursup”, the French government portal that allocates places in universities and other higher-education programs.

The La Quadrature du Net website also indicates that the Assurance Maladie (health insurance), the Assurance Vieillesse (old-age insurance), the Mutualité Sociale Agricole and Pôle Emploi use similar “scoring” algorithms, whose lawfulness could now be called into question in light of this ruling.

Under the European AI Act, the forthcoming flagship EU law regulating artificial intelligence, AI systems intended to determine access to public services will be considered “high-risk” and subject to a strict regime of risk management and data governance.

Image credit: Photo by Tingey Injury Law Firm on Unsplash
