


How the EU Council is rewriting the AI Act


Gianclaudio Malgieri and Vincenzo Tiani, 6 December 2021


On 21 April 2021, the European Commission published its proposal for a regulation on artificial intelligence, the AI Act. Seven months later, the Council of the EU, under the Slovenian presidency, presented a compromise text introducing some noteworthy changes and anticipating further discussion points. In sum, the proposed changes have both strengths and weaknesses, and also leave some gaps.


First of all, the definition of AI systems (which had been criticised as too broad in the European Commission proposal) has been narrowed. The Council proposes to exclude from the definition all the “more traditional software systems” which cannot “achieve a given set of human defined objectives by learning, reasoning or modelling”. In addition, the Council version excludes all “general purpose AI systems” from the scope of the AIA. In other terms, if a general-purpose AI system (as many developed by big tech companies) merely has the potential to perform risky practices, but is not (yet) deployed in those risky contexts, it is not per se subject to the AIA.


On the one hand, these amendments could bring more clarity and fewer burdens to AI developers; on the other hand, they could prove to be a slippery slope towards protection gaps and fewer design duties.

 

A risk-based approach

Since AI can be applied in every field and therefore cannot be regulated on a sector-specific basis, the Commission, after the stakeholder consultations that followed the AI White Paper published in February 2020, opted for a risk-based approach. The AI Act therefore provides for four degrees of risk: unacceptable risk, high risk, limited risk, and minimal risk.

 

A bigger scope for blacklisted AI practices

One of the most debated provisions is Article 5, which identifies the “blacklist” of AI practices, i.e., the category of applications where the risk is considered unacceptable, with some exceptions. The Council has proposed several changes on this point.


First, the original Commission proposal prohibited social scoring, defined as public bodies using (or delegating the use of) AI systems to evaluate or classify the trustworthiness of individuals on the basis of their social behaviour or personal characteristics, with social scores leading to decontextualised or unjustifiably detrimental results. The Council now proposes to extend the prohibition to private actors, and not merely to public authorities (Art. 5.1.c), and to treat as social scoring any kind of evaluation or classification of individuals, not only that based on “trustworthiness”.


Second, the Commission’s proposal prohibited AI practices that could produce physical or psychological harm through subliminal manipulation or through the exploitation of individual vulnerabilities based on age (to protect especially the elderly and children) or disability. The Council now proposes to make more explicit that the AI user’s “intention” to manipulate behaviour is not required for the prohibition to apply: the mere effect of distorting behaviour (producing physical or psychological harm) is prohibited. More importantly, the Council proposes to prohibit not only the exploitation of age or disability vulnerabilities, but also the exploitation of vulnerabilities based on the individual’s social or economic situation.


Indeed, many commentators had argued that the Commission proposal was too soft on private actors that commercially manipulate consumers’ behaviour. These Council amendments (prohibiting detrimental social scoring even in private contexts, explicitly removing the need to prove an “intention” of behavioural manipulation, and adopting a wider and more commercially relevant definition of vulnerability) may be a first response to those criticisms.


On the issue of real-time biometric recognition, the Commission’s proposal calls for a general ban on its use by law enforcement in public spaces with some exceptions.


The Council introduced several amendments:

  • The attribute 'remote' has been removed, thus extending the scope of the prohibition beyond mere surveillance of large areas.
  • Threats to critical infrastructure and to health have been introduced as exceptions, and the requirement that the threat be imminent has been removed, thus extending the scope of its legitimate use (Art. 5.1.d.ii).


In urgent cases, if prior authorisation cannot be obtained from the judicial authority, it must now be requested without delay during the use of the biometric recognition system, and no longer merely afterwards. If the judicial authority then refuses the authorisation, law enforcement must immediately stop the operation (Art. 5.3).


The Council considered that, since national security activities are among the prerogatives of the Member States, applications of AI used exclusively in this domain should not fall within the scope of the Regulation (Art. 2.3; Recital 12). This choice may have undesirable effects on the protection of fundamental rights. For example, a biometric recognition system developed exclusively for national security purposes would also fall outside the already narrow perimeter of Art. 5. The risk is therefore that the safeguards required in the development of these technologies are not taken into account from the outset, potentially increasing the margin of error in the implementation phase.


Two other noteworthy changes concern Annex III, which lists examples of high-risk uses of AI:

  • Biometric identification (and no longer also categorization): Annex III covers AI systems “intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons without their agreement”. The Council has deleted the term “remote” (see above), but also “categorization”, leaving only the purpose of “identification”. This means that AI systems that categorize suspected criminals (e.g., on the basis of their ethnicity, risk of committing new crimes, etc.) will be considered high-risk only if they also identify individuals (or if they fall within another high-risk category anyway).
  • The addition of the term "agreement" could exclude from the high-risk category cases in which a person's mere implicit conduct, even if not informed, is treated as valid agreement. For this reason, and also to ensure greater consistency with the GDPR, it might be appropriate to replace the word 'agreement' with 'consent'.


In addition, law enforcement uses of AI to analyse large amounts of data from different sources for investigative purposes, in order to find patterns in crime analysis, have been removed from the list of “high-risk AI” (Annex III, 6(g)). Here too, the non-application of the safeguards for high-risk AI could have a negative effect on fundamental rights, by leading to false correlations and unnecessary arrests.

 

The exception for research


The Council also introduces a broad exception for “AI systems and their outputs used for the sole purpose of research and development” (Art. 2.6 and 2.7; Recital 12a). Here again, although with the necessary differences from the law enforcement case, this choice may require further consideration. In particular, research practices to test AI (e.g., emotion recognition systems, or biometric categorization based on pseudoscientific premises) might have serious effects on fundamental rights and freedoms, especially if minimum legal and ethical requirements are not respected within a research project. An alternative that would guarantee the freedom to conduct research, but within a safer perimeter, could be to facilitate access to regulatory sandboxes in which the rules can be suspended for research purposes.

 

The Commission should update the risk list, but on a regular basis


One last relevant change concerns the updating of the high-risk AI list. In the original proposal, the Commission had the possibility to amend that list (e.g., by adding new “high-risk” practices), taking certain parameters into account. The Council now asks that the Commission mandatorily update that list every 24 months.

While many commentators had suggested that other (fully independent) bodies, such as the European Data Protection Board or the European AI Board, should be able to amend that list, this new timing requirement is certainly a positive change.

 

Other unresolved issues


Although the general approach is broadly shared, in its Progress Report of 19 November 2021 the Council raised further questions, such as what constitutes high risk. The report suggests supplementing the proposal with some practical guidance on data quality and on the transparency of the information to be provided to users.

 

Many criticisms of the Commission proposal remain unaddressed. To mention just one example, emotion recognition and biometric categorization AI systems are still not even considered “high risk”: only transparency duties would be required for them.


Moreover, the lack of individual rights and remedies in the proposal (together with the lack of any duty to carry out participatory design of AI or consultations) remains a persistent problem.


In addition, the requirement that training, validation and testing datasets be complete and free of errors appears too stringent. For the Council, while this is an outcome to aspire to, setting the bar so high seems excessive. The risk that only larger companies can afford the costs of compliance, to the detriment of SMEs, has also been highlighted. Accordingly, many propose lowering penalties for SMEs.


Some Member States also highlighted the need to re-evaluate the chain of responsibility, since in the case of AI the value chain has much more complex and less clearly defined boundaries between the various actors. Finally, the Council is concerned that companies may find themselves in breach of regulations at the intersection of the AI Act, the GDPR and the General Product Safety Regulation, a circumstance for which it hopes the Commission will find a solution.


On the issue of liability, it is worth recalling that, precisely in view of the need to allocate civil liability, the European Commission launched a public consultation on 18 October, open until 10 January, to gather contributions from SMEs, academics, NGOs, consumer associations and Member States.

 

Next steps

This is not the Council’s final text, but it clearly introduces important new elements on some of the most debated issues, such as biometric recognition and fundamental rights. The Parliament’s work will also begin shortly: it has just appointed two co-rapporteurs, with both IMCO and LIBE as responsible committees.

 

 
