WS26 • WORKSHOP SUMMARY

MEET THE AUTHOR SERIES • 17 DECEMBER 2018

Algorithmic Discrimination Under European Law


Summary by Alessandra Calvi, Brussels Privacy Hub, VUB



On 17 December 2018, the Brussels Privacy Hub hosted the sixth event in the Meet the Author Series, on “Algorithmic Discrimination under EU Law”. This time, our guest Philipp Hacker (Humboldt University of Berlin / WZB Berlin Social Science Center) presented his article “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law” 1. He debated the outcomes of his work with Paul Nemitz (European Commission), Prof. Gloria González Fuster (Brussels Privacy Hub / Law, Science, Technology and Society (LSTS) research group) and Hielke Hijmans (Brussels Privacy Hub), who acted as moderator.


Hielke Hijmans opened the event by recalling the success of the Meet the Author Series and by introducing the speakers. He then passed the floor to Philipp Hacker, who briefly presented his paper. Hacker outlined the paper’s three main observations: firstly, that the (il)legality of algorithmic discrimination depended on the source of the bias (biased training data or a differential distribution of characteristics across groups); secondly, that limited access to data and to algorithmic models made it very difficult for alleged victims of biased algorithms to bring complaints before courts; and thirdly, that these shortcomings could be addressed via the enforcement tools provided by the General Data Protection Regulation (GDPR). He proposed treating algorithmic discrimination as a violation of the fair processing of data. Furthermore, as an extension of the concept of data protection by design, he suggested the implementation of an “equal protection by design” principle.


Following this introduction of the paper, Hielke Hijmans presented some key questions for the debate. These questions related to the following subjects:

The potential of artificial intelligence (AI) decision-making to eliminate human bias. The paper states that AI threatens equality and may lead to discrimination; on the other hand, it is argued that AI can be a good instrument for limiting the effect of human biases.

The risks of profiling. In a recent speech on data protection case law, the President of the CJEU (Koen Lenaerts) emphasised the risks of profiling in the context of digital developments as a main threat to human rights. How should this be assessed?

Non-discrimination principles as a potential standard for defining fairness in an AI context, fairness being an essential foundation of data protection.

The importance of data protection instruments, such as the transparency provisions requiring data subjects to be informed about the logic involved in algorithmic decision-making, for achieving the objectives of non-discrimination law.

Possible synergies between enforcement authorities. How could such cooperation be organised, and do differences in institutional frameworks present an obstacle to it?


The first discussant, Paul Nemitz, appreciated the author’s approach of maximising the potential of current legislation by combining different branches of law, which could effectively guide judges in their interpretation. He urged the EU institutions to strive for the coherence of the EU legal system in both its hierarchical and its horizontal relations. He agreed with the author that anti-discrimination law was equipped with limited means, and wondered whether any limits on the applicability of data protection tools in other branches of EU law would stand in the way of the author’s combined approach. In this context, a reference to competition law was made.


He supported the principle of “equal protection by design” or “anti-discrimination by design”, as proposed by Hacker, and suggested introducing a principle of “democracy, rule of law, and fundamental rights by design”. He invited the audience to reflect on the paradigm of machine learning: machine learning delivers results based on past situations, which contrasts with the Kantian societal idea that solutions should be based not on facts but on values. In other words, solutions should reflect not what the world is, but how it should be.


The second discussant, Gloria González Fuster, warned against the existence of “false friends” among different branches of EU law, stating that the meanings of “accountability”, “transparency” and “fairness” in the GDPR are specific, and that these terms therefore cannot be uncritically transposed to other areas of law. What might suggest parallels and a certain complementarity could, in reality, hide a gap. She contested the possibility of reducing discriminatory processing to a violation of the “fairness” of processing, as the notion of unfairness in automated decision-making under the GDPR is not always related to discrimination. For example, where personal data are inaccurate, improperly collected or not meaningful, the processing may be in tension with fairness without necessarily being discriminatory.


She acknowledged a relationship between discrimination and the special categories of data under the GDPR, which are in principle more strongly protected because they relate to grounds carrying a heightened risk of discrimination. Nevertheless, the special categories of data under the GDPR only loosely correspond to the grounds listed in Article 21 of the Charter of Fundamental Rights of the European Union (CFR): whereas Article 21 provides an open-ended list of potentially discriminatory grounds, the GDPR contains a closed list that does not encompass, for instance, gender-based discrimination. Finally, she highlighted the importance of preventing discrimination rather than taking action a posteriori.


Finally, the author was given the opportunity to respond to the comments received. Regarding the comment that data protection concepts and anti-discrimination law concepts might not be easily interchangeable, Hacker reiterated that concepts belonging to other branches of law could be used where there are sufficient links between them. In the case of fairness, this was supported by the Article 29 Working Party (WP29), which also suggested a relationship between fairness in data protection law and discrimination.


Hacker then urged companies to be more proactive and to involve the people affected by a technology in its testing phase. He warned against the risk of gaps under EU data protection law, since anonymised training data for algorithms would fall outside the scope of the GDPR. He suggested the creation of legal mechanisms to ensure that training data are kept up to date, placing responsibility on companies.


A lively Q&A session with the audience followed. Attendees expressed their concern about the difficulty of understanding the logic involved in algorithmic decision-making and proposed involving NGOs in the auditing of automated decisions. Standardisation could also play an important role, it was argued. Forms of civil liability for violations could be introduced but, given the variety of parties involved in the process, an approach similar to product liability should apply, it was suggested. Ideas were also raised to strengthen cooperation between data protection and equality regulators or, going further, to create specialised bodies, composed of data protection and equality specialists, in charge of dealing with algorithmic discrimination.


Hijmans concluded the debate by expressing his appreciation for the different ideas raised and by inviting the audience to the next Meet the Author event, the subject and date of which will be communicated in due course.


1 Hacker, Philipp, ‘Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU Law’, Common Market Law Review 55 (2018), pp. 1143–1186.






Connect with us


Brussels Privacy Hub

Law Science Technology & Society (LSTS)

Vrije Universiteit Brussel

Pleinlaan 2 • 1050 Brussels

Belgium

info@brusselsprivacyhub.eu

@privacyhub_bru


Copyright © Brussels Privacy Hub