WS33


SUMMARY

28 NOVEMBER 2019


AI governance: theoretical and practical implications of EU “regulation” on AI


Summary by Alessandra Calvi, Researcher at VUB



On 9 October 2019, the Brussels Privacy Hub hosted a seminar entitled AI governance: theoretical and practical implications of EU “regulation” on AI. The event discussed the appropriateness of Artificial Intelligence (AI) governance measures adopted in the EU – especially the Ethics Guidelines for Trustworthy AI and the AI policy recommendations adopted by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) – to meet the concerns raised by AI. The guidelines will be piloted by public and private organisations across Europe until December 2019, with the aim of understanding, inter alia, to what extent they should be tailored to different sectors and use cases. Speakers were Nathalie Smuha (Assistant Lecturer and FWO Research Fellow at KU Leuven), Francesco Pili (Associate at Van Bael & Bellis), Daniel Schoenberger (Head of Legal Switzerland & Austria at Google Switzerland GmbH) and Daniel Leufer (Mozilla Fellow at Access Now). Joris van Hoboken (Professor of Law at VUB, LSTS) chaired the debate.


Joris van Hoboken welcomed the audience and opened the debate by asking the panellists to outline the implications of the EU regulatory approach towards AI.

Nathalie Smuha described the three components of trustworthy AI as set out by the AI HLEG, namely that it should be lawful, ethical and robust. She pointed out that there was no legal vacuum on AI, as many legal rules already apply to AI systems today, yet she warned against the risks of enforcement loopholes and legal gaps in the current framework. For this reason, the AI HLEG recommendations advised the Commission to set up an interservice group, tasked with conducting an extensive mapping of the relevant EU regulations applying to AI and identifying measures to bridge potential legal gaps. She pointed out that, most probably, this mapping would be more limited in scope than originally hoped, since the incoming European Commission President Ursula von der Leyen had pledged to present a legislative proposal for a coordinated EU approach to ethical AI within her first 100 days in office, leaving a short time window for the task.


Francesco Pili provided his perspective as a private practitioner, highlighting that the guidelines were soft law and therefore not legally binding. He even questioned whether they belong to the category of European acts at all, considering that they sit outside the EU constitutional framework and were adopted by a body (the AI HLEG) lacking democratic accountability. He suggested three possible uses of the guidelines in practice: (1) as a means of interpretation for judges, given that, although there was no legal vacuum on AI, there were still gaps to be filled; (2) as self-commitments for companies, capable of creating legal expectations and of being enforced in court, although it might be difficult for consumers to demonstrate violations; (3) as enforceable provisions, when incorporated into contractual arrangements.


Daniel Schoenberger pointed out that the term machine learning would be more appropriate than AI. He defended the idea that many applications of machine learning (e.g. in medicine or energy consumption) could change the world for the better, but noted that, since technology is not neutral, the decisions that companies like Google take can have an impact on people. He admitted that machine learning may generate problems of discrimination, privacy, liability and accountability. He presented the seven AI principles followed by Google, outlining how their approach was not far removed from the AI HLEG’s.


Daniel Leufer provided two main reasons why AI was currently not trustworthy. First, he criticised the hype around the term ‘artificial intelligence’, which was originally chosen for the funding proposal for the 1956 Dartmouth Summer Research Project. The term ‘complex information processing’ was proposed by some participants (Simon and Newell) at the conference as an alternative, but the term AI was adopted by others, such as Marvin Minsky, in part because of its marketing value. He questioned whether we would be so enthusiastic about the technology if it had been called complex information processing instead of AI. Second, he emphasised the lack of transparency around the deployment of the technology, as documented by AlgorithmWatch, an NGO tracking the use of automated decision-making in the EU. He accepted the idea that guidelines and self-regulation can be a good start towards making AI more trustworthy, but stressed that they are never sufficient. He pointed out that, since it is already the responsibility of States and companies to protect human rights, the enforcement of the existing legislative framework should be a priority. He called for a Human Rights Impact Assessment for AI.


Joris van Hoboken asked what would happen after the pilot phase ended, warning against the risk that the mapping exercise might be incomplete.


Nathalie Smuha pointed out that the pilots are aimed at improving the guidelines. She explained that there were three channels for feedback: an online survey that anyone can fill in until 1 December; deep-dive interviews conducted with organisations from different sectors; and continuous feedback that members of the AI Alliance can provide through the platform. In early 2020, all feedback would be considered by the AI HLEG to prepare a revised version of the guidelines. Next steps would depend on the new Commission. She continued by explaining that the specific challenges of the use of AI at sectoral level should be carefully considered and – while some call for AI-specific regulation – she defended the idea of not abandoning technology-neutral rules. She admitted that the real problem with AI was not just the lack of a bullet-proof legal framework, but the lack of adequate enforcement mechanisms.


Francesco Pili highlighted that the pilots were already encouraging companies to implement the guidelines and recommendations. He pointed out that the outcomes would differ depending on the legal instrument chosen by the Commission (i.e. a Regulation or a Directive) for the future approach to AI. Moreover, he pointed out the need to avoid fragmentation and market distortions. He insisted on the importance of transparency, seen also as a way to strengthen the accountability of companies.


Daniel Schoenberger defended the idea that AI should be socially beneficial, acknowledging that this was a utilitarian concept. He pointed out that the public sector lagged far behind the private sector in the use of AI, which was why discussions placed so much emphasis on the latter; still, considering the importance of the public sector, the debate should cover it as well. As to the right to transparency, he defended the idea that information should be empowering for data subjects, which is why a modular card system could be an effective way to ensure it.


Daniel Leufer criticised the fact that Google’s AI Principles do not clearly define the balance between benefits and harms, and expressed scepticism about their utilitarian framing, contrasting this with the fundamental rights framework underlying the EU’s Ethics Guidelines. He argued that, from a human rights perspective, certain harms should be ruled out by default. Furthermore, Leufer agreed that certain issues are possibly not best framed as issues with AI, but are actually continuous with non-AI situations (e.g. the main problem with a facial recognition system may lie in how its database of images was collected, a problem which has nothing specific to do with any machine learning technology it may employ). He also pointed out that there are AI-specific issues, however, such as when an AI system identifies and discriminates against a group of people with no humanly recognisable commonality, which therefore falls outside traditional protected categories.


Then, the Q&A with the audience started.


The idea of adopting a sectoral approach was advanced. It was pointed out that AI should not be treated as something entirely new, given that it builds on existing practices of surveillance, profiling and the like, and it was asked how this would be reflected in the future approach to AI. The opportunity to identify areas where decision-making should never be delegated to machines was discussed. Doubts were raised about the possibility of reconciling the exercise of data subjects’ rights with the existence of the datasets providing input to AI. The need to address transparency was highlighted. It was suggested that the Commission should tackle AI in public procurement, in order to motivate private organisations to get involved. Joris van Hoboken concluded by thanking the speakers and participants for a fruitful debate.
