There is still a lot to be done on the European Artificial Intelligence Regulation


As you may have read in reports over the last few weeks, the European Commission has released its proposed regulation on artificial intelligence, the "Artificial Intelligence Act" (AIA).

Briefly summarised, the paper aims to create a European legislative framework to regulate AI. It does so through a risk-based approach, defining four levels into which all artificial intelligence systems should fall:

Unacceptable risk: an outright ban on a limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights, e.g. social scoring as implemented in China, exploitation of children's vulnerabilities, use of subliminal techniques and – with several exceptions – real-time remote biometric identification systems (including facial recognition) used for law enforcement in publicly accessible spaces.

High risk: a limited number of AI systems, listed in Annex III of the proposal, which have a potentially negative impact on the safety of individuals or on their fundamental rights. Given the constraints, mandatory requirements and procedures tied to the deployment of such systems, I foresee that Annex III will become a battleground between regulators and companies in the coming months, with the latter seeking to have some of their listed technologies removed or to carve out exceptions.

Among the stringent requirements are the obligation to use high-quality datasets, the creation of adequate technical documentation, record-keeping, transparency and the provision of information to users, human oversight, as well as robustness, accuracy and cybersecurity. In the event of an incident, national authorities will be granted access to the information needed to investigate whether the AI system was used in accordance with the law. In simple terms, this means that if an EU country finds a violation in an artificial intelligence system from the United States, China or Russia, to give some examples, the authorities of that EU country ought to be able to examine all the technical documentation, including the datasets used, as the first paragraph of Article 64 clearly indicates – datasets that companies sometimes (and rightly so) regard as trade secrets. And don't think a company can easily evade such controls: the proposed regulation provides for fines of up to 6% of annual worldwide turnover for those who break the rules (see the sketch after this list).

(We anticipate that nobody will want their AI systems to end up on the “high-risk” list)

Limited risk: AI systems subject to specific transparency requirements, e.g. where there is a clear risk of manipulation (think of chatbots). In those cases, transparency means that users must be made aware they are interacting with a machine.

Minimal risk: all other AI systems can be developed and used in compliance with existing legislation, without further legal obligations. According to the paper, the vast majority of artificial intelligence systems currently used in the EU fall into this category. Providers of such systems can choose, on a voluntary basis, to apply the requirements for trustworthy AI and to adhere to voluntary codes of conduct.
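For readers who think in code, here is a minimal sketch, in Python, of the four-tier taxonomy and the 6% fine ceiling described above. The tier names and obligations are paraphrased from the proposal; the class, the function and the example turnover figure are illustrative assumptions of ours, not anything defined in the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical model of the AIA's four risk tiers (paraphrased)."""
    UNACCEPTABLE = "banned outright (Article 5)"
    HIGH = "allowed under strict requirements (Annex III)"
    LIMITED = "allowed, subject to transparency duties"
    MINIMAL = "allowed under existing law; voluntary codes of conduct"

def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    # The proposal caps fines for the most serious infringements at
    # 6% of annual worldwide turnover, per the article above.
    return 0.06 * annual_worldwide_turnover_eur

if __name__ == "__main__":
    # A company with EUR 10 billion in turnover risks up to EUR 600 million.
    print(f"Maximum fine: EUR {max_fine_eur(10e9):,.0f}")
```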

The great emphasis on risks, on the primacy of human beings and on the ethics of artificial intelligence reinforces the European Union's strategic positioning: a positioning aimed at differentiating Europe from the other large AI 'clusters', such as the United States, China, Russia and – let's not forget – the United Kingdom.

The first three powers make no secret of striving for an AI that is resolute, bold and a key driver in shaping civil society, in line with the wishes of those who govern them: rich and cutting-edge for the Americans; tight and disciplined for the Chinese; subservient and obedient for the Russians. Moreover, the world powers are betting decisively on military applications of AI (the Russians, for example, have just created their first armed unit equipped with assault robots, while a mixed private-government US panel has recommended that the Pentagon continue research into autonomous weapons), something that makes many European observers shudder.

As for the United Kingdom, the jury is still out. The UK has not yet officially published its strategy (it is expected to do so this year), but in the meantime its AI Council has released a roadmap of principles and programmes, some passages of which we liked a lot ("ensure that every child leaves school with a basic sense of how AI works" should be hung in every office of every Ministry of Education until it sinks in), while others remain rather nebulous.

The European Union, therefore, finds itself somewhat alone in picking up the moral sceptre of ethical, human-centric AI. Amid major players and superpowers who will not hesitate to develop artificial intelligence that is friendly with friends and less so with enemies – depending, of course, on one's point of view – Europe wants an AI that harms no one, or as little as possible, and to achieve this it is adopting the only method it knows: creating regulatory quicksand here and there.

Which is not all that surprising. Let us not forget that what we have in our hands, although presented with all the fanfare it deserves, is still a proposal. It will first have to pass through the EU Parliament and the EU Council; it will have to be harmonised with the EU Charter of Fundamental Rights, the proposed Data Governance Act and the revised Machinery Directive (the process can be followed here); and it will be subject to interpretation by individual Member States, depending on which authority each chooses to oversee the framework. Those who drafted it were well aware that the ball would then pass to others, who would be in charge of correcting, integrating, cutting and moving commas according to the waves the document makes.

The relentless campaigning of some associations, the concerns of many citizens and the open letter from 116 Members of the European Parliament against facial recognition technologies must also have pushed the authors of the text to "read the air" – as the Japanese say – and put a spoke in the wheel of any artificial intelligence application that might be seen as controversial.

The broadly cautious approach was thus determined by a mix of factors: 1) the clear European stance that puts human beings first and foremost, 2) firm pressure from civil society that does not want citizens to suffer the harmful effects of new technologies, 3) equally firm political pressure that follows closely on the heels of civil society, 4) geopolitical issues that lead the EU to differentiate itself from the positions of other world powers, and – last but not least – 5) the awareness that the process still awaiting the proposed regulation is long and full of potential changes.

However, the fact that some of the authors (or perhaps others close to the drafting of the document) had doubts about its contents can be guessed from the leak of the text a week before official publication. It doesn't take a Philadelphia lawyer to understand why the regulation was given a 'test drive' before its official release: uncertainties about certain passages were easy to spot.

The press leak actually resulted in several corrections to the final document, which, compared with the leaked draft, dropped, for example, the passage that included among high-risk AI systems those that could manipulate people to "behave, form an opinion or take a decision to their detriment that they would not have taken otherwise". That wording was too broad – as we see it, it would outlaw half of all advertising – and in the final text it was replaced by a note referring to existing regulations.

If we analyse the reactions to the proposed regulation, we can divide the comments into two clusters: ‘too much regulation will kill AI’ and ‘the direction is good, but there are still many loopholes to close’.

In the first cluster are those who work in artificial intelligence research and applications and fear the arrival of too many restrictions (as well as a mountain of extra paperwork), along with some foreign commentators who see the European Union shooting itself in the foot. The voice of business can be summed up by Benjamin Mueller of the Center for Data Innovation, a lobby group backed by Amazon and Apple among others, who stated that the regulation will "limit the areas in which AI can realistically be used". Among non-EU commentators we could mention Jeremy Warner of The Telegraph, who observes that the European Union is regulating itself "into AI oblivion", and urges the UK to take advantage by adopting a "growth-friendly" approach to AI, one that does not ban technologies a priori but solves problems as they arise.

In the second cluster are the many associations who consider the rules still too lax, whether because of possible oversights or because of ambiguous approaches and terms, and who are pushing for even stricter regulation. One example is Article 5.1.d (Article 5 covers the AI practices that are outright prohibited), which lists a number of permitted exceptions to remote, real-time facial recognition. The second of the three exceptions indicates that facial recognition would be permitted for the "prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack".

EC AI regulation proposal, Art. 5.1.d

Algorithm Watch, in its analysis of the document, believes that this wording leaves "a wide discretionary power to the authorities". This is puzzling: we would not know how to make less discretionary a situation in which there is a threat to people's safety that is at once 1) specific, 2) substantial and 3) imminent. Or a terrorist attack. We may be old-fashioned, but in these very specific situations the police can, as far as we are concerned, turn on as many cameras as they deem necessary. Not to mention that Article 5(3) requires such activities to be expressly authorised in advance by a judge.

But not even the European Data Protection Supervisor is satisfied with the text, which he considers too permissive. In a statement, Wojciech Wiewiórowski laments that Europe has not opted to prohibit biometric recognition (among which facial recognition obviously stands out) outright in publicly accessible spaces, preferring instead to allow it, albeit under very strict rules. According to the EDPS, a moratorium would have been preferable, given that the technology presents extremely high risks of deep, undemocratic intrusion into individuals' private lives.

In fact, an apparent quirk of the regulation prohibits remote biometric recognition 'in real time' (apart from the exceptions we have seen) but allows 'post facto' recognition. At first glance this seems absurd: if we set up cameras to record a flow of people and then examine the recordings the next day, trying to identify the faces of all passers-by, the privacy risks do not miraculously disappear just because the recognition did not take place in real time. The apparent loophole was noticed by many observers, but remarks by Lucilla Sioli, Director for Digital Industry at the European Commission, clarify the rationale behind the decision: real-time facial recognition by law enforcement can lead to wrongful detention, which is something the regulation tries to avoid.

It seems, however, a clear oversight to restrict biometric recognition used by the police while leaving its use by other branches of government and by private companies untouched, as many associations have noted, including the organisers of the 'Reclaim Your Face' initiative in their comment on the proposed regulation.

But there are not only criticisms and protests. One potentially beneficial effect of the simple act of regulating such a complex matter, as some commentators in Wired point out, is that the European Union has broken through a membrane that seemed impenetrable, making its voice heard in an extremely complicated and highly dynamic area of technology. By resolutely entering this field, it has made life easier for legislators in other countries, even outside the continent, who can now take inspiration, adapt, and regulate the potentially harmful applications of artificial intelligence.

Or they can let European rules 'conquer the world', as the GDPR has already done. Many have pointed out, not least The Economist, that since the regulation will also apply to companies not resident in the EU but producing or distributing AI systems used by EU citizens, many companies will need either to adapt their procedures or to stop serving Europe. The GDPR teaches us that few take the latter path, and that this reluctant compliance with European regulations creates de facto regulation even in countries where it was hitherto non-existent. However, as Lawfare points out, this is unlikely to apply to countries with an advanced AI sector such as the US, which may collaborate with Europe on certain areas of AI governance but will certainly not be colonised by its rules.


I hold an Artificial Intelligence Professional certificate from IBM and a machine learning certificate from Google Cloud. I am a member of several AI industry associations: AAAI, ACM (SIGAI), AIxIA. I participate in the European Commission's European AI Alliance, and I work with the European Defence Agency and the Joint Research Centre.