Automating Society Report 2020

Research

European Union

by Kristina Penner and Fabio Chiusi

Setting the stage for the future of ADM in Europe

As automated decision-making systems take center stage for distributing rights and services within Europe, institutions across the region increasingly recognize their role in public life, both in terms of opportunities and challenges.

Since our first report in January 2019 – and even though the EU is still mired in the broader debate around “trustworthy” artificial intelligence – several bodies, from the EU Parliament to the Council of Europe, have published documents aimed at setting the EU and Europe on a course to deal with ADM over the coming years, if not decades.

In summer 2019, newly elected Commission President, Ursula von der Leyen, a self-stated “tech optimist”, pledged to put forward “legislation for a coordinated European approach on the human and ethical implications of artificial intelligence” and to “regulate Artificial Intelligence (AI)” within 100 days of taking office. Instead, in February 2020, the European Commission published a ‘White Paper’ on AI containing “ideas and actions” – a strategy package that aims to inform citizens and pave the way for future legislative action. It also makes the case for European “technological sovereignty”: in Von der Leyen’s own terms, this translates into “the capability that Europe must have to make its own choices, based on its own values, respecting its own rules”, and should “help make tech optimists of us all”.

A second fundamental endeavor affecting ADM in Europe is the Digital Services Act (DSA), announced in Von der Leyen’s ‘Agenda for Europe’ and supposed to replace the E-Commerce Directive that has been in place since 2000. It aims to “upgrade our liability and safety rules for digital platforms, services and products, and complete our Digital Single Market” – thus leading to foundational debates around the role of ADM in content moderation policies, intermediary liability, and freedom of expression more generally.

An explicit focus on ADM systems can be found in a Resolution approved by the EU Parliament’s Internal Market and Consumer Protection Committee, and in a Recommendation “on the human rights impacts of algorithmic systems” by the Council of Europe’s Committee of Ministers.

The Council of Europe (CoE), in particular, was found to be playing an increasingly important role in the policy debate on AI over the last year, and even though its actual impact on regulatory efforts remains to be seen, a case can be made for it to serve as the “guardian” of human rights. This is most apparent in the Recommendation, ‘Unboxing Artificial Intelligence: 10 steps to protect Human Rights’, by the CoE’s Commissioner on Human Rights, Dunja Mijatović, and in the work of the Ad Hoc Committee on AI (CAHAI) founded in September 2019.

Many observers see a fundamental tension between business and rights imperatives in how EU institutions, and especially the Commission, are framing their reflections and proposals on AI and ADM. On the one hand, Europe wants “to increase the use of, and demand for, data and data-enabled products and services throughout the Single Market”, thereby becoming a “leader” in business applications of AI and boosting the competitiveness of EU firms in the face of mounting pressure from rivals in the US and China. This is all the more important for ADM, the assumption being that, through this “data-agile” economy, the EU “can become a leading role model for a society empowered by data to make better decisions – in business and the public sector”. As the White Paper on AI puts it, “Data is the lifeblood of economic development”.

On the other hand, the automated processing of data about a citizen’s health, job, and welfare can produce decisions with discriminatory and unfair results. This “dark side” of algorithms in decision-making processes is tackled in the EU toolbox through a series of principles. In the case of high-risk systems, rules should guarantee that automated decision-making processes are compatible with human rights and meaningful democratic checks and balances. This is an approach that EU institutions label as “human-centric” and unique, and as fundamentally opposed to those applied in the US (led by profit) and China (led by national security and mass surveillance).

However, doubts have emerged as to whether Europe can attain both goals at the same time. Face recognition is a case in point: even though, as this report shows, we now have plenty of evidence of unchecked and opaque deployments in most member countries, the EU Commission has failed to act swiftly and decisively to protect the rights of European citizens. As leaked drafts of the EC White Paper on AI revealed, the EU was about to ban “remote biometric identification” in public places, before shying away at the last minute and promoting a “broad debate” around the issue instead.

In the meantime, controversial applications of ADM for border controls, even including face recognition, are still being pushed in EU-funded projects.

Policies and political debates

The European Data Strategy Package and the White Paper on AI

While the promised comprehensive legislation “for a coordinated European approach on the human and ethical implications of artificial intelligence”, announced in Von der Leyen’s “Agenda for Europe”, has not been put forward within her “first 100 days in office”, the EU Commission published a series of documents that provide a set of principles and ideas to inform it.

On February 19, 2020, a “European Strategy for Data” and a “White Paper on Artificial Intelligence” were jointly published, laying out the main principles of the EU’s strategic approach to AI (including ADM systems, even though they are not explicitly mentioned). These principles include putting “people first” (“technology that works for the people”), technological neutrality (no technology is good or bad per se; this is to be determined by its use only) and, of course, sovereignty and optimism. As Von der Leyen puts it: “We want to encourage our businesses, our researchers, the innovators, the entrepreneurs, to develop Artificial Intelligence. And we want to encourage our citizens to feel confident to use it. We have to unleash this potential”.

The underlying idea is that new technologies should not come with new values. The “new digital world” envisioned by the Von der Leyen administration should fully protect human and civil rights. “Excellence” and “trust”, highlighted in the very title of the White Paper, are considered the twin pillars upon which a European model of AI can and should stand, differentiating it from the strategies of both the US and China.

However, this ambition is lacking in the detail of the White Paper. For example, the White Paper lays out a risk-based approach to AI regulation, in which regulation is proportional to the impact of “AI” systems on citizens’ lives. “For high-risk cases, such as in health, policing, or transport”, it reads, “AI systems should be transparent, traceable and guarantee human oversight”. Testing and certification of adopted algorithms are also included among the safeguards that should be put in place, and should become as widespread as for “cosmetics, cars or toys”. Less risky systems, meanwhile, only have to follow voluntary labelling schemes: “The economic operators concerned would then be awarded a quality label for their AI applications”.

But critics noted that the very definition of “risk” in the Paper is both circular and too vague, allowing for several impactful ADM systems to fall through the cracks of the proposed framework.
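To see the worry concretely, consider a minimal sketch of a purely sector-based risk test – the sector names come from the White Paper’s own examples, while the obligation lists and all identifiers are our illustrative assumptions. Any impactful system outside the listed sectors silently lands in the voluntary track:

```python
# Toy illustration of a sector-based risk test. Sectors are the White
# Paper's examples; the obligation lists are our assumptions, not EU law.

HIGH_RISK_SECTORS = {"health", "policing", "transport"}

MANDATORY_REQUIREMENTS = ["transparency", "traceability", "human oversight"]
VOLUNTARY_SCHEME = ["optional quality label"]

def obligations(sector: str) -> list:
    """Return the obligations an application would face if risk were
    determined by sector alone. An impactful ADM system in an unlisted
    sector (e.g., welfare scoring) falls through the cracks."""
    if sector in HIGH_RISK_SECTORS:
        return MANDATORY_REQUIREMENTS
    return VOLUNTARY_SCHEME

print(obligations("health"))   # mandatory requirements
print(obligations("welfare"))  # only a voluntary label, however impactful
```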

Comments gathered in the public consultation, between February and June 2020, highlight how controversial this idea is. 42.5% of respondents agreed that “compulsory requirements” should be limited to “high-risk AI applications”, while 30.6% doubted such a limitation.

Moreover, the White Paper describes neither a clear mechanism for the enforcement of such requirements, nor a process for arriving at one.

The consequences are immediately visible for biometric technologies, and face recognition in particular. On this, the White Paper proposed a distinction between biometric “authentication”, which is seen as non-controversial (e.g., face recognition to unlock a smartphone), and remote biometric “identification” (such as deployment in public squares to identify protesters), which raises serious human rights and privacy concerns.

Only cases in the latter category would be problematic under the scheme proposed by the EU. The FAQ in the White Paper states: “this is the most intrusive form of facial recognition and in principle prohibited in the EU”, unless there is “substantial public interest” in its deployment.
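Technically, the two categories differ in the shape of the matching problem: authentication compares one live sample against one enrolled template (1:1), while remote identification searches one sample against a gallery of many people (1:N), typically without their knowledge or consent. A minimal sketch, with cosine similarity standing in for a real face recognition model:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (a toy stand-in
    for the scoring step of a real face recognition model)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(live: np.ndarray, enrolled: np.ndarray, thr: float = 0.8) -> bool:
    """1:1 verification, e.g., unlocking a smartphone: one comparison
    against a template the user chose to enroll."""
    return similarity(live, enrolled) >= thr

def identify(live: np.ndarray, gallery: dict, thr: float = 0.8) -> list:
    """1:N identification, e.g., a camera over a public square: the
    probe is compared against everyone in the gallery, whether or not
    they consented to being enrolled."""
    return [name for name, tmpl in gallery.items()
            if similarity(live, tmpl) >= thr]
```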

The explanatory document claims that “allowing facial recognition is currently the exception”, but findings in this report arguably contradict that view: face recognition seems to be rapidly becoming the norm. A leaked draft version of the White Paper seemed to recognize the urgency of the problem, by including the idea of a three-to-five-year moratorium on live uses of face recognition in public places, until – and if – a way to reconcile them with democratic checks and balances could be found.

Just before the official release of the White Paper, even EU Commissioner Margrethe Vestager called for a “pause” on these uses.

However, immediately after Vestager’s call, Commission officials added that this “pause” would not prevent national governments from using face recognition according to the existing rules. Ultimately, the final draft of the White Paper scrapped any mention of a moratorium, and instead called for “a broad European debate on the specific circumstances, if any, which might justify” the use of face recognition for live biometric identification purposes. Among these circumstances, the White Paper lists justification, proportionality, the existence of democratic safeguards, and respect for human rights.

Throughout the whole document, risks associated with AI-based technologies are more generally labeled as “potential”, while the benefits are portrayed as very real and immediate. This led many in the human rights community to claim that the overall narrative of the White Paper suggests a worrisome reversal of EU priorities, putting global competitiveness ahead of the protection of fundamental rights.

Some foundational issues are, however, raised in the documents – for example, the interoperability of such solutions and the creation of a network of research centers focused on applications of AI aimed at “excellence” and competence-building.

The objective is “to attract over €20 billion of total investments in the EU per year in AI over the next decade”.

A certain technological determinism seems to also affect the White Paper. “It is essential”, it reads, “that public administrations, hospitals, utility and transport services, financial supervisors, and other areas of public interest rapidly begin to deploy products and services that rely on AI in their activities. A specific focus will be in the areas of healthcare and transport where technology is mature for large-scale deployment.”

However, it remains to be seen whether suggesting a rushed deployment of ADM solutions in all spheres of human activity is compatible with the EU Commission’s efforts in addressing the structural challenges brought about by ADM systems to rights and fairness.

EU Parliament’s Resolution on ADM and consumer protection

A Resolution, passed by the EU Parliament in February 2020, more specifically tackled ADM systems in the context of consumer protection. The Resolution correctly pointed out that “complex algorithmic-based systems and automated decision-making processes are being made at a rapid pace”, and that “opportunities and challenges presented by these technologies are numerous and affect virtually all sectors”. The text also highlights the need for “an examination of the current EU legal framework”, to assess whether “it is able to respond to the emergence of AI and automated decision-making”.

Calling for a “common EU approach to the development of automated decision-making processes”, the Resolution details several conditions that any such systems should possess to remain consistent with European values. Consumers should be “properly informed” about how algorithms affect their lives, and they should have access to a human with decision-making power so that decisions can be checked and corrected if needed. They should also be informed, “when prices of goods or services have been personalized on the basis of automated decision-making and profiling of consumer behavior”.

In reminding the EU Commission that a carefully drafted risk-based approach is needed, the Resolution points out that safeguards need to take into consideration that ADM systems “may evolve and act in ways not envisaged when first placed on the market”, and that liability is not always easy to attribute when harm comes as a result of the deployment of an ADM system.

The Resolution echoes Art. 22 of the GDPR when it notes that a human subject must always be in the loop when “legitimate public interests are at stake”, and should always be ultimately responsible for decisions in the “medical, legal and accounting professions, and for the banking sector”. In particular, a “proper” risk assessment should precede any automation of professional services.
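What such a human-in-the-loop requirement could look like operationally is sketched below – the routing rule, threshold, and field names are illustrative assumptions, not prescriptions of the Resolution or the GDPR:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    outcome: str
    automated: bool
    reviewer: Optional[str] = None  # a named human, if one intervened

def human_review(proposed: str) -> str:
    """Stub: a real system would show the reviewer the case file, the
    data used, and the system's reasoning, so the decision can be
    checked and corrected if needed."""
    return proposed

def decide(score: float, legitimate_public_interest: bool) -> Decision:
    """Route a proposed automated outcome. When legitimate public
    interests are at stake, a human with decision-making power must
    sign off (the Art. 22-style safeguard)."""
    proposed = "approve" if score >= 0.5 else "deny"
    if legitimate_public_interest:
        return Decision(human_review(proposed), automated=False,
                        reviewer="case officer")
    return Decision(proposed, automated=True)
```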

Finally, the Resolution lists detailed requirements for quality and transparency in data governance: among them, “the importance of using only high-quality and unbiased data sets in order to improve the output of algorithmic systems and boost consumer trust and acceptance”; using “explainable and unbiased algorithms”; and the need for a “review structure” that allows affected consumers “to seek human review of, and redress for, automated decisions that are final and permanent”.

Making the most of the EU Parliament’s “right of initiative”

In her inaugural address, von der Leyen clearly expressed her support for a “right of initiative” for the European Parliament. “When this House, acting by majority of its Members, adopts Resolutions requesting the Commission to submit legislative proposals”, she said, “I commit to responding with a legislative act in full respect of the proportionality, subsidiarity, and better law-making principles”.

If “AI” is indeed a revolution requiring a dedicated legislative package, expected in the first quarter of 2021, elected representatives want to have a say about it. This, coupled with von der Leyen’s stated intent of empowering their legislative capabilities, could even result in what Politico labeled a “Parliament moment”, with parliamentary committees drafting several different reports.

Each report investigates specific aspects of automation in public policy; even though they are meant to shape upcoming “AI” legislation, they are all relevant for ADM.

For example, through its “Framework of ethical aspects of artificial intelligence, robotics and related technologies”, the Committee for Legal Affairs calls for the constitution of a “European Agency for Artificial Intelligence” and, at the same time, for a network of national supervisory authorities in each Member State to make sure that decisions involving automation are and remain ethical.

In “Intellectual property rights for the development of artificial intelligence technologies”, the same committee lays out its view for the future of intellectual property and automation. For one, the draft report states that “mathematical methods are excluded from patentability unless they constitute inventions of a technical nature”, while at the same time claiming that, regarding algorithmic transparency, “reverse engineering is an exception to the trade secrets rule”.

The report goes as far as considering how to protect “technical and artistic creations generated by AI, in order to encourage this form of creation”, imagining that “certain works generated by AI can be regarded as equivalent to intellectual works and could therefore be protected by copyright”.

Lastly, in a third document (“Artificial Intelligence and civil liability”), the Committee details a “Risk Management Approach” for the civil liability of AI technologies. According to it, “the party who is best capable of controlling and managing a technology-related risk is held strictly liable, as a single entry point for litigation”.

Important principles concerning the use of ADM in the criminal justice system can be found in the Committee of Civil Liberties, Justice and Home Affairs’ report “Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters”. After a detailed list of actual and current uses of “AI” – these are, actually, ADM systems – by police forces, the Committee “considers it necessary to create a clear and fair regime for assigning legal responsibility for the potential adverse consequences produced by these advanced digital technologies”.

It then goes about detailing some of its features: no fully automated decisions, algorithmic explainability that is “intelligible to users”, a “compulsory fundamental rights impact assessment (...) of any AI systems for law enforcement or judiciary” prior to its deployment or adoption, plus “periodic mandatory auditing of all AI systems used by law enforcement and the judiciary to test and evaluate algorithmic systems once they are in operation”.

A moratorium on face recognition technologies for law enforcement is also called for in the report, “until the technical standards can be considered fully fundamental rights compliant, results derived are non-discriminatory, and there is public trust in the necessity and proportionality for the deployment of such technologies”.

The aim is to eventually boost the overall transparency of such systems; the report advises Member States to provide a “comprehensive understanding” of the AI systems adopted by law enforcement and the judiciary and – along the lines of a “public register” – to detail “the type of tool in use, the types of crime they are applied to, and the companies whose tools are being used”.

The Culture and Education Committee and the Industrial Policy Committee were also working on their own reports at the time of writing.

All these initiatives led to the creation of a Special Committee on “Artificial Intelligence in a Digital Age” (AIDA) on June 18, 2020. Composed of 33 members, and with an initial duration of 12 months, it will “analyse the future impact” of AI on the EU economy, and “in particular on skills, employment, fintech, education, health, transport, tourism, agriculture, environment, defence, industry, energy and e-government”.

High-Level Expert Group on AI & AI Alliance

In 2018, the European Commission set up the High-Level Expert Group (HLEG) on AI, a committee of 52 experts, to support the implementation of the European strategy on AI and to identify principles that should be observed in order to achieve “trustworthy AI”. The group also serves as the steering committee of the AI Alliance, an open multi-stakeholder platform (consisting of more than 4,000 members at the time of writing) that provides broader input for the HLEG’s work.

After the publication of the first draft of the AI Ethics Guidelines for Trustworthy AI in December 2018, followed by feedback from more than 500 contributors, a revised version was published in April 2019. It puts forward a “human-centric approach” to achieving lawful, ethical, and robust AI throughout a system’s entire life cycle. However, it remains a voluntary framework without concrete and applicable recommendations on operationalization, implementation, and enforcement.

Civil society, consumer protection, and rights organizations commented and called for the translation of the guidelines into tangible rights for people. For example, digital rights non-profit Access Now, a member of the HLEG, urged the EC to clarify how different stakeholders can test, apply, improve, endorse, and enforce “Trustworthy AI” as a next step, while at the same time recognizing the need to determine Europe’s red lines.

In an op-ed, two other members of the HLEG claimed that the group had “worked for one-and-a-half years, only for its detailed proposals to be mostly ignored or mentioned only in passing” by the European Commission who drafted the final version. They also argued that, because the group was initially tasked with identifying risks and “red lines” for AI, members of the group pointed to autonomous weapon systems, citizen scoring, and automated identification of individuals by using face recognition as implementations of AI that should be avoided. However, representatives of industry, who dominate the committee, succeeded in getting these principles deleted before the draft was published.

This imbalance towards highlighting the potentials of ADM, compared to the risks, can also be observed throughout its second deliverable. In the HLEG’s “Policy and investment recommendations for trustworthy AI in Europe”, made public in June 2019, there are 33 recommendations meant to “guide Trustworthy AI towards sustainability, growth and competitiveness, as well as inclusion – while empowering, benefiting and protecting human beings”. The document is predominantly a call to boost the uptake and scaling of AI in the private and public sector by investing in tools and applications “to help vulnerable demographics” and “to leave no one behind”.

Nevertheless, and despite all legitimate criticism, both documents still express critical concerns and demands regarding automated decision-making systems. For example, the ethics guidelines formulate “seven key requirements that AI systems should meet in order to be trustworthy” – human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability – and go on to provide guidance for the practical implementation of each requirement.

The guidelines also provide a concrete pilot, called the “Trustworthy AI assessment list”, which is aimed at making those high-level principles operational. The goal is to have it adopted “when developing, deploying or using AI systems”, and adapted “to the specific use case in which the system is being applied”.

The list includes many issues that are associated with the risk of infringing on human rights through ADM systems. These include the lack of human agency and oversight, technical robustness and safety issues, the inability to avoid unfair bias or provide equal and universal access to such systems, and the lack of meaningful access to data fed into them.

The pilot list included in the guidelines provides useful questions to help those who deploy ADM systems. For example, it calls for “a fundamental rights impact assessment where there could be a negative impact on fundamental rights”. It also asks whether “specific mechanisms of control and oversight” have been put in place in the cases of “self-learning or autonomous AI” systems, and whether processes exist “to ensure the quality and integrity of your data”.

Detailed remarks also concern foundational issues for ADM systems, such as their transparency and explainability. Questions include “to what extent the decisions and hence the outcome made by the AI system can be understood?” and “to what degree the system’s decision influences the organisation’s decision-making processes?” These questions are highly relevant to assess the risks posed by deploying such systems.

To avoid bias and discriminatory outcomes, the guidelines point to “oversight processes to analyze and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner”, while at the same time demanding stakeholder participation throughout the whole process of implementing AI systems.

Added to that, the recommendations on policy and investment foresee the determination of red lines through an institutionalized “dialogue on AI policy with affected stakeholders”, including experts in civil society. Furthermore, they urge the EU to “ban AI-enabled mass scale scoring of individuals as defined in [the] Ethics Guidelines, and [to] set very clear and strict rules for surveillance for national security purposes and other purposes claimed to be in the public or national interest”. This ban would include biometric identification technologies and profiling.

Relevant to automated decision-making systems, the document also states that “clearly defin[ing] if, when and how AI can be used (...) will be crucial for the achievement of Trustworthy AI”, warning that “any form of citizen scoring can lead to the loss of [the citizen’s] autonomy and endanger the principle of non-discrimination”, and “therefore should only be used if there is a clear justification, under proportionate and fair measures”. It further stresses that “transparency cannot prevent non-discrimination or ensure fairness”. This means that the possibility of opting out of a scoring mechanism should be provided, ideally without any detriment to the individual citizen.

On the one hand, the document acknowledges “that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and distributive justice, or on the human mind itself.)”. On the other, however, the group claims that “unnecessarily prescriptive regulation should be avoided”.

In July 2020, the AI HLEG also presented their final Assessment List for Trustworthy Artificial Intelligence (ALTAI), compiled after a piloting process together with 350 stakeholders.

The list, which is entirely voluntary and devoid of any regulatory implications, aims to translate the seven requirements laid out in the AI HLEG’s Ethics Guidelines into action. The intention is to provide whoever wants to implement AI solutions that are compatible with EU values – for example, designers and developers of AI systems, data scientists, procurement officers or specialists, and legal/compliance officers – with a self-assessment toolkit.

Council of Europe: how to safeguard human rights in ADM

In addition to the Ad hoc Committee on Artificial Intelligence (CAHAI), set up in September 2019, the Committee of Ministers of the Council of Europe has published a substantial and persuasive framework.

Envisioned as a standard-setting instrument, its “Recommendation to Member states on the human rights impacts of algorithmic systems” describes “significant challenges” that arise with the emergence and our “increasing reliance” on such systems, and that are relevant “to democratic societies and the rule of law”.

The framework, which underwent a public consultation period with detailed comments from civil society organizations, goes beyond the EU Commission’s White Paper when it comes to safeguarding values and human rights.

The Recommendation thoroughly analyzes the effects and evolving configurations of algorithmic systems (Appendix A) by focusing on all stages of the process that go into making an algorithm, i.e., procurement, design, development, and ongoing deployment.

While generally following the ‘human-centric AI approach’ of the HLEG guidelines, the Recommendation outlines actionable “obligations of States” (Appendix B) as well as responsibilities for private sector actors (Appendix C). In addition, the Recommendation adds principles such as “informational self-determination”, lists detailed suggestions for accountability mechanisms and effective remedies, and demands human rights impact assessments.

Even though the document clearly acknowledges the “significant potential” of digital technologies to address societal challenges and to contribute to “socially beneficial innovation and economic development”, it also urges caution. This is to ensure that those systems do not deliberately or accidentally perpetuate “racial, gender and other societal and labor force imbalances that have not yet been eliminated from our societies”.

Instead, algorithmic systems should be used proactively and sensitively to address these imbalances, and pay “attention to the needs and voices of vulnerable groups”.

Most significantly, however, the Recommendation identifies the potentially higher risk to human rights when algorithmic systems are used by Member States to deliver public services and policy. Given that it is impossible for an individual to opt out, at least without facing negative consequences from doing so, precautions and safeguards are needed for the use of ADM in governance and administration.

The Recommendation also addresses the conflicts and challenges arising from public-private-partnerships (“neither clearly public nor clearly private”) in a wide range of uses.

Recommendations for Member State governments include abandoning processes and refusing to use ADM systems, if “human control and oversight become impractical” or human rights are put at risk; deploying ADM systems if and only if transparency, accountability, legality, and the protection of human rights can be guaranteed “at all stages of the process”. Furthermore, the monitoring and evaluation of these systems should be “constant”, “inclusive and transparent”, comprised of a dialogue with all relevant stakeholders, as well as an analysis of the environmental impact and further potential externalities on “populations and environments”.

In Appendix A, the CoE also defines “high-risk” algorithms for other bodies to draw inspiration from. More specifically, the Recommendation states that “the term “high-risk” is applied when referring to the use of algorithmic systems in processes or decisions that can produce serious consequences for individuals or in situations where the lack of alternatives prompts a particularly high probability of infringement of human rights, including by introducing or amplifying distributive injustice”.
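Read as a rule, this definition is a two-pronged test. A minimal sketch of it as a predicate – the parameter names are our paraphrase, and the numeric threshold is an invented stand-in for “particularly high probability”:

```python
def is_high_risk(serious_consequences: bool,
                 no_real_alternative: bool,
                 infringement_probability: float,
                 threshold: float = 0.5) -> bool:
    """Paraphrase of the CoE definition: a use is high-risk if it can
    produce serious consequences for individuals, OR if the lack of
    alternatives (e.g., no way to opt out of a public service) makes
    infringement of human rights particularly likely."""
    return serious_consequences or (
        no_real_alternative and infringement_probability > threshold)
```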

The document, which did not require the unanimity of members to be adopted, is non-binding.

Regulation of terrorist content online

After a long period of sluggish progress, the regulation to prevent the dissemination of terrorist content gained momentum in 2020. Should the adopted regulation still include automated and proactive tools for recognizing and removing content online, these would likely fall under Art. 22 of the GDPR.

As the European Data Protection Supervisor (EDPS) puts it: “since the automated tools, as envisaged by the Proposal, could lead not only to the removal and retention of content (and related data) concerning the uploader, but also, ultimately, to criminal investigations on him or her, these tools would significantly affect this person, impacting on his or her right to freedom of expression and posing significant risks for his or her rights and freedoms”, and, therefore, fall under Art. 22(2).

Also, and crucially, it would require more substantive safeguards compared to those that the Commission currently foresees. As the advocacy group, European Digital Rights (EDRi), explains: “the proposed Terrorist Content Regulation needs substantive reform to live up to the Union’s values, and to safeguard the fundamental rights and freedoms of its citizens”.

Early and strong criticism of the initial proposal – from civil society groups and European Parliament (EP) committees, including opinions and analysis by the European Union Agency for Fundamental Rights (FRA) and EDRi, as well as a critical joint report by three UN Special Rapporteurs – highlighted threats to the right to freedom of expression and information, the right to freedom and pluralism of the media, the freedom to conduct a business, and the rights to privacy and protection of personal data.

Critical aspects include an insufficient definition of terrorist content, the scope of the regulation (at present, this includes content for educational and journalistic purposes), the aforementioned call for “proactive measures”, a lack of effective judicial supervision, insufficient reporting obligations for law enforcement agencies, and missing safeguards for “cases where there are reasonable grounds to believe that fundamental rights are impacted” (EDRi 2019).

The EDPS stresses that such “suitable safeguards” should include the right to obtain human intervention and the right to an explanation of the decision reached through automated means (EDRi 2019).

Although safeguards that were suggested or demanded found their way into the EP’s draft report on the proposal, it is yet to be seen who can hold their breath longer going into the last round before the final vote. During closed-door trilogues between the EP, the new EC, and the EU Council (which began in October 2019), only minor changes are still possible, according to a leaked document.

Oversight and regulation

First decisions on the compliance of ADM systems with the GDPR

“Although there was no great debate on facial recognition during the passage of negotiations on the GDPR and the law enforcement data protection directive, the legislation was designed so that it could adapt over time as technologies evolved. [...] Now is the moment for the EU, as it discusses the ethics of AI and the need for regulation, to determine whether – if ever – facial recognition technology can be permitted in a democratic society. If the answer is yes, only then do we turn [to] questions of how and safeguards and accountability to be put in place.” – EDPS, Wojciech Wiewiórowski.

“Facial recognition processing is an especially intrusive biometric mechanism, which bears important risks of privacy or civil liberties invasions for the people affected” – (CNIL 2019).

Since the last Automating Society report, national Data Protection Authorities (DPAs) have issued the first fines and decisions related to breaches of the GDPR. The following case studies, however, show the limits of the GDPR in practice when it comes to Art. 22 on ADM systems, and how it leaves privacy regulators to make assessments on a case-by-case basis.

In Sweden, a face recognition test project, conducted in one school class for a limited period of time, was found to violate several obligations of the General Data Protection Regulation (esp. GDPR Art. 2(14) and Art. 9(2)) (European Data Protection Board 2019).

A similar case is on hold after the French Commission Nationale de l’Informatique et des Libertés (CNIL) raised concerns when two high schools planned to introduce face recognition technology in partnership with the US tech firm, Cisco. The opinion is non-binding, and the filed suit is ongoing.

Ex-ante authorization by data regulators is not required to conduct such trials, as the consent of the users is generally considered sufficient to process biometric data. And yet, in the Swedish case, it was not, due to power imbalances between the data controller and the data subjects. Instead, an adequate impact assessment and prior consultation with the DPA were deemed necessary.

The European Data Protection Supervisor (EDPS) confirmed this:

“Consent would need to be explicit as well as freely-given, informed and specific. Yet unquestionably a person cannot opt-out, still less opt-in, when they need access to public spaces that are covered by facial recognition surveillance. [...] Finally, the compliance of the technology with principles like data minimization and the data protection by design obligation is highly doubtful. Facial recognition technology has never been fully accurate, and this has serious consequences for individuals being falsely identified whether as criminals or otherwise. [...] It would be a mistake, however, to focus only on privacy issues. This is fundamentally an ethical question for a democratic society.” (EDPS 2019)

Access Now commented:

“As more facial recognition projects develop, we already see that the GDPR provides useful human rights safeguards that can be enforced against unlawful collection and use of sensitive data such as biometrics. But the irresponsible and often unfounded hype around the efficiency of such technologies and the underlying economic interest could lead to attempts by central and local governments and private companies to circumvent the law.”

Automated Face Recognition in use by South Wales Police ruled unlawful

Over the course of 2020, the UK witnessed the first high profile application of the Law Enforcement Directive concerning the use of face recognition technologies in public spaces by the police. Seen as an important precedent on a hotly debated topic, the verdict was greeted with a great deal of attention from civil society actors and legal scholars all over Europe and beyond.

The case was brought to court by Ed Bridges, a 37-year-old man from Cardiff, who claimed his face was scanned without his consent both during Christmas shopping in 2017, and at a peaceful anti-arms protest a year later.

The court initially upheld the use of Automated Facial Recognition technology (“AFR”) by South Wales Police, declaring it lawful and proportionate. But the decision was appealed by Liberty, a civil rights group, and the Court of Appeal of England and Wales decided to overturn the High Court’s dismissal and ruled it unlawful on August 11, 2020.

In ruling against South Wales Police on three out of five grounds, the Court of Appeal found “fundamental deficiencies” in the existing normative framework around the use of AFR, that its deployment did not meet the principle of “proportionality”, and that an adequate Data Protection Impact Assessment (DPIA) had not been performed, as multiple crucial steps were missing.

The court did not, however, rule that the system was producing discriminatory results, based either on sex or race, as South Wales Police had not gathered sufficient evidence to make a judgment on that. Still, the court felt it was worth adding a notable remark: “We would hope that, as AFR is a novel and controversial technology, all police forces that intend to use it in the future would wish to satisfy themselves that everything reasonable which could be done had been done in order to make sure that the software used does not have a racial or gender bias.”

After the ruling, Liberty called for South Wales Police and other police forces to withdraw the use of face recognition technologies.

ADM in practice: border management and surveillance

While the EU Commission and its stakeholders debated whether to regulate or ban face recognition technologies, extensive trials of the systems were already underway all over Europe.

This section highlights a crucial and often overlooked link between biometrics and the EU’s border management systems, clearly showing how technologies that can produce discriminatory results might be applied to individuals – e.g., migrants – who already suffer the most from discrimination.

Face recognition and the use of biometrics data in EU policies and practice

Over the last year, face recognition and other kinds of biometrics identification technologies garnered a lot of attention from governments, the EU, civil society, and rights organizations, especially concerning law enforcement and border management.

Over 2019, the EC tasked a consortium of public agencies to “map the current situation of facial recognition in criminal investigations in all EU Member States,” to move “towards the possible exchange of facial data”. They commissioned the consultancy firm Deloitte to perform a feasibility study on the expansion of the Prüm system to face images. Prüm is an EU-wide system that connects DNA, fingerprint, and vehicle registration databases to allow mutual searching. The concern is that a pan-European database of faces could be used for pervasive, unjustified, or illegal surveillance.
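Prüm operates as a decentralized hit/no-hit system rather than a single central database: a querying state sends a probe to each connected national database and initially learns only whether a match exists, with the exchange of underlying records a separate step. A minimal sketch of that pattern – class and function names are hypothetical – which the feasibility study would extend to face images:

```python
class NationalDB:
    """Toy stand-in for one member state's biometric database."""
    def __init__(self, templates: set):
        self.templates = templates

    def has_match(self, probe) -> bool:
        # Real Prüm matching covers DNA profiles and fingerprints;
        # the Deloitte study examined adding face templates.
        return probe in self.templates

def federated_search(probe, databases: dict) -> dict:
    """Hit/no-hit search in the Prüm style: each state answers only
    'match' or 'no match'; exchanging underlying records is a
    separate, subsequent legal step."""
    return {state: db.has_match(probe) for state, db in databases.items()}

hits = federated_search("face-template-123", {
    "DE": NationalDB({"face-template-123"}),
    "FR": NationalDB(set()),
})
print(hits)  # {'DE': True, 'FR': False}
```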

Border management systems without borders

As reported in the previous edition of Automating Society, the implementation of an overarching interoperable smart border management system in the EU, initially proposed by the Commission back in 2013, is on its way. Although the new systems that have been announced (EES, ETIAS, ECRIS-TCN) will only start operating in 2022, the Entry/Exit System (EES) regulation has already introduced face images as biometric identifiers and the use of face recognition technology for verification purposes for the first time in EU law.

The European Fundamental Rights Agency (FRA) confirmed the changes: “the processing of facial images is expected to be introduced more systematically in large-scale EU-level IT systems used for asylum, migration and security purposes […] once the necessary legal and technical steps are completed”.

According to Ana Maria Ruginis Andrei, from the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA), this expanded new interoperability architecture was “assembled in order to forge the perfect engine to successfully fight against the threats to internal security, to effectively control migration and to overcome blind spots regarding identity management”. In practice, this means holding “the fingerprints, facial images, and other personal data of up to 300 million non-EU nationals, merging data from five separate systems.” (Campbell 2020)

ETIAS: automated border security screenings

The European Travel Information and Authorization System (ETIAS), which is still not in operation at the time of writing, will use different databases to automate the digital security screening of non-EU travelers who do not need a visa (“visa-waiver” travelers) before they arrive in Europe.

This system is going to gather and analyze data for the advanced “verification of potential security or irregular migration risks” (ETIAS 2020). The aim is to “facilitate border checks; avoid bureaucracy and delays for travelers when presenting themselves at the borders; ensure a coordinated and harmonized risk assessment of third-country nationals” (ETIAS 2020).
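In outline, this amounts to an automated cross-check of each application against connected databases, with any hit diverting the file to manual processing by a human case handler. A minimal sketch under that reading – the database names carry toy data, and the routing logic is our assumption:

```python
def screen_application(passport_id: str, databases: dict) -> str:
    """Sketch of an ETIAS-style automated pre-travel check: the
    application is cross-checked against connected EU databases; any
    hit blocks automatic authorisation and routes the application to
    manual review by a human case handler."""
    hits = [name for name, records in databases.items()
            if passport_id in records]
    return f"manual review (hits: {hits})" if hits else "authorised automatically"

print(screen_application("P1234567", {
    "SIS": {"P7654321"},   # Schengen Information System (toy data)
    "VIS": set(),          # Visa Information System (toy data)
}))  # authorised automatically
```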

Ann-Charlotte Nygård, head of FRA’s Technical Assistance and Capacity Building unit, sees two specific risks concerning ETIAS: “first, the use of data that could lead to the unintentional discrimination of certain groups, for instance if an applicant is from a particular ethnic group with a high in-migration risk; the second relates to a security risk assessed on the basis of past convictions in the country of origin. Some such earlier convictions could be considered unreasonable by Europeans, such as LGBT convictions in certain countries. To avoid this, […] algorithms need to be audited to ensure that they do not discriminate and this kind of auditing would involve experts from interdisciplinary areas” (Nygård 2019).

iBorderCtrl: face recognition and risk scoring at the borders

iBorderCtrl was a project involving security agencies from Hungary, Latvia, and Greece that aimed to “enable faster and thorough border control for third country nationals crossing the land borders of EU Member States”. iBorderCtrl used face recognition technology, a lie detector, and a scoring system to prompt a human border policeman if it deemed someone dangerous or their right to entry questionable.

The iBorderCtrl project came to an end in August 2019, and the results – for any potential EU-wide implementation of the system – were contradictory.

Although it “will have to be defined, how far the system or parts of it will be used”, the project’s “Outcomes” page sees “the possibility of integrating the similar functionalities of the new ETIAS system and extending the capabilities of taking the border crossing procedure to where the travellers are (bus, car, train, etc.)”.

However, the modules this refers to were not specified, and the ADM-related tools that were tested have not been publicly evaluated.

At the same time, the project’s FAQ page confirmed that the system that was tested is not considered to be “currently suitable for deployment at the border (…) due to its nature as a prototype and the technological infrastructure on EU level”. This means that “further development and an integration in the existing EU systems would be required for a use by border authorities.”

In particular, while the iBorderCtrl Consortium was able to show, in principle, the functionality of such technology for border checks, it is also clear that ethical, legal, and societal constraints need to be addressed prior to any actual deployment.

Related Horizon2020 projects

Several follow-up projects focused on testing and developing new systems and technologies for Border Management and Surveillance, under the Horizon2020 program. They are listed on the European Commission’s CORDIS website, which provides information on all EU-supported research activities related to it.

The site shows that 38 projects are currently running under the “H2020-EU.3.7.3. – Strengthen security through border management” program/topic of the European Union. Its parent program – “Secure societies – Protecting freedom and security of Europe and its citizens”, which boasts an overall budget of almost 1.7 billion euros and funds 350 projects – claims to tackle “insecurity, whether from crime, violence, terrorism, natural or man-made disasters, cyber attacks or privacy abuses, and other forms of social and economic disorders increasingly affect[ing] citizens”, mainly through projects developing new technological systems based on AI and ADM.

Some projects have already finished and/or their applications are already in use – for example, FastPass, ABC4EU, MOBILEPASS, and EFFISEC – all of which looked into requirements for “integrated, interoperable Automated Border Control (ABC)”, identification systems, and “smart” gates at different border crossings.

TRESSPASS is an ongoing project that started in June 2018 and will finish in November 2021. The EU contributes almost eight million euros to the project, and the coordinators of iBorderCtrl (as well as FLYSEC and XP-DITE) are aiming to “leverage the results and concepts implemented and tested” by iBorderCtrl and to expand them “into a multi-modal border crossing risk-based security solution within a strong legal and ethics framework” (Horizon2020 2019).

The project has the stated goal of turning security checks at border crossing points from the old and outdated “Rule-based” to a new “Risk-based” strategy. This includes applying biometric and sensing technologies, a risk-based management system, and relevant models to assess identity, possessions, capability, and intent. It aims to enable checks through “links to legacy systems and external databases such as VIS/SIS/PNR” and is collecting data from all the above data sources for security purposes.
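The difference between the two strategies can be made concrete: rule-based screening applies the same fixed checks to every traveler, while risk-based screening derives the intensity of checks from a composite score. A toy contrast, with invented weights and thresholds over the four dimensions the project names:

```python
def rule_based_checks(traveler: dict) -> list:
    # "Rule-based": every traveler gets the same fixed checks.
    return ["document check", "luggage scan"]

def risk_based_checks(traveler: dict) -> list:
    # "Risk-based": a composite score over the four dimensions the
    # project names (identity, possessions, capability, intent)
    # decides how intense the screening is. Weights and thresholds
    # are invented for illustration.
    score = (0.4 * traveler["identity_risk"]
             + 0.2 * traveler["possessions_risk"]
             + 0.2 * traveler["capability_risk"]
             + 0.2 * traveler["intent_risk"])
    checks = ["document check"]
    if score > 0.3:
        checks.append("luggage scan")
    if score > 0.7:
        checks.append("secondary interview")
    return checks
```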

Another pilot project, FOLDOUT, started in September 2018 and will finish in February 2022. The EU contributes €8,199,387.75 to the project to develop “improved methods for border surveillance” to counter irregular migration, with a focus on “detecting people through dense foliage in extreme climates” […] by combining “various sensors and technologies and intelligently fuse[ing] these into an effective and robust intelligent detection platform” to suggest reaction scenarios. Pilots are underway in Bulgaria, with demonstration models in Greece, Finland, and French Guiana.

MIRROR, or Migration-Related Risks caused by misconceptions of Opportunities and Requirement, started in June 2019 and will finish in May 2022. The EU contributes just over five million euros to the project, which aims to “understand how Europe is perceived abroad, detect discrepancies between image and reality, spot instances of media manipulation, and develop their abilities for counteracting such misconceptions and the security threats resulting from them”. Based on “perception-specific threat analysis, the MIRROR project will combine methods of automated text, multimedia and social network analysis for various types of media (including social media) with empirical studies” to develop “technology and actionable insights, […] thoroughly validated with border agencies and policy makers, e.g. via pilots”.

Other projects that are already closed, but that get a mention, include Trusted Biometrics under Spoofing Attacks (TABULA RASA), which started in November 2010 and finished in April 2014. It analyzed “the weaknesses of biometric identification process software in scope of its vulnerability to spoofing, diminishing efficiency of biometric devices”. Another project, Bodega, which started in June 2015 and finished in October 2018, looked into how to use “human factor expertise” when it comes to the “introduction of smarter border control systems like automated gates and self-service systems based on biometrics”.

References

Access Now (2019): Comments on the draft recommendation of the Committee of Ministers to Member States on the human rights impacts of algorithmic systems
https://www.accessnow.org/cms/assets/uploads/2019/10/Submission-on-CoE-recommendation-on-the-human-rights-impacts-of-algorithmic-systems-21.pdf

AlgorithmWatch (2020): Our response to the European Commission’s consultation on AI
https://algorithmwatch.org/en/response-european-commission-ai-consultation/

Campbell, Zach/Jones, Chris (2020): Leaked Reports Show EU Police Are Planning a Pan-European Network of Facial Recognition Databases
https://theintercept.com/2020/02/21/eu-facial-recognition-database/

CNIL (2019): French privacy regulator finds facial recognition gates in schools illegal
https://www.biometricupdate.com/201910/french-privacy-regulator-finds-facial-recognition-gates-in-schools-illegal

Coeckelbergh, Mark / Metzinger, Thomas (2020): Europe needs more guts when it comes to AI ethics
https://background.tagesspiegel.de/digitalisierung/europe-needs-more-guts-when-it-comes-to-ai-ethics

Committee of Ministers (2020): Recommendation CM/Rec(2020)1 of the Committee of Ministers to Member States on the human rights impacts of algorithmic systems
https://search.coe.int/cm/pages/result_details.aspx?objectid=09000016809e1154

Committee on Legal Affairs (2020): Artificial Intelligence and Civil Liability
https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf

Committee on Legal Affairs (2020): Draft Report: On intellectual property rights for the development of artificial intelligence technologies
https://www.europarl.europa.eu/doceo/document/JURI-PR-650527_EN.pdf

Committee on Civil Liberties, Justice and Home Affairs (2020): Draft Report: On artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters
https://www.europarl.europa.eu/doceo/document/LIBE-PR-652625_EN.pdf

Data Protection Commission (2020): Law enforcement directive
https://www.dataprotection.ie/en/organisations/law-enforcement-directive

EDRi (2019): FRA and EDPS: Terrorist Content Regulation requires improvement for fundamental rights
https://edri.org/our-work/fra-edps-terrorist-content-regulation-fundamental-rights-terreg/

GDPR (Art 22): Automated individual decision-making, including profiling
https://gdpr-info.eu/art-22-gdpr/

European Commission (2020): White Paper: On Artificial Intelligence - A European approach to excellence and trust
https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

European Commission (2020): A European data strategy
https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/european-data-strategy_en

European Commission (2020): Shaping Europe’s digital future – Questions and Answers
https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_264

European Commission (2020): White Paper on Artificial Intelligence: Public consultation towards a European approach for excellence and trust
https://ec.europa.eu/digital-single-market/en/news/white-paper-artificial-intelligence-public-consultation-towards-european-approach-excellence

European Commission (2018): Security Union: A European Travel Information and Authorisation System - Questions & Answers
https://ec.europa.eu/commission/presscorner/detail/en/MEMO_18_4362

European Parliament (2020): Artificial intelligence: EU must ensure a fair and safe use for consumers
https://www.europarl.europa.eu/news/en/press-room/20200120IPR70622/artificial-intelligence-eu-must-ensure-a-fair-and-safe-use-for-consumers

European Parliament (2020): On automated decision-making processes: ensuring consumer protection and free movement of goods and services
https://www.europarl.europa.eu/doceo/document/B-9-2020-0094_EN.pdf

European Data Protection Supervisor (2019): Facial recognition: A solution in search of a problem?
https://edps.europa.eu/press-publications/press-news/blog/facial-recognition-solution-search-problem_de

ETIAS (2020): European Travel Information and Authorisation System (ETIAS)
https://ec.europa.eu/home-affairs/what-we-do/policies/borders-and-visas/smart-borders/etias_en

ETIAS (2019): European Travel Information and Authorisation System (ETIAS)
https://www.eulisa.europa.eu/Publications/Information%20Material/Leaflet%20ETIAS.pdf

Horizon2020 (2019): robusT Risk basEd Screening and alert System for PASSengers and luggage
https://cordis.europa.eu/project/id/787120/reporting

High Court of Justice (2019): Bridges v South Wales Police – judgment
https://www.judiciary.uk/wp-content/uploads/2019/09/bridges-swp-judgment-Final03-09-19-1.pdf

High-Level Expert Group on Artificial Intelligence (2020): Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
https://ec.europa.eu/digital-single-market/en/news/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment

Kayali, Laura (2019): French privacy watchdog says facial recognition trial in high schools is illegal
https://www.politico.eu/article/french-privacy-watchdog-says-facial-recognition-trial-in-high-schools-is-illegal-privacy/

Kayser-Bril, Nicolas (2020): EU Commission publishes white paper on AI regulation 20 days before schedule, forgets regulation
https://algorithmwatch.org/en/story/ai-white-paper/

Leyen, Ursula von der (2020): Paving the road to a technologically sovereign Europe
https://delano.lu/d/detail/news/paving-road-technologically-sovereign-europe/209497

Leyen, Ursula von der (2020): Shaping Europe’s digital future
https://twitter.com/eu_commission/status/1230216379002970112?s=11

Sabbagh, Dan (2020): South Wales police lose landmark facial recognition case
https://www.theguardian.com/technology/2020/aug/11/south-wales-police-lose-landmark-facial-recognition-case

South Wales Police (2020): Automated Facial Recognition
https://afr.south-wales.police.uk/

Valero, Jorge (2020): Vestager: Facial recognition tech breaches EU data protection rules
https://www.euractiv.com/section/digital/news/vestager-facial-recognition-tech-breaches-eu-data-protection-rules/

Team

Kristina Penner

Kristina Penner is the executive advisor at AlgorithmWatch. Her research interests include ADM in social welfare systems, social scoring, and the societal impacts of ADM, as well as the sustainability of new technologies from a holistic perspective. Her analysis of the EU border management system builds on her previous experience in research and counseling on asylum law. Further experience includes projects on the use of media in civil society and conflict-sensitive journalism, as well as stakeholder involvement in peace processes in the Philippines. She holds a master’s degree in international studies/peace and conflict research from Goethe University in Frankfurt.

Fabio Chiusi

Fabio Chiusi works at AlgorithmWatch as the co-editor and project manager for the 2020 edition of the Automating Society report. After a decade in tech reporting, he started as a consultant and assistant researcher in data and politics (Tactical Tech) and AI in journalism (Polis LSE). He coordinated the “Persuasori Social” report about the regulation of political campaigns on social media for the PuntoZero Project, and he worked as a tech-policy staffer within the Chamber of Deputies of the Italian Parliament during the current legislature. Fabio is a fellow at the Nexa Center for Internet & Society in Turin and an adjunct professor at the University of San Marino, where he teaches journalism and new media, and publishing and digital media. He is the author of several essays on technology and society, the latest being “Io non sono qui. Visioni e inquietudini da un futuro presente” (DeA Planeta, 2018), which is currently being translated into Polish and Chinese. He also writes as a tech-policy reporter for the collective blog ValigiaBlu.