Automating Society Report 2020

INTRODUCTION

Life in the automated society: How automated decision-making systems became mainstream, and what to do about it

by Fabio Chiusi

On a cloudy August day in London, students were angry. They flocked to Parliament Square by the hundreds in protest – their placards emblazoned with support for unusual allies – their teachers – and aimed at an even more unusual target: an algorithm.

Due to the COVID-19 pandemic, schools in the United Kingdom closed in March. With the virus still raging throughout Europe over the summer of 2020, students knew that their final exams would have to be canceled, and their assessments – somehow – changed. What they could not have imagined, however, was that thousands of them would end up with lower than expected grades as a result. The protesting students knew what was to blame, as was apparent from their signs and chants: the automated decision-making (ADM) system deployed by the Office of Qualifications and Examinations Regulation (Ofqual). The system was intended to produce the best possible data-based assessment for both General Certificate of Secondary Education (GCSE) and A-level results, in such a way that “the distribution of grades follows a similar pattern to that in other years, so that this year’s students do not face a systemic disadvantage as a consequence of circumstances this year”.

The government wanted to avoid the excess of optimism that, according to its own estimates, would have resulted from human judgment alone: compared to the historical series, grades would have been too high. But this attempt to be “as far as possible, fair to students who had been unable to sit their exams this summer” failed spectacularly, and, on that grey August day of protest, the students kept on coming, chanting and holding signs to express an urgent need for social justice. Some were desperate; some broke down and cried.
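To make the standardization idea concrete, here is a minimal, hypothetical sketch in Python of distribution-based grading: teacher-ranked students are mapped onto a school’s historical grade distribution. It is a deliberate simplification for illustration only – Ofqual’s actual model was considerably more complex – and all names and numbers are invented.

# Illustrative sketch only – a toy "distribution matching" grader, not Ofqual's actual model.
# Assumptions: each class has a teacher-provided rank order of students (best first) and a
# historical grade distribution (share of students per grade, best grade first, summing to 1).

from typing import Dict, List

def assign_grades(ranked_students: List[str],
                  historical_distribution: Dict[str, float]) -> Dict[str, str]:
    """Map rank-ordered students onto a historical grade distribution."""
    n = len(ranked_students)
    grades: Dict[str, str] = {}
    cumulative_share = 0.0
    index = 0
    for grade, share in historical_distribution.items():
        cumulative_share += share
        # Every student whose rank falls within the cumulative share gets this grade.
        while index < n and (index + 1) / n <= cumulative_share + 1e-9:
            grades[ranked_students[index]] = grade
            index += 1
    # Any students left over due to rounding receive the lowest grade.
    lowest_grade = list(historical_distribution)[-1]
    for student in ranked_students[index:]:
        grades[student] = lowest_grade
    return grades

# Hypothetical example: a class of five students in a school whose past results were
# 20% A, 40% B, 40% C.
print(assign_grades(["s1", "s2", "s3", "s4", "s5"], {"A": 0.2, "B": 0.4, "C": 0.4}))
# -> {'s1': 'A', 's2': 'B', 's3': 'B', 's4': 'C', 's5': 'C'}

Even this toy version makes the core grievance visible: a student’s grade is capped by their school’s past results and their rank, rather than determined by their own work.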

“Stop stealing our future”, read one placard, echoing the Fridays for Future protests of climate activists. Others, however, were more specifically tailored to the flaws of the ADM grading system: “Grade my work, not my postcode” and “we’re students, not stats”, they read, denouncing the discriminatory outcomes of the system.

Finally, a chant erupted from the crowd, one that may come to define the future of protest: “Fuck the algorithm”. Scared that the government was casually – and opaquely – automating their future, no matter how inconsistent the results were with their skills and efforts, students screamed for the right not to have their life chances unduly affected by bad code. They wanted to have a say, and they deserved to be heard.

Algorithms are neither “neutral” nor “objective” even though we tend to think that they are. They replicate the assumptions and beliefs of those who decide to deploy them and program them. Humans, therefore, are, or should be, responsible for both good and bad algorithmic choices, not “algorithms” or ADM systems. The machine may be scary, but the ghost within it is always human. And humans are complicated, even more so than algorithms.

The protesting students were not so naive as to believe that their woes were solely the fault of an algorithm, anyway. In fact, they were not chanting against “the algorithm” in an outburst of technological determinism; they were motivated by an urge to protect and promote social justice. In this respect, their protest more closely resembles that of the Luddites. Just like the labor movement that smashed mechanized looms and knitting frames in the 19th century, they knew that ADM systems are about power, and should not be mistaken for an allegedly objective technology. So, they chanted “justice for the working class”, asked for the resignation of the Health Secretary, and portrayed the ADM system as “classism at its finest” and “blatant classism”.

Eventually, the students succeeded in getting rid of the system that put their educational careers and life chances at risk: in a spectacular U-turn, the UK government scrapped the error-prone ADM system and used the grades predicted by teachers instead.

But there’s more to this story than the fact that the protesters won in the end. This example highlights how poorly designed, implemented, and overseen systems reproduce human bias and discrimination, and fail to make use of the potential that ADM systems have, such as ensuring comparability and fairness.

More clearly than many struggles in the past, this protest reveals that we’re no longer just automating society. We have automated it already – and, finally, somebody noticed.

From Automating Society to the automated society

When launching the first edition of this report, we decided to call it “Automating Society”, as ADM systems in Europe were mostly new, experimental, and unmapped – and, above all, the exception rather than the norm.

This situation has changed rapidly. As clearly shown by the many cases gathered in this report through our outstanding network of researchers, the deployment of ADM systems has vastly increased in just over a year. ADM systems now affect almost all kinds of human activities, and, most notably, the distribution of services to millions of European citizens – and their access to their rights.

The stubborn opacity surrounding the ever-increasing use of ADM systems has made it all the more urgent that we continue to increase our efforts. Therefore, we have added four countries (Estonia, Greece, Portugal, and Switzerland) to the 12 we already analyzed in the previous edition of this report, bringing the total to 16 countries. While far from exhaustive, this allows us to provide a broader picture of the ADM scenario in Europe. Considering the impact these systems may have on everyday life, and how profoundly they challenge our intuitions – if not our norms and rules – about the relationship between democratic governance and automation, we believe this is an essential endeavor.

This is especially true during the COVID-19 pandemic, a time in which we have witnessed the (mostly rushed) adoption of a plethora of ADM systems that aim to contribute to securing public health through data-based tools and automation. We deemed this development to be so important that we decided to dedicate a “preview report” to it, published in August 2020 within the scope of the ‘Automating Society’ project.

Even in Europe, when it comes to the deployment of ADM systems, the sky is the limit. Just think of some of the cases introduced in this report, adding to the many – from welfare and education to the health system and the judiciary – that we already reported on in the previous edition. In the following pages, and for the first time, we provide updates on the development of these cases in three ways: first, through journalistic stories; then, through research-based sections cataloging different examples; and, finally, with graphic novels. We felt that these ADM systems are – and increasingly will be – so crucial in everyone’s lives that we needed to try and communicate how they work, and what they actually do to us, in ways that are both rigorous and new, to reach all kinds of audiences. After all, ADM systems have an impact on all of us.

Or at least they should. We’ve seen, for example, how a new, automated, proactive service distributes family benefits in Estonia. Parents no longer even need to apply for benefits: from birth, the state collects all the information about each newborn and their parents and collates it in databases. As a result, the parents automatically receive benefits if they are entitled to them.

In Finland, the identification of individual risk factors related to social exclusion in young adults is automated through a tool developed by the Japanese giant, Fujitsu. In France, data from social networks can be scraped to feed machine learning algorithms that are employed to detect tax fraud.

Italy is experimenting with “predictive jurisprudence”. This uses automation to help judges understand trends from previous court rulings on the subject at hand. And, in Denmark, the government tried to monitor every keyboard and mouse click on students’ computers during exams, causing – again – massive student protests that led to the withdrawal of the system, for the time being.

Time to put ADM wrongs to right

In principle, ADM systems have the potential to benefit people’s lives – by processing huge amounts of data, supporting people in decision-making processes, and providing tailored applications. In practice, however, we found very few cases that convincingly demonstrated such a positive impact.

For example, the VioGén system, deployed in Spain since 2007 to assess risk in cases of domestic violence, while far from perfect, shows “reasonable performance indexes” and has helped protect many women from violence.

In Portugal, a centralized, automated system deployed to deter fraud associated with medical prescriptions has reportedly reduced fraud by 80% in a single year. A similar system, in Slovenia, used to combat tax fraud has proved useful for inspectors, according to tax authorities. When looking at the current state of ADM systems in Europe, positive examples with clear benefits are rare. Throughout the report, we describe how the vast majority of uses tend to put people at risk rather than help them. But, to truly judge the actual positive and negative impact, we need more transparency about goals and more data about the workings of ADM systems that are tested and deployed.

The message for policy-makers couldn’t be clearer. If we truly want to make the most of the potential of ADM systems, while at the same time respecting human rights and democracy, the time to step up, make those systems transparent, and put ADM wrongs right is now.

Face recognition, face recognition, everywhere

Different tools are being adopted in different countries. One technology, however, is now common to most: face recognition. This is arguably the newest, quickest, and most concerning development highlighted in this report. Face recognition, nearly absent from the 2019 edition, is being trialed and deployed at an alarming rate throughout Europe. In just over a year since our last report, face recognition has appeared in schools, stadiums, airports, and even casinos. It is also used for predictive policing, to apprehend criminals, as a tool against racism, and, in response to the COVID-19 pandemic, to enforce social distancing, both in apps and through “smart” video-surveillance.


New ADM deployments continue, even in the face of mounting evidence of their lack of accuracy. And when challenges emerge, proponents of these systems simply try to find their way around them. In Belgium, a face recognition system used by the police is still “partially active”, even though a temporary ban has been issued by the Oversight Body for Police Information. And, in Slovenia, the use of face recognition technology by the police was legalized five years after they first started using it.

This trend, if not challenged, risks normalizing the idea of being constantly – and opaquely – watched, thus crystallizing a new status quo of pervasive mass surveillance. This is why many in the civil liberties community would have welcomed a much more aggressive policy response from EU institutions.

Even the act of smiling is now part of an ADM system piloted in banks in Poland: the more an employee smiles, the better the reward. And it’s not just faces that are being monitored. In Italy, a sound surveillance system was proposed as an anti-racism tool to be used in all football stadiums.

Black boxes are still black boxes

A startling finding in this report is that, while change happened rapidly regarding the deployment of ADM systems, the same is not true when it comes to the transparency of these systems. In 2015, Brooklyn Law School professor Frank Pasquale famously called a networked society based on opaque algorithmic systems a “black box society”. Five years later, the metaphor unfortunately still holds – and it applies to all the countries we studied for this report, across the board: there is not enough transparency concerning ADM systems, neither in the public nor the private sector. Poland even mandates opacity: under the law that introduced its automated system to detect bank accounts used for illegal activities (“STIR”), the disclosure of the adopted algorithms and risk indicators may result in up to five years in jail.

While we firmly reject the idea that all such systems are inherently bad – we embrace an evidence-based perspective instead – it is undoubtedly bad to be unable to assess their functioning and impact on the basis of accurate and factual knowledge, if only because opacity severely impedes the gathering of the evidence necessary to come to an informed judgment on the deployment of an ADM system in the first place.

When coupled with the difficulty both our researchers and journalists found in accessing any meaningful data on these systems, this paints a troubling scenario for whoever wishes to keep them in check and guarantee that their deployment is compatible with fundamental rights, the rule of law, and democracy.

Challenging the algorithmic status quo

What is the European Union doing about this? Even though the strategic documents produced by the EU Commission, under the guidance of Ursula von der Leyen, refer to “artificial intelligence” rather than ADM systems directly, they do state laudable intentions: promoting and realizing a “trustworthy AI” that puts “people first”.

However, as described in the EU chapter, the EU’s overall approach prioritizes the commercial and geopolitical imperative to lead the “AI revolution” over making sure that its products are consistent with democratic safeguards, once adopted as policy tools.
This lack of political courage, which is most apparent in the decision to ditch any suggestion of a moratorium on live face recognition technologies in public places in its AI regulation package, is surprising. Especially at a time when many Member States are witnessing an increasing number of legal challenges – and defeats – over hastily deployed ADM systems that have negatively impacted the rights of citizens.

A landmark case comes from the Netherlands, where civil rights activists took an invasive and opaque automated system supposed to detect welfare fraud (SyRI) to court and won. Not only was the system found to be in violation of the European Convention on Human Rights by the court of The Hague in February, and therefore halted; the case also set a precedent: according to the ruling, governments have a “special responsibility” to safeguard human rights when implementing such ADM systems. Providing much-needed transparency is considered a crucial part of this.

Since our first report, media and civil society activists have established themselves as a driving force for accountability in ADM systems. In Sweden, for example, journalists managed to force the release of the code behind the Trelleborg system for fully automated decisions on social benefit applications. In Berlin, the Südkreuz train station face recognition pilot project failed to lead to the implementation of the system anywhere in Germany, thanks to loud opposition from activists – so loud that they managed to influence party positions and, ultimately, the government's political agenda.
Greek activists from Homo Digitalis showed that no real traveler participated in the Greek pilot trials of a system called ‘iBorderCtrl’, an EU-funded project that aimed to use ADM to patrol borders, thus revealing that the capabilities of many such systems are frequently oversold. Meanwhile, in Denmark, a profiling system for the early detection of risks associated with vulnerable families and children (the so-called “Gladsaxe model”) was put on hold thanks to the work of academics, journalists, and the Data Protection Authority (DPA).

DPAs themselves played an important role in other countries too. In France, the national privacy authority ruled that both a sound surveillance project and one for face recognition in high schools were illegal. In Portugal, the DPA refused to approve the deployment of video surveillance systems by the police in the municipalities of Leiria and Portimão as it was deemed disproportionate and would have amounted to “large-scale systematic monitoring and tracking of people and their habits and behavior, as well as identifying people from data relating to physical characteristics”. And, in the Netherlands, the Dutch DPA asked for more transparency in predictive algorithms used by government agencies.

Lastly, some countries have referred to an ombudsperson for advice. In Denmark, this advice helped to develop strategies and ethical guidelines for the use of ADM systems in the public sector. In Finland, the deputy parliamentary ombudsperson considered automated tax assessments unlawful.

And yet, given the continued deployment of such systems throughout Europe, one is left wondering: is this level of oversight enough? When the Polish ombudsperson questioned the legality of the smile detection system used in a bank (and mentioned above), the decision did not prevent a later pilot in the city of Sopot, nor did it stop several companies from showing an interest in adopting the system.


Lack of adequate auditing, enforcement, skills, and explanations

Activism is mostly a reactive endeavor. Most of the time, activists can only react if an ADM system is being trialed or if one has already been deployed. By the time citizens can organize a response, their rights may have been infringed upon unnecessarily. This can happen even with the protections that should be granted, in most cases, by EU and Member States’ law. This is why proactive measures to safeguard rights – before pilots and deployments take place – are so important.

And yet, even in countries where protective legislation is in place, enforcement is just not happening. In Spain, for example, “automated administrative action” is legally codified, mandating specific requirements in terms of quality control and supervision, together with audits of the information system and its source code. Spain also has a Freedom of Information law. Yet even with these laws, our researcher writes, public bodies only rarely share detailed information about the ADM systems they use. Similarly, France has had a law mandating algorithmic transparency since 2016, but again, to no avail.

Even bringing an algorithm to court, according to the specific provisions of an algorithmic transparency law, may not be enough to enforce and protect users’ rights. As the case of the Parcoursup algorithm to sort university applicants in France shows, exceptions can be carved out at will to shield an administration from accountability.

This is especially troubling when coupled with the endemic lack of skills and competences around ADM systems in the public sector lamented by many researchers. How could public officials explain or provide transparency of any kind around systems they don’t understand?

Recently, some countries have tried to address this issue. Estonia, for example, set up a competence center dedicated to ADM systems to better explore how they could be used to develop public services and, more specifically, to inform the operations of the Ministry of Economic Affairs and Communications and the State Chancellery in the development of e-government. Switzerland also called for the creation of a “competence network” within the broader framing of the “Digital Switzerland” national strategy.
And yet, the lack of digital literacy is a well-known issue affecting a large proportion of the population in several European countries. Besides, it is tough to call for the enforcement of rights you don’t know you have. Protests in the UK and elsewhere, together with high profile scandals based on ADM systems, have certainly raised awareness of both the risks and opportunities of automating society. But while on the rise, this awareness is still in its early stages in many countries.

The results from our research are clear: while ADM systems already affect all sorts of activities and judgments, they are still mainly deployed without any meaningful democratic debate. Also, it is the norm, rather than the exception, that enforcement and oversight mechanisms – if they even exist – lag behind deployment.

Even the purpose of these systems is not commonly justified or explained to the affected populations, not to mention the benefits those populations are supposed to gain. Think of the “AuroraAI” proactive service in Finland: it is supposed to automatically identify “life events”, as our Finnish researchers report, and, in the minds of its proponents, it should work as “a nanny” that helps citizens meet particular public service needs that may arise in conjunction with certain life circumstances, e.g., moving to a new place, changing family relations, etc. “Nudging” could be at work here, our researchers write, meaning that instead of empowering individuals, the system might end up doing just the opposite, suggesting certain decisions or limiting an individual’s options through its own design and architecture.

It is then all the more important to know what it is that is being “optimized” in terms of public services: “is service usage maximized, are costs minimized, or is citizen well-being improved?”, ask the researchers. “What set of criteria are these decisions based on and who chooses them?” The mere fact that we don’t have an answer to these fundamental questions speaks volumes about the degree of participation and transparency that is allowed, even for such a potentially invasive ADM system.

The techno-solutionist trap

There is an overarching ideological justification for all this. It is called “technological solutionism”, and it still severely affects the way in which many of the ADM systems we studied are developed. Even though this ideology has long been denounced as flawed – it conceives of every social problem as a “bug” in need of a technological “fix” – its rhetoric is still widely adopted, both in the media and in policy circles, to justify the uncritical adoption of automated technologies in public life.

When touted as “solutions”, ADM systems immediately veer into the territory described in Arthur C. Clarke’s Third Law: magic. And it is difficult, if not impossible, to regulate magic, and even more so to provide transparency and explanations around it. One can see the hand reaching inside the hat, and a bunny appears as a result, but the process is and should remain a “black box”.

Many researchers involved in the ‘Automating Society’ project denounced this as the fundamental flaw in the reasoning behind many of the ADM systems they describe. It also means, as shown in the chapter on Germany, that most critiques of such systems are framed as an all-out rejection of “innovation”, portraying digital rights advocates as “neo-Luddites”. This not only ignores the historical reality of the Luddite movement, which was concerned with labor politics rather than technology per se, but also, and more fundamentally, threatens the effectiveness of any proposed oversight and enforcement mechanisms.

At a time when the “AI” industry is witnessing the emergence of a “lively” lobbying sector, most notably in the UK, this might result in “ethics-washing” guidelines and other policy responses that are ineffective and structurally inadequate to address the human rights implications of ADM systems. This view ultimately amounts to the assumption that we humans should adapt to ADM systems, rather than ADM systems being adapted to democratic societies.

To counter this narrative, we should not refrain from asking foundational questions: can ADM systems be compatible with democracy, and can they be deployed for the benefit of society at large, not just parts of it? It might be the case, for example, that certain human activities – e.g., those concerning social welfare – should not be subject to automation, or that certain technologies – namely, live face recognition in public spaces – should not be promoted in an endless quest for “AI leadership”, but banned altogether instead.

Even more importantly, we should reject any ideological framing that prevents us from posing such questions. On the contrary: what we need to see now is actual policies changing – in order to allow greater scrutiny of these systems. In the following section we list the key demands that result from our findings. We hope that they will be widely discussed, and ultimately implemented.
Only through an informed, inclusive, and evidence-based democratic debate can we find the right balance between the benefits that ADM systems can – and do – provide in terms of speed, efficiency, fairness, better prevention, and access to public services, and the challenges they pose to the rights of us all.

Author

Fabio Chiusi

Fabio Chiusi works at AlgorithmWatch as the co-editor and project manager for the 2020 edition of the Automating Society report. After a decade in tech reporting, he started as a consultant and assistant researcher in data and politics (Tactical Tech) and AI in journalism (Polis LSE). He coordinated the “Persuasori Social” report on the regulation of political campaigns on social media for the PuntoZero Project, and he worked as a tech-policy staffer within the Chamber of Deputies of the Italian Parliament during the current legislature. Fabio is a fellow at the Nexa Center for Internet & Society in Turin and an adjunct professor at the University of San Marino, where he teaches journalism and new media as well as publishing and digital media. He is the author of several essays on technology and society, the latest being “Io non sono qui. Visioni e inquietudini da un futuro presente” (DeA Planeta, 2018), which is currently being translated into Polish and Chinese. He also writes as a tech-policy reporter for the collective blog ValigiaBlu.