Research
United Kingdom
Contextualization
Last year, the United Kingdom was notable for its wide range of policy and research initiatives looking at the topic of automated decision-making (ADM) across government and civil society. These fell broadly into two categories: those driven by the hype around Artificial Intelligence (AI), which sought mainly to promote the supposed economic potential of the technology, with ethics as a secondary consideration; and critical initiatives sounding the alarm about the possible adverse impacts of ADM, especially on minority groups.
Institutional responses to ADM generally remain in the initial stages of research and consultation. However, a number of themes are already emerging, both in the identification of risks associated with ADM and in the broad approach to how to moderate the impacts of this technology. In particular, the possibility of discriminatory bias has been raised in relation to policing, public services, and other areas of civic life. The need for explicability and accountability in automated decision-making has been raised in a number of reports. So too has the current lack of frameworks and guidelines for organizations to follow in their evaluation of AI. This is a crucial time for the development of Britain’s regulatory approach to ADM, with several official organizations collecting their final evidence before launching more concrete policy recommendations.
Public interest and controversy around several specific ADM implementations have noticeably increased. This is particularly so in the case of face recognition. Attentive journalists have investigated and drawn attention to the roll-out of this technology across high-profile public spaces in London, which would otherwise be happening silently and without oversight. The civil society response has been strong enough to halt the deployment of face recognition in the case of the King's Cross estate. The spectacular fiasco, in August 2020, of the algorithm deployed by the Office of Qualifications and Examinations Regulation (Ofqual) to grade A-level and GCSE students also received mainstream attention, prompting protests both in Parliament Square and outside the Department for Education and leading the government to backtrack and resort to human judgment. The protestors' motto? "Fuck the algorithm".
Brexit is however the all-consuming political question, cutting across all kinds of issues, and touching on ADM in a number of ways. EU citizens living in the UK have been told to register their residency in the country through a smartphone app powered by automated face recognition and automated data matching across government departments. Meanwhile, the debate on how to facilitate the transit of goods and people across the border between Northern Ireland and the Republic of Ireland in the event of leaving the European Single Market has raised the question of whether automated decision-making could be used (Edgington, 2019) to create a virtual border.
However, there remains little public awareness of automated decision-making as an issue in itself. While there is growing discourse around 'Artificial Intelligence', including critical coverage in the popular technology press, the framing of AI as an issue does not always map onto the questions that need to be asked of ADM systems. Some of the policy responses have identified the lack of a public register of automated decision-making systems as a problem; such a register would help facilitate public understanding.
A catalog of ADM cases
Brexit: Settled status applications from EU citizens
As part of preparations for Brexit, the UK government launched a scheme to establish the residency rights of the estimated 3.4 million EU citizens living in the country.
Until Brexit, EU nationals have been free to enter the UK and to work and live in the country indefinitely, without any need to apply for a visa or register as foreigners.
Although, at the time of writing, the terms of Britain's exit from the EU are still up in the air, the government's policy has been that EU citizens who enter the UK before Brexit must apply for status as "settled" or "pre-settled" residents in order to have the right to stay. Those with at least five years of residency are supposed to receive "settled" status, while others will be given "pre-settled" status and then required to apply for "settled" status when they reach the five-year threshold.
Given that there was no system to record when EU nationals entered the UK or became resident in the country, the process of establishing that someone came to the UK before Brexit is reliant on a combination of records from various government agencies, which may differ from person to person.
The government implemented the registration system in the form of a smartphone app. No doubt aiming to streamline the process as much as possible and control the costs of this huge bureaucratic exercise, the government has employed a number of elements of automated decision-making.
In October 2019, the Guardian reported on some of the challenges and problems of the scheme (Gentleman, 2019).
It mentioned how, after collecting the user's details, the app attempted to confirm their history of residency by "making instant checks with HMRC" (Her Majesty's Revenue and Customs, the UK tax office). It described how "the app checks identity through facial matching in the same way travelers are checked in airports at e-passport gates." The newspaper article also noted that "although much of the system is automated, every case is checked by a human before a decision is made."
After a deadline, which may arrive in December 2020 or June 2021, the Guardian pointed out, “those who have failed to apply will gradually be transformed from legal residents into undocumented migrants […] they will gradually discover that normal life has become impossible.”
Swee Leng Harris, a law expert, researched the automated aspects of the settled status app for an article to be published in the journal Data & Policy (Harris, forthcoming).
Harris wrote that where the system found only a “partial match” with tax or welfare benefit records, the applicant would be granted pre-settled status as a default unless they challenged the decision.
According to government statistics analyzed for the article, although only a small number of appeals had been processed by the time the figures were provided (253 cases as of 31 May 2019), around 91% had been decided in favor of the applicant.
Harris argues this raises the question of how many people were given pre-settled status when they had, in reality, fulfilled the requirements for settled status but did not challenge the decision.
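The decision rule Harris describes can be sketched in a few lines of code. The sketch below is purely illustrative: the field names, the way a "partial match" is represented, and the fallback to manual review are assumptions made for illustration, not the Home Office's actual implementation.

```python
# Illustrative sketch only: field names, thresholds, and structure are hypothetical
# and do not reflect the Home Office's actual system. It encodes the rule described
# above: records evidencing at least five years of residency support settled status,
# while a partial match defaults to pre-settled status unless the applicant
# challenges the outcome by supplying further evidence.

from dataclasses import dataclass


@dataclass
class Applicant:
    years_of_residency_evidenced: float   # years supported by automated record checks
    challenged_with_extra_evidence: bool  # applicant supplied documents manually


def provisional_status(applicant: Applicant) -> str:
    """Return the default status suggested by the automated checks."""
    if applicant.years_of_residency_evidenced >= 5:
        return "settled"
    if applicant.challenged_with_extra_evidence:
        return "manual review"  # a caseworker weighs the additional documents
    # Partial match: the default outcome unless the applicant contests it
    return "pre-settled"


print(provisional_status(Applicant(6.0, False)))  # settled
print(provisional_status(Applicant(3.5, False)))  # pre-settled: the default Harris questions
```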
“Racist” algorithm for visa applicants suspended before legal challenge
According to the campaigners who brought the legal challenge, the algorithm "entrenched racism and bias into the visa system", as it "suffered from 'feedback loop' problems known to plague many such automated systems – where past bias and discrimination, fed into a computer program, reinforce future bias and discrimination". This meant that the system "was prone to fast-tracking applicants from predominantly white countries and sorting people from other nations into a longer and more onerous review process", as Motherboard put it.
As a result, the Home Office said it would redesign the ADM system, this time explicitly "including issues around unconscious bias and the use of nationality, generally."
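The "feedback loop" the campaigners describe can be illustrated with a deliberately crude simulation. Everything in the sketch below is invented (the countries, rates, and scrutiny effect); it is not the Home Office streaming tool, only a toy model of how a score trained on past refusals can ratchet up scrutiny, and therefore refusals, for the same groups.

```python
# Highly simplified illustration of a "feedback loop": if historical refusal rates
# drive the risk score, and a higher score leads to more scrutiny and therefore more
# refusals, the next round of data makes the same nationalities look even riskier.
# All numbers and names are invented.

historical_refusal_rate = {"Country A": 0.05, "Country B": 0.30}


def risk_score(nationality: str) -> float:
    # Past outcomes are the only input, so past bias is carried forward unchanged
    return historical_refusal_rate[nationality]


def simulate_round(refusal_rates: dict, scrutiny_effect: float = 0.05) -> dict:
    """One cycle: applicants above a risk threshold get extra scrutiny,
    which (all else being equal) pushes their refusal rate up further."""
    updated = {}
    for country, rate in refusal_rates.items():
        extra = scrutiny_effect if rate > 0.2 else 0.0  # the "high risk" stream
        updated[country] = min(1.0, rate + extra)
    return updated


rates = dict(historical_refusal_rate)
for year in range(3):
    rates = simulate_round(rates)
    print(year, rates)  # Country B's rate ratchets upward; Country A's does not
```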
Welfare benefit applications
Fraud in the welfare system is a topic beloved of the country’s popular press. The woman who falsely claimed the man she lived with was not her partner (Mutch, 2019) in order to qualify for benefits worth £30,000 and the Brit who claimed £90,000 from various funds despite living in Spain (Mahmood, 2019) are just two examples of “benefit cheats” who have made the newspapers recently.
This is part of the reason the government is keen to be seen to detect fraudulent applications as efficiently as possible. To that end, people who need to claim benefits because their income is not enough to pay the costs of housing are increasingly subject to automated decision-making, in another example detailed by Swee Leng Harris in a forthcoming article (ibid.) in Data & Policy.
The system is called Risk Based Verification (RBV), Harris writes, and in some parts of the country uses a propensity model in order to assign claims to a risk group, on which basis they may be subject to more stringent manual checks, such as documentation requirements or home visits by officials.
Harris points out that there are no requirements for local government offices, which administer the benefits, to monitor whether their algorithms are disproportionately impacting people with protected characteristics (such as belonging to a minority ethnic group).
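A hedged sketch may help make the two points above concrete: how a propensity score can route claims into risk groups that trigger more stringent checks, and what the group-level monitoring that Harris finds missing might minimally look like. The claim features, weights, and thresholds below are invented and do not describe any council's actual RBV system.

```python
# Illustrative sketch only: the claim features, weights, and thresholds are invented
# and do not describe any council's actual Risk Based Verification system.

import statistics


def propensity_score(claim: dict) -> float:
    # Hypothetical weighted sum standing in for a trained propensity model
    return (0.4 * claim["prior_flags"]
            + 0.3 * claim["income_volatility"]
            + 0.3 * claim["address_changes"])


def risk_group(score: float) -> str:
    """Map a score to the risk group that determines how stringently a claim is checked."""
    if score >= 0.7:
        return "high"    # e.g. extra documentation requirements or home visits
    if score >= 0.4:
        return "medium"
    return "low"


def high_risk_rate(claims: list, group: str) -> float:
    """Share of claims from one demographic group routed to the 'high' risk group --
    the kind of disparity figure that councils are not currently required to monitor."""
    subset = [c for c in claims if c["demographic_group"] == group]
    return statistics.mean(
        1.0 if risk_group(propensity_score(c)) == "high" else 0.0 for c in subset
    )


claims = [
    {"prior_flags": 1.0, "income_volatility": 0.8, "address_changes": 0.5, "demographic_group": "group_a"},
    {"prior_flags": 0.0, "income_volatility": 0.2, "address_changes": 0.1, "demographic_group": "group_b"},
]
print(high_risk_rate(claims, "group_a"), high_risk_rate(claims, "group_b"))
```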
An investigation in the Guardian identified two companies that it said supply RBV systems to about 140 local councils in the UK. One is TransUnion, a US credit-rating business. The other is Xantura, a British firm that focuses on providing analytics systems to local authorities.
The Guardian reported that several councils had abandoned RBV systems after reviewing their performance (Marsh, 2019). North Tyneside, for example, found that the TransUnion system did not provide an explanation for flagging welfare applications as ‘high-risk’, and additional checks were not revealing any reason to reject the claims. The warnings were simply leading to valid claims being delayed. By contrast, the review (NTC, 2019) said that access to government tax data, rather than RBV, had helped speed up verification of applications.
North Tyneside Council had implemented RBV on advice issued by the Department for Work and Pensions (DWP, 2011), the central government ministry responsible for overseeing welfare benefits. The DWP had already started using the system to evaluate applications for unemployment benefit, which is administered on a nationwide basis.
The council’s own evaluation of RBV (NTC, 2015) recorded that “In researching the use of RBV, there have been no issues raised with regard to human rights,” suggesting that the implementation of this automated decision-making technology spread under the radar of public scrutiny. It did, however, identify the “possibility that people with certain protected characteristics may be under or over-represented in any of the risk groups,” and promised to monitor whether the system ended up disadvantaging particular groups.
The evaluation also noted that the cost of the “software and the manpower” required for RBV would be borne by Cofely GDF Suez, the private-sector outsourcing firm (now known as Engie) which had been given the contract to process welfare applications in North Tyneside. This raises the question of whether the company also stood to gain from the expected cost savings.
The grading algorithm fiasco
With schools closed since March due to the COVID-19 outbreak, A-level and GCSE students in England, Wales, and Northern Ireland could not take their exams. In response, the government decided to entrust their marks to an algorithm.
The mathematical model was supposed to put the grades and rankings estimated by teachers into historical perspective, taking into account the grades previously achieved by students at each school and the distribution of grades across the country. This way, the government argued, grades would be consistent with past results.
The opposite, however, ensued. “When A-level grades were announced in England, Wales and Northern Ireland on 13 August, nearly 40% were lower than teachers’ assessments”, wrote the BBC, reporting on official figures. What the search for historical consistency actually produced was downgraded marks for bright students from historically underperforming schools.
This also produced discriminatory results, largely benefitting private schools at the expense of public education. As the BBC clearly puts it: “Private schools are usually selective – and better-funded – and in most years will perform well in terms of exam results. An algorithm based on past performance will put students from these schools at an advantage compared with their state-educated equivalents”.
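A simplified sketch can show why this mechanism penalizes strong students at historically weak schools. The code below is not Ofqual's published model (which included further adjustments, for example for small cohorts); it only illustrates the core idea of allocating grades to a teacher-provided ranking according to the school's historical grade distribution.

```python
# Simplified illustration of the core mechanism described above, not Ofqual's
# published model. Students are ranked by their teachers, then grades are allocated
# according to the school's historical grade distribution, so a strong student at a
# historically weak school is pulled down towards that school's past results.

def allocate_grades(ranked_students: list, historical_distribution: dict) -> dict:
    """ranked_students: best first; historical_distribution: grade -> share of past cohorts."""
    n = len(ranked_students)
    grades, allocated = {}, 0
    for grade, share in historical_distribution.items():  # ordered best grade first
        count = round(share * n)
        for student in ranked_students[allocated:allocated + count]:
            grades[student] = grade
        allocated += count
    for student in ranked_students[allocated:]:  # any rounding remainder gets the lowest grade
        grades[student] = list(historical_distribution)[-1]
    return grades


# A school that historically awarded few top grades cannot award many this year,
# however strong the teachers judge the current cohort to be.
history = {"A": 0.10, "B": 0.30, "C": 0.40, "D": 0.20}
print(allocate_grades(["Asha", "Ben", "Carl", "Dee", "Eli", "Fay", "Gus", "Hana", "Ira", "Jo"], history))
```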
The UK's data protection authority, the ICO, immediately issued a statement that is highly relevant for the use of ADM in this context. "The GDPR places strict restrictions on organisations making solely automated decisions that have a legal or similarly significant effect on individuals", the statement noted, before adding that "the law also requires the processing to be fair, even where decisions are not automated".
The case rapidly reached mainstream attention. As a consequence, protests erupted in Parliament Square and outside the Department for Education, while the government – after similar moves in Scotland, Wales, and Northern Ireland – quickly reversed course, scrapping the algorithm and resorting to the teachers' judgment instead. The chief civil servant at the Department for Education, Jonathan Slater, was also sacked as a result.
Prime Minister Boris Johnson eventually blamed the fiasco on the "mutant algorithm", but many commentators saw this as a debacle of policy rather than technology: one that should be blamed on human decision-makers rather than on "algorithms", and one that should remind us that algorithms are never neutral. The Ada Lovelace Institute also warned that "the failure of the A-level algorithm highlights the need for a more transparent, accountable and inclusive process in the deployment of algorithms, to earn back public trust".
An even more important consequence is that, after the Ofqual algorithm fiasco, councils started to "quietly" scrap "the use of computer algorithms in helping to make decisions on benefit claims and other welfare issues", the Guardian revealed. Some 20 of them "stopped using an algorithm to flag claims as 'high-risk' for potential welfare fraud", the newspaper wrote.
Challenging face recognition
In last year's report, we noted that police forces had begun trialing automatic face recognition systems to decide when people should be apprehended as suspects. This development was challenged by the civil society organization Big Brother Watch, which monitored the use of face recognition at Notting Hill Carnival in London and observed a high number of false positives.
Controversies around face recognition have increased in media profile significantly in the last year. In particular, there was an outcry over the use of face recognition in the King’s Cross area of central London.
It is the site of one of London's main railway stations, but the formerly run-down district has in recent years been redeveloped and now also features an art school, corporate offices, shops, and restaurants, and – crucially – what appear to the casual visitor to be public squares, walkways, and piazzas.
In August, the Financial Times revealed in an exclusive investigation (Murgia, 2019) that Argent, a property developer, was using the technology to track tens of thousands of people as they went about their business across the 67-acre King's Cross estate, which it owns.
The story was picked up by the BBC (Kleinman, 2019) and The Guardian (Sabbagh, 2019) and sparked a public debate. Within a matter of weeks, the pressure forced the company to switch off the system and declare that it had no further plans to use any form of face recognition technology in the future.
In the course of the scandal, it emerged that the Metropolitan Police had provided a database of images of persons of interest (Murgia, 2019b) for Argent to load into the system.
The Information Commissioner, who is responsible for enforcing data protection laws, has launched an investigation (Rawlinson, 2019) into the King’s Cross system.
Other examples of face recognition systems also made the news. In a memorable TV sequence, a crew from BBC Click captured a man being fined £90 (BBC, 2019) for covering his face as he walked past a police unit trialing a face recognition camera.
London's Metropolitan Police reported their first arrest using face recognition in February. According to newspaper reports (Express, 2020), a woman wanted for allegedly assaulting police officers after an altercation at a London shopping mall in January was identified by the system as she walked down Oxford Street three months later.
The arrest came just days after the head of the force, Cressida Dick, had publicly made the argument for algorithmic policing at a conference (Dick, 2020).
“If, as seems likely, algorithms can assist in identifying patterns of behaviour by those under authorised surveillance, that would otherwise have been missed, patterns that indicate they are radicalising others or are likely to mount a terrorist attack; if an algorithm can help identify in our criminal systems material a potential serial rapist or killer that we could not have found by human endeavour alone; if a machine can improve our ability to disclose fairly then I think almost all citizens would want us to use it,” she was reported by ITV News as saying (ITV, 2020).
A legal challenge to police face recognition backed by Liberty, the civil rights group, was rejected (Bowcott, 2019) by the High Court in 2019. The group is appealing the decision (Fouzder, 2019).
Researchers from the University of Essex, meanwhile, reviewed police trials of face recognition (Fussey, 2019) in the UK and found that, out of 42 matches made, only eight could be confidently confirmed as accurate by human review (a success rate of roughly 19%).
Policy, oversight and debate
Policy
Parliament
The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) is a cross-party group of MPs with a special interest in AI, founded in 2017. It is sponsored by a number of large companies including the 'Big Four' accountancy firms – Deloitte, Ernst & Young, KPMG and PricewaterhouseCoopers – plus Microsoft and Oracle. Its latest program, published in summer 2019, is called "Embracing the AI Revolution" (APPG AI, 2019). The introduction explains: "We spent the first two years understanding key economic and socio-ethical implications [of AI] and we are now ready to take action." The four pillars of the group's work for 2019-2020 are: Education, Enterprise Adoption of AI, Citizen Participation, and Data Governance.
Its planned outputs include an 'AI 101' course and lesson plan to be used in schools; awards to be given to companies leading in the adoption of AI; and a public survey on how AI should be regulated. Its program does not specifically address the ethical considerations around ADM, although it refers to the need to "ensure all AI developers and users become ethical AI citizens".
Office for Artificial Intelligence
The Office for AI is now up and running. In June 2019, it published guidance in conjunction with the Government Digital Service (GDS) and the Alan Turing Institute (ATI) called “A guide to using artificial intelligence in the public sector” (GDS, 2019). This guidance is supposed to help civil servants decide whether AI could be useful and how to implement it ethically, fairly, and safely.
National Data Strategy
In July 2019, the government launched a call for evidence to build its National Data Strategy (DCMS, 2019). Its stated aim is to "support the UK to build a world-leading data economy". As part of the call for evidence, it highlighted the need to consider "fairness and ethics" in how data is used. One of its priorities is "To ensure that data is used in a way that people can trust". Following the election in December 2019, the government is expected to publish a draft strategy and run a second phase of evidence-gathering, in the form of a full public consultation on the draft.
Oversight
Information Commissioner’s Office
The Information Commissioner’s Office (ICO) is the UK’s data protection regulator, funded by the government and directly answerable to parliament. It oversees and enforces the proper use of personal data by the private and public sectors.
The ICO has issued detailed guidance (ICO, 2018) on how organizations should implement the requirements of the General Data Protection Regulation (GDPR) with regard to automated decision-making and profiling. It has also appointed a researcher working on AI and data protection, on a two-year fixed term, to develop a framework for auditing algorithms (ICO, 2019), a process that includes public consultation.
Centre for Data Ethics & Innovation (CDEI)
The CDEI is a body set up by the UK government to investigate and advise on the use of “data-driven technology”. It describes itself as an “independent expert committee”. Although the body has some independence from the government department that hosts it, critics have pointed out what they say is the “industry-friendly” (Orlowski, 2018) make-up of its board.
Nonetheless, the body has published several reports advocating more regulation of algorithms since its establishment in 2018.
In February 2020, CDEI published a report on online targeting – the personalization of content and advertising shown to web users using machine learning algorithms. While the report recognized the benefits of this technology, helping people navigate an overwhelming volume of information and forming a key part of online business models, it said targeting systems too often operate “without sufficient transparency and accountability.”
Online targeting translated into "significant social and political power", the report said – power that could influence people's perception of the world, their actions and beliefs, and their ability to express themselves.
The report stopped short of recommending specific restrictions on online targeting. Rather, it advocated “regulatory oversight” of the situation under a new government watchdog. The watchdog would draw up a code of practice requiring platforms to “assess and explain the impacts of their systems”. It would also have legal powers to request “secure access” to platforms’ data in order to audit their practices.
CDEI’s interim report on bias in ADM (CDEI, 2019) was published in summer 2019. This noted the importance of collecting data on personal characteristics such as gender and ethnicity so that algorithms can be tested for bias. However, it acknowledged “some organisations do not collect diversity information at all, due to nervousness of a perception that this data might be used in a biased way”.
It also identified a lack of knowledge of “the full range of tools and approaches available (current and potential)” to combat bias.
Finally, it discussed the challenges of fixing algorithmic systems once bias has been identified: “Humans are often trusted to make [value judgments and trade-offs between competing values] without having to explicitly state how much weight they have put on different considerations. Algorithms are different. They are programmed to make trade-offs according to unambiguous rules. This presents new challenges.”
A final report was due to follow in March 2020, but was postponed due to the COVID-19 crisis and is expected to be published by the end of 2020.
The Royal United Services Institute (RUSI), a respected independent defense and security think tank, was commissioned by CDEI to produce a report on algorithmic bias in policing (RUSI, 2019) which was published in September 2019. It confirmed the potential for “discrimination on the grounds of protected characteristics; real or apparent skewing of the decision-making process; and outcomes and processes which are systematically less fair to individuals within a particular group” in the area of policing. It emphasized that careful consideration of the “wider operational, organisational and legal context, as well as the overall decision-making process” informed by ADM was necessary in addition to scrutinizing the systems themselves. And it highlighted “a lack of organisational guidelines or clear processes for scrutiny, regulation and enforcement for police use of data analytics.”
Police forces will need to consider how algorithmic bias may affect their decisions to police certain areas more heavily, the report found. It warned of the risk of over-reliance on automation and said individuals could bring claims on the basis of age or gender discrimination.
CDEI is also working with the government’s Race Disparity Unit to investigate the risk of people being discriminated against according to their ethnicity (Smith, 2019) in decisions made in the criminal justice system.
Committee on Standards in Public Life
This government committee was established in 1994 after a series of corruption scandals. Its usual remit is to oversee measures to ensure politicians and civil servants adhere to the Nolan Principles, a seven-point guide to ethical conduct in public office that the committee drew up when it was created: selflessness, integrity, objectivity, accountability, openness, honesty, and leadership.
The committee has announced an inquiry into “technologically-assisted decision making”. In a blog post (Evans, 2019), the committee chair explained: “We want to understand the implications of AI for the Nolan principles and examine if government policy is up to the task of upholding standards as AI is rolled out across our public services.”
Academia and Civil Society
Data Justice Lab
The Data Justice Lab is a research lab at Cardiff University's School of Journalism, Media, and Culture. It seeks to examine the relationship between what it calls 'datafication' – the collection and processing of massive amounts of data for decision-making and governance across more and more areas of social life – and social justice. Its major research project DATAJUSTICE has published working papers on how to evaluate ADM systems (Sánchez-Monedero & Dencik, 2018); the difference between 'fairness' and 'social justice' (Jansen, 2019) as aims for ADM and AI in policing; data-driven policing trends (Jansen, 2018) across Europe; and the datafication of the workplace (Sánchez-Monedero & Dencik, 2019).
AI Council
This committee of people from the private sector, public sector, and academia met for the first time in September 2019. Minutes of the meeting (AI Council, 2019) suggest it did not discuss ADM specifically, although in discussing its future work a need for “positive news stories on AI compared with other kinds of messaging” was raised.
Key takeaways
Narrative around data rights
The growing discourse around data rights in the UK is a welcome development. However, centering data as the referent object to which rights need to be attached sidelines, at best, crucial questions about how data is used in automated decision-making.
The concept that data is a valuable commodity over which individuals have personal rights of ownership and control may provide some limits on how personal data is circulated without consent.
However, automated decision-making raises not only the question of what data is accessible to whom, but also the very serious implications of how, and to what end, that data is processed. The notion of data rights is of limited use in regulating this aspect.
Several policy initiatives have focused on the potential for discrimination in ADM systems. This is a much more solid basis on which to assess ADM, and the robustness of the tools that emerge from these initiatives will have a big impact on the kind of regulatory environment for ADM in the UK.
Emerging industry lobby
The emergence of a self-conscious artificial intelligence industry, with its attendant political lobbying functions, can be seen clearly in the structure and program of the All-Party Parliamentary Group on AI. There are great hopes for the economic potential of automation, which are encapsulated in the hype around AI (even if the narrow technical definition of the latter term is not strictly adhered to). This presents the risk that automated systems will be promoted and implemented at ever-increasing speed on the basis of possibly over-optimistic economic arguments, without proper consideration of their social impacts.
Civil society responses can take advantage of pop culture ideas about AI in order to challenge this, although they must take care not to attack the straw man of the AI singularity, focusing instead on the real and present-day concerns that will affect the lives of those who are subject to automated decision-making in the immediate future.
Brexit and opportunities for good ADM policy
The conclusion of several consultations in the coming year presents significant opportunities for policymakers to move towards taking action on ADM. Brexit has all but paralyzed policy implementation across all areas, but at the same time it has raised multiple new issues of ADM. While the situation is unpredictable, debate on the nature of Britain’s relationship with the EU and enormous policy decisions in this connection are likely to continue for several years. Civil society actors have an important role to play in highlighting the reality and risks of automated decision-making as the UK moves through this transformative period.