Public administration bodies at the local, regional, and national levels in Spain have been using ADM for years now, and while entirely autonomous ADM systems seem to be rare – according to publicly available information – protocols in which ADM forms part of the overall decision process seem to be fairly common.
“Automated administrative actions” are regulated by law, and before implementing any action, public bodies are supposed to establish which competent authority will define its function and, if necessary, take responsibility for auditing the source code. Also, the transparency law mandates public bodies to be proactively transparent and to generally grant citizens access to any information held by the public administration.
However, in practice, public bodies proactively release very little information on the ADM systems they use, and they are also reluctant to do so when requested by citizens or organizations. This could all change depending on the outcome of an ongoing court case, in which a non-profit foundation is asking the government to publish the source code of an ADM system in the same way law texts need to be public (we discuss this case below in this chapter).
In recent years, universities have been producing academic research on the use of ADM by the public administration, which some watchdog organizations are also trying to monitor. But such ADM systems rarely feature in the mainstream media and political discourse, and there seems to be a general lack of awareness among the public of the use of ADM by public bodies and its consequences for public and individual life.
Spanish companies, big international corporations, Spanish start-ups, and universities have all been developing and providing the public administration with ADM systems.
As reported in last year’s chapter and as updated in this one, ADM is most common in Spain in the surveillance and predictive policing fields, the health sector, and the analysis of social media content; and it’s also been increasing its presence in the delivery of financial and social aid, and the automatization of different administrative procedures.
The fact that, at the time of writing [11 May 2020], Spain hasn’t had a stable central government for more than four years (since late 2015), and that the public administration is quite decentralized (with regions and also towns and cities having a great deal of autonomy to implement their practices), means that there can be normative differences between regions, and that towns and cities have had an almost free rein to implement ADM processes at the local level, especially within the frame of the so-called Smart City platforms.
A catalog of ADM cases
Distribution of financial aid
In November 2017, the office of the Secretary of State for Energy released some software, known as BOSCO, to companies providing electricity. The aim of BOSCO was to determine whether people were entitled to financial aid to help them with their electricity bills. The reasoning behind the ADM was twofold. Firstly, it would make the process much easier for aid applicants (although this didn’t seem to be the case, judging by the large number of complaints the system received), and secondly, it would also make the process easier and more efficient for the public utility companies.
After receiving many reports that the software was not functioning properly, Civio, a Madrid-based non-profit investigative newsroom and citizen lobby, discovered that BOSCO was systematically denying aid to eligible applicants. Civio asked the government for the BOSCO source code to identify why those errors were happening. The request passed through three different ministries before ending up at the Committee for Transparency and Good Governance, which refused to share the code saying that it would violate copyright regulations (even though the software had been developed by the public administration itself).
In July 2019, Civio filed an administrative complaint arguing that the source code of any ADM system used by the public administration should be made public by default, in the same way that legal texts are made public (Civio, 2019). The case – which at the time of writing was ongoing – could end up at the Spanish Supreme Court, and could set a legal precedent.
Risk assessment in cases of domestic violence
Probably the one ADM system that has received the most public scrutiny in Spain is the VioGén protocol, which includes an algorithm that evaluates the risk that victims of domestic violence are going to be attacked again by their partners or ex-partners.
VioGén was launched in 2007 following a 2004 law on gender-based violence that called for an integrated system to protect women. Since then, whenever a woman files a complaint about domestic violence, a police officer must ask her a set of questions from a standardized form. An algorithm uses the answers to assess the risk that the woman will be attacked again, on a scale ranging from no risk observed through low, medium, and high to extreme risk. If, later on, the officer in charge of the case thinks a new assessment is needed, VioGén includes a second set of questions and a different form, which can be used to follow up on the case and which the algorithm uses to produce an updated assessment of the level of risk. The idea was that the VioGén protocol would help police officers all over Spain produce consistent and standardized evaluations of the risks associated with domestic violence, and that all reported cases would benefit from a more structured response by the authorities, including follow-up assessments when needed.
After the protocol has been followed, the officer can override the algorithm’s response and decide to give the case a higher level of risk. Each of the levels implies different compulsory protection measures, and only the “extreme” level of risk asks for permanent protection. However, according to official documents about VioGén, the police may decide to grant additional protection measures apart from those established by the protocol.
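The real questionnaire weights, thresholds, and code are not public, but the mechanism described above – scored answers mapped to a risk level, with an officer able to override only towards a higher level – can be sketched roughly as follows. All factor names, weights, and cut-offs below are invented for illustration and do not reflect VioGén's actual model.

```python
# Illustrative sketch only: VioGén's real indicators, weights, and
# thresholds are unpublished; everything below is a hypothetical stand-in.

RISK_LEVELS = ["no risk observed", "low", "medium", "high", "extreme"]

# Hypothetical indicator weights (the real form is not public).
WEIGHTS = {
    "previous_violence": 3,
    "escalation": 2,
    "threats": 2,
    "access_to_weapons": 3,
    "victim_isolation": 1,
}

# Hypothetical score cut-offs for each level above "no risk observed".
THRESHOLDS = [(9, "extreme"), (6, "high"), (4, "medium"), (1, "low")]

def assess_risk(answers):
    """Map yes/no questionnaire answers to a risk level."""
    score = sum(WEIGHTS[factor] for factor, yes in answers.items() if yes)
    for cutoff, level in THRESHOLDS:
        if score >= cutoff:
            return level
    return "no risk observed"

def final_level(algorithm_level, officer_level=None):
    """Officers may override the algorithm only towards a HIGHER risk level."""
    if officer_level is None:
        return algorithm_level
    rank = RISK_LEVELS.index
    if rank(officer_level) > rank(algorithm_level):
        return officer_level
    return algorithm_level
```

The one-directional override in `final_level` mirrors the protocol's rule that officers can raise, but not lower, the algorithm's assessment.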
The set of questions and the algorithms were developed in collaboration between police and academic experts, and they have been updated five times since their first implementation, as officers and experts on domestic violence have learned from their application.
There is a fair amount of public information available about the whole VioGén protocol, and a book, published in September 2018 by the Ministry of the Interior, provides a candid (by the usual standards of available information on ADM systems) and very informative account of the protocol’s history, design, and implementation (Spanish Ministry of the Interior, 2018). However, the book stops short of revealing the algorithm’s code or the inner workings used to calibrate and weight the different factors and their interrelations.
The sensitivity of the subject of domestic violence and the fact that a fair amount of information about the whole protocol (if not about the algorithm itself) is available have contributed to making VioGén quite visible in the mainstream media (Precedo, 2016). That’s especially the case when official figures about victims of domestic violence are released, and they include references to cases not deemed high or extreme risk by VioGén and which nevertheless ended up in the aggressor killing the woman (Álvarez, 2014).
However, a careful revision of the available figures shows that they are published inconsistently, aggregating the initial and the follow-up risk assessments of different cases (with some reports describing the number of assessments as the total number of individual cases). On some occasions, several years had gone by between an initial assessment of low risk and the moment the aggressor attacked again, this time killing the woman after having been convicted and served his prison sentence. All of this makes it hard to extract coherent conclusions about any possible correlations between the cases calculated to be of low risk and those in which the aggressor ended up killing the woman.
When discussing VioGén, most of the experts on sexual and gender-based violence quoted in the press complain that although police officers follow the protocol, they often haven’t been properly trained in how to deal with cases of domestic abuse (Precedo, 2016). The book which was published by the Ministry of the Interior quotes a 2014 study stating that officers who followed the protocol didn’t change the level of risk given by the algorithm in 95% of the cases. This would seem to support the argument that the code should be made public so that it can be properly audited.
Detecting fraud in public procurement
In October 2018, the regional Valencian parliament passed a law that included the use of ADM to detect possible cases of fraud in public procurement (Europa Press, 2018). The parliamentary majority in government at the time was made up of a coalition of progressive parties that came to power in 2015, after 20 years of rule by the conservative People’s Party, which had become mired in a series of corruption scandals regarding, among other things, public procurement.
The resulting system, known as Saler, cross-checks data from various Valencian public administration databases and the official gazette of the Commercial Registry. The purpose is to raise red flags when it finds suspicious behavior (for example, a high number of small contracts given to the same company). Saler then automatically sends the information to the corresponding authority: the Anti-fraud Agency, the Public Prosecutor’s Office or the Court of Audits.
The idea, yet to be proven, is that Saler will become more effective in raising relevant red flags as it’s fed more databases and as it learns by being used by public servants. Its creators would like to export it to other regional administrations in Spain, and to the national central administration too, but the fact that each region uses different databases and indexes makes this quite a complicated process for the time being.
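Saler's actual rules and data sources are not public, but the one red flag mentioned in the press coverage – an unusually high number of small contracts awarded to the same company, which can signal contract-splitting to stay under procurement thresholds – is easy to sketch. The monetary limit and count threshold below are assumed values, not the ones Saler uses.

```python
# Hedged illustration of one red-flag rule attributed to Saler in press
# reports: flag companies that receive many small contracts in one year.
# The limit and threshold below are assumptions for the sketch.

from collections import defaultdict

SMALL_CONTRACT_LIMIT = 15_000   # euros; hypothetical "minor contract" cap
MAX_SMALL_CONTRACTS = 3         # per company per year; hypothetical

def red_flags(contracts):
    """contracts: iterable of (company, year, amount) tuples.
    Returns (company, year) pairs that exceed the small-contract threshold."""
    counts = defaultdict(int)
    for company, year, amount in contracts:
        if amount < SMALL_CONTRACT_LIMIT:
            counts[(company, year)] += 1
    return sorted(key for key, n in counts.items() if n > MAX_SMALL_CONTRACTS)
```

In the real system, each flagged case would then be forwarded to the corresponding oversight authority rather than acted on automatically.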
The public launch of this ADM system by the Valencian authorities got a fair amount of press coverage by usual standards, probably because it dealt with corruption and because its original acronym was Satan (Cid, 2018).
Video surveillance and (quasi-) face recognition
In November 2019, Ifema, a state-owned consortium, started installing face recognition surveillance cameras in the big congress facilities it runs in Madrid (Peinado, 2019). Ifema specifically stated in the public procurement information from March 2019 that it was looking for “technical improvements in face recognition licenses”.
It should be noted that these cameras were still not in use by the time of the UN Climate Change Conference COP25, hosted at Ifema between 2 and 13 December 2019, according to a representative of the consortium quoted in the press (Peinado, 2019).
In late 2018, the city of Marbella, a tourist hub on the southern coast of Spain, started using a video surveillance system with, reportedly, the highest definition in Spain, developed by the American firm Avigilon (Pérez Colomé, 2019). Only the border between the Spanish enclave of Ceuta and Morocco and some football stadiums use similar cameras, according to press reports. Officially, the system doesn’t have face recognition features, but the software used by these cameras does offer “appearance search” and “face analytics”, which, according to Avigilon’s PR, allow the system to search for people based on their appearance by identifying a series of unique facial features, as well as clothes, age, gender, and hair color.
A company representative quoted in the press said their face recognition software wasn’t being used by public authorities in Spain, but he added that the company had indeed installed face recognition software in private surveillance systems in Spain (Pérez Colomé, 2019).
In both the Ifema and Marbella cases – in which the systems are justified as they are said to increase security while also increasing the efficiency of surveillance – it’s not clear how exactly the recognition software works and what kinds of checks and other measures may be in place to protect people’s biometric information and other personal data.
In 2016, the local police of Rivas-Vaciamadrid, a town of 86,000 people in the Madrid region, ran a pilot of the Pred-Crime software, developed by Spanish company EuroCop (Europa Press, 2015). Pred-Crime analyzes historical data to predict where and when different types of common misdemeanors and offenses, like traffic violations and robberies, are most likely to be committed.
Reportedly, the plan was to fully implement this software during 2016, but after testing it for nine months the municipality decided not to go on using it. “It needs to keep on being developed to become properly efficient”, a representative of the municipality is quoted as saying in a May 2019 press report (García, 2019).
EuroCop says on its website that it has dozens of municipalities from all over Spain among its customers, but it doesn’t specify whether those local authorities are using its predictive software or any of the other non-predictive tools the company also markets.
In another case of predictive policing, the Spanish police have reportedly been using software that analyzes the available data about the victim of a killing and the context of the crime, and then produces the likely profile of the killer. Between 2018 and 2019, agents of the office of the Secretary of State for Security have collaborated with the police by using such software in at least five investigations, according to a press report (Pérez Colomé, 2019b). No more details are known about this case.
According to the same press article, in mid-2019, the Environmental Ministry signed an agreement with the Autonomous University of Barcelona to develop software that can predict the most likely profile of pyromaniacs.
Automated analysis of social media content
In January 2018, the regional Government of Catalonia started a pilot project to try and measure the public impact of its STEMcat initiative, a plan aimed at promoting scientific and technological vocations among young people.
For a month and a half, the Catalan government used Citibeats, a text-analysis software that uses machine-learning algorithms, to collect and analyze around 12,000 tweets that spoke about the STEM disciplines in Catalonia. One of the insights the authorities said they had gained was that women were more responsive to messages about natural sciences than about technology, according to the Catalan government’s PR. The authorities then used that and other insights to “optimize their strategy and propose new initiatives” to make young people interested in the STEM disciplines. The project was part of SmartCAT, the Catalan government’s strategy to become a smart region (as noted in this chapter, regional governments in Spain have a high degree of autonomy to develop their policies). The SmartCAT director said the software had allowed them “to evaluate in a more objective way the impact of the (government’s) initiatives” to make people interested in science and technology.
Citibeats, developed by Social Coin, a Barcelona-based start-up, was also used in December 2017 by the Barcelona municipality to gather people’s attitudes towards public transport and mobility in the city by analyzing around 30,000 comments by more than 15,000 people. In a case study of this project, Citibeats spoke of “citizens as sensors”.
In the two cases described above, and while the authorities praise the software’s ability to gather and analyze thousands of online comments (something it would take much more time and money to do using traditional survey methods), it’s not clear how representative those samples are and how valid the conclusions might be; and there doesn’t seem to be any information on how those analyses then influenced public policy.
Since March 2019, the regional government of Navarra has also been using the Citibeats software to detect online hate speech by analyzing text published on Facebook, Twitter and Instagram.
As with other cases dealing with personal data, it’s not clear how the software works and what oversight mechanisms the public authority may have in place when using it.
Automated evaluation of offers for public procurement through e-tendering
The current law governing public procurement in Spain, which was passed in November 2017 to adhere to EU regulations, allows public authorities to run digital bidding processes using electronic devices that can sort the different offers “through automated evaluation methods”.
This kind of digital bidding can only be used when the “requisite specifications of the contract to be awarded can be established in a precise way” and the “public services that are its object are not of an intellectual type, as in engineering, consultancy and architecture”. Contracts to do with food quality can’t be awarded through digital bidding either.
According to the law, before using this method a public authority needs to make public which “electronic device” it is going to use in the bidding process.
Before running the digital bidding itself, the authority needs to carry out a complete evaluation of all the offers and then send a simultaneous electronic invitation to all the eligible bidders. When such an invitation is sent, the authority has to include “the mathematical formula” that will be used in the automated classification of the eligible offers.
Automated assistance in tax filing
Since July 2017, the Tax Authority in Spain has been using the IBM Watson software to provide automated assistance regarding a particular aspect of VAT filing which mostly affects big companies. In public communications about it, both IBM and the Tax Authority highlighted that the software can work 24/7 and, as such, was freeing public servants from having to deal with a large number of emails from people trying to do their VAT filing. According to their PR, between July 2017 and February 2018, the number of emails to civil servants about VAT issues decreased from 900 to 165 per week. And, reportedly, the automated assistant went from receiving around 200 questions per week when it was launched in July 2017 to around 2,000 by November 2017 (Computing, 2018).
Smart Cities: from smart waste to smart tourism
As cities aim to start using big data and automated processes to become “smart”, a series of ADM systems have been embedded at the municipal level in Spain. These systems usually support decision-making rather than working entirely autonomously.
One such example is the Smart Waste platform, which collects data from sensors installed in bins and trucks, and also from social media, surveys, the census, and satellite information. The combined data helps local authorities decide what services will be needed when and where.
Smart Waste was developed by The Circular Lab, the innovation center of Ecoembes, a non-profit organization charged with collecting plastic packaging, cans, cartons, paper, and cardboard packaging for recycling. Minsait, a division of Indra, a Spanish multinational transport, defense, and security technology consultancy company, also helped develop the platform.
During 2018, the Logroño municipality and the La Rioja and Cantabria regional governments first ran the platform as a pilot and today it is available to local and regional authorities all over Spain.
As is the norm in almost every case in Spain, there does not seem to be any available information on how the software works, what kind of output it produces, and what decisions or changes the different authorities have adopted due to its use.
In another example of the involvement of local governments with ADM systems, the Barcelona municipality and the Sagrada Familia basilica, one of the city’s tourist attractions, partnered with the private company Bismart to develop a mobile application, called Smart Destination. This application analyzes publicly available data about people (including social media posts and the number of people queuing to enter a tourist attraction), vehicles and traffic, the weather, bar and hotel occupancy, etc., and also the preferences given by the user to automatically generate personalized tourist plans and routes.
Once again, it is not known exactly how the system works and how it comes up with the output it gives to the public.
Policy, oversight and debate
The delayed Spanish National AI Plan and the National Strategy for AI
In November 2017, the Ministry of Industry and Trade presented a Group of Experts on AI and Big Data, formed by individuals from academia and the private sector (representing entities like BBVA, the second biggest bank in Spain; Telefónica, one of the largest phone operators and network providers in the world; and the Vodafone Institute, which the British company describes as its European think-tank). The group was tasked with writing a White Paper on AI, to be published in July 2018, which was expected to inform the elaboration of a code of ethics about the use of data in public administration. It was to be the first step towards a Spanish National AI Plan, but nothing seems to be publicly known about it as of the time of writing. (Full disclosure: when it was created, this group of experts included among its members Lorena Jaume-Palasí, co-founder of Algorithm Watch, who, at the time, was still involved with the organization.)
As a prelude to that plan, in March 2019, the Ministry of Science, Innovation and Universities published a Spanish AI National Strategy. It was released shortly before the April 2019 general elections, and it was criticized as vague and rushed by the specialized press (Merino, 2019). Indeed, the strategy reads like a mere overview of the possibilities and dangers of AI for the state and the public administration.
The document welcomes the use of AI systems by the public administration, which the strategy assumes will increase efficiency (for instance by advancing towards interoperability between public bodies and to generate “automated administrative procedures”) and save the state huge amounts of money (“billions of euros” alone in the health sector, the document says, referencing a study from June 2017 by the global consultancy company PricewaterhouseCoopers) (PwC, 2017). The strategy also says that AI-based threats to security require AI-based solutions, and it advocates for transparency in “algorithms and models” and wants to see an “honest use of the technology”, without describing any precise plans or norms to explain how it will advance in that direction.
On 18 November 2019, in the closing act of the II International Congress on AI, the acting Science Minister, Pedro Duque, said that he expected the Spanish AI National Plan to be ready in the following weeks (Fiter, 2019). However, at the time of writing there has not been any other news about it.
In general, Spain hasn’t had a stable government since 2015 – in fact, on 10 November 2019 Spain held its fourth general election in four years (El País, 2019). In addition, at the time of writing, the minority government formed on 13 January 2020 has been struggling to contain one of the worst COVID-19 outbreaks in the world. All of this has probably contributed to those delays and to the lack of more comprehensive and decisive political leadership in the development of strategies and plans concerning the use of AI systems by the public administration.
The law regulating public administration
In Spain, the use of ADM processes by the public administration falls under the law 40/2015. This text defines “automated administrative action” as “any act or action entirely performed through electronic means by a public administration body as part of an administrative procedure and with no direct intervention by any employee”. And the text also states that before any automated administrative action is taken it must be established which of the competent authorities are “to define the specifications, programming, maintenance, supervision and quality control and, if applicable, the audit of the information system and its source code”. It must also be known which body will be responsible in case of any legal challenge to the automated action.
Transparency and access to information
Besides that, the 2013 law regulating citizens’ access to public information mandates that public bodies have to be proactively transparent and generally grant citizens access to any content and documents held by the public administration.
Yet, in practice, public bodies rarely publish detailed information on the ADM systems they use. When citizens or organizations request information about them, the public administration is usually reluctant to grant it, which in effect means their use of ADM processes happens in a context of opacity. The ongoing court case launched by non-profit Civio, commented on earlier in this report, illustrates that point.
Since 2014, the central public administration has been making Open Government Plans, which outline its commitments to advance towards a transparent government and administration. The current one, the III Open Government Plan 2017-19, doesn’t include any actions explicitly related to ADM systems. The IV Open Government Plan 2019-21 is, at the time of writing, being developed by an interministerial working group.
Data protection law
When it comes to people’s data protection, Spanish law 3/2018 (which was passed after the GDPR was approved) gives citizens the right to consent to the collection and processing of their data by the authorities. In principle, citizens then also have the right to request and access information about which body has their data and what it intends to do with it. The text explicitly mentions that the law applies to “any completely or partially automated processing of personal data”. It also says that when personal data is used “to create profiles” then the people affected “must be informed of their right to oppose automated individual decisions that have legal effect over them or that significantly affect them in a similar way”.
Towards a Digital Rights Charter for Spain
On 16 June 2020, the Spanish government announced it had kicked off the process to develop a Digital Rights Charter for Spain by naming a group of experts who will make suggestions. The aim of the Charter is to add new specific ‘digital rights’ to those already included in Spanish law. This new Charter should include rights relating to “protecting vulnerable groups, new labour conditions, and the impact of new technologies like AI”, among other issues, according to the government’s press release. The announcement said that at a later stage the process would also be open to the public, and that in the end the government would write up the Charter taking into consideration the experts’ and the public’s input. At the time of writing, there was no information on the calendar of the whole process or any other details about the scope and content of the Charter.
Political decision-making and oversight
In terms of political decision-making, the office of the Secretary of State for Digital Advancement, which depends on the Ministry of Economy and Business, is charged with coordinating the different strategies, plans, and actions for the “connectivity and digital transformation of Spain”, and includes a section focusing on AI.
Only one of the 11 specific plans promoted by that office, the one devoted to “smart territories”, which was published in December 2017, mentions the use of ADM systems. It claims that people’s quality of life will increase when automatization allows the government and the public administration to proactively identify individual persons’ problems and propose solutions to them. Regarding data collection, the plan states that “the data collection and processing mechanisms must be defined, (aiming to) limit the amount and type of data to collect, control their uses, and facilitating access to them and to the logic of the algorithms on which future decisions will be based”.
The body charged with “executing and deploying the plans of the Digital Agenda for Spain” is Red.es, a public corporate entity that depends on the office of the Secretary of State for Digital Advancement. Red.es includes a National Observatory for Telecommunications and the Information Society, which, as of the time of writing [11 May 2020], doesn’t seem to have produced work explicitly on ADM systems.
Spain is a decentralized state, with each of the 17 regions having a high degree of autonomy over their public administrations, which can be different from one another. In the Spanish central public administration, the top governance body regarding the use of IT is the Commission of IT Strategy, in which all ministries are represented and which takes on an intermediary role between the executive power, embodied by the Council of Ministers, and the public administration itself. The 2014 legal text that regulates this Commission states that digital transformation should mean “not only the automatization of services [offered by the public administration] but their complete redesign”, but it doesn’t go into any other details regarding ADM.
Civil society and academia
Academic research and debate
In recent years, there has been a lively debate in academia regarding the use of ADM systems in the public administration (as well as in the private sector), and there are several academic researchers specifically looking into ADM in the public administration. However, few of these debates seem to make it into the mainstream public discourse and to the general public.
In April 2019, the Network of Specialists in IT Law noted the “striking lack of algorithmic transparency and the absence of the appropriate awareness by the public administration of the necessity to approve a specific legal frame. We only see some concerns over compliance on data protection, which is perceived as a limitation”. The document reads like a fair summary of the situation in Spain, and calls on the authorities to “launch new mechanisms to guarantee [people’s] fundamental rights by design and by default”. The conclusions also point to the topic of smart cities as one area in need of urgent attention, as municipalities are freely experimenting with ADM systems without a proper debate or regulations (Cotino et al., 2019).
Some other notable academic research groups working in this field are Idertec at the University of Murcia, and the Research Group on Web Science and Social Computing at the Universitat Pompeu Fabra in Barcelona, which has a research area devoted to algorithmic fairness and transparency.
In recent years, several private organizations have been doing research and advocacy that includes ADM. In Barcelona, there is the Eticas Foundation, a non-profit associated with the private consultancy company, Eticas Consulting, which researches issues that include algorithmic discrimination, law and practice, migration and biometrics, and policing and security. Its founder and director, Gemma Galdón, has published several op-eds on these questions and is often quoted in the mainstream press. But, according to its website, the foundation itself doesn’t seem to have been very active recently.
In September 2019, a new association called OdiseIA launched in Madrid, describing itself as an independent “observatory of the social and ethical impact of AI”. OdiseIA was founded by 10 people from academia, the public administration, and the private sector, some of whom are well-known in expert circles regarding ADM in Spain.
ADM in the press
When it comes to the general public and mainstream political and media discourse, there seems to be a general lack of awareness about the use of ADM systems in public administration, the opportunities, challenges, and risks posed by such systems, and the implications for public and individual life. This is something that all the experts consulted remarked upon and complained about.
While the specialized press has had a more consistent critical approach regarding ADM systems in the public administration, until recently, most of the mainstream press coverage of ADM seemed to be based on the PR materials released by the companies developing the software and by the public bodies using it. However, and maybe driven by the bad image that the word “algorithm” has been getting in connection with scandals at Facebook and Google, more recently the mainstream press has been taking a more critical approach. Having said that, the majority of articles and items in the general press still seem to be about individual ADM systems that gain visibility for a particular reason (like VioGén, discussed above, which deals with the sensitive issue of domestic violence) rather than about the use of ADM by the public administration, its normative frame and its implications for public and individual life as a subject on its own. Often, when the media talk about the lack of transparency and accountability regarding ADM systems in the public administration, it is on an individual case-by-case basis rather than as a structural issue.
Even though they don’t explicitly mention ADM systems as such, the present laws governing public administration in Spain (which include a definition for “automated administrative actions”) and citizens’ access to public information do offer a good opportunity to call the authorities to account regarding the use of ADM in public administration.
Such current normative frames will be put to the test by the ongoing court case in which a non-profit foundation is asking the government to release the source code of an ADM system in the same way that legal texts are made public.
There are also good practices, like the proactive way in which the authorities have been publishing information concerning the VioGén protocol, which could serve as an example for other authorities to follow. Even though more transparency – including ways to audit the functioning of the ADM systems themselves – is welcome, a greater explanatory effort by the authorities is also needed.
The number of active academic researchers looking into the field of ADM in public administration (and also in the private sector) and the increasingly comprehensive coverage of the issue by the mainstream press offer another opportunity for the general public to become more aware of how ADM systems are used to deliver public services, and the consequences for public and individual life of such systems.
On the other hand, the normalization of the use of ADM by the public administration without a proper public debate or putting clear oversight mechanisms in place, especially as low-key ADM systems spread at the local level among municipalities, risks creating an opaque administrative environment in which citizens may end up lacking the means to hold the authorities accountable about such systems.