A louder conversation, but mostly around “AI”
Even though they are still not formally recognized as such, automated decision-making (ADM) systems are increasingly at the forefront of policy and public debates in Italy. Specific legislative measures are absent, and — as noted in the first edition of this report — there is no formal definition of ADM for either government or private use. And yet, two court rulings issued in 2019 now prescribe clearer limitations around the use of ADM systems in education. This particularly refers to the “Buona Scuola” algorithm, which mistakenly assigned thousands of teachers to the wrong professional destination according to mobility rankings in 2016, and which has since been discontinued.
ADM systems are at the heart of the debate surrounding the use of data to fight the COVID-19 outbreak, through a mobile phone app which has received extensive media coverage, and also in relation to Big Data predictive analytics for tax compliance checks via the ISA Fiscal Reliability Index implemented by the current government. Predictive policing has also made its mark in Italy, and findings unearthed in this report clearly show how solutions like XLAW are widely deployed, tested, and implemented. Predictive justice and health projects are also being explored, mostly at the local level.
However, for most Italians, the debate is about “artificial intelligence” and not ADM. In fact, the social and political implications of “AI” still constitute the bulk of the mainstream rhetoric around the challenges and opportunities of automation, strongly hinging on moral panics and “clickbait” mainstream media articles. At the same time, the public dimension of the tech-policy debate around algorithmic fairness, discrimination, and manipulation often proceeds almost entirely devoid of the mounting evidence coming from both academic literature and investigative journalism.
Face recognition is suddenly everywhere
As a result, no significant activism campaign has grown in response, thus allowing these experiments to go on mostly unchallenged or, when challenged by journalists and pundits, to be met with a disconcerting lack of transparency. This is most apparent in the case of the — ever-evolving — SARI face recognition system used by law enforcement agencies. The system’s mysterious functioning has yet to be made fully public.
This lack of detailed scrutiny has also led to an increasing tendency to frame algorithms in a “dystopian” fashion, and to (unjustly) identify them as the culprit behind many of the current ills of Italian society. Algorithms are regularly blamed for polarization, lack of trust in institutions and the media, “fake news” — whether automated or not — and for the crisis in democracy. Furthermore, rising racism, hate speech, far-right nationalism, and intolerance have been substantially attributed to the personalization and micro-targeting algorithms of digital platforms. Again, this has mostly happened without solid, peer-reviewed evidence or reporting on the actual persuasive effects of social media campaigns.
What Italians think about when they think about ADM
But what do Italians actually think about when they think about ADM? The best available proxy is to look into how and what Italians think about “AI” in decision-making contexts. In 2019, two polls tried to understand several aspects of the role of artificial intelligence in the daily lives of Italians, and some of the findings are relevant to the ADM debate. The AIDP-Lablaw survey, conducted by BVA DOXA and published in November 2019 (BVA DOXA 2019), argues that Italians are generally hopeful that AI will unearth solutions and discoveries that would otherwise be inconceivable (94%), and they widely believe that AI is beneficial to well-being and quality of life (87%). Also, 89% think robots will never fully replace human intervention — this broadly refers to the job market, but it might also be relevant in terms of ADM.
However, the Italians who were surveyed still fear that AI will end up having a commanding role in their lives (50% think there is a risk of machine predominance over mankind), and a large majority (70%) believe that “many jobs” will be lost as a result of the increase in automation. A crucial finding was the startling consensus around regulating AI: 92% agreed that laws are necessary.
Another survey, also published in November 2019, tried to address a narrower issue: Are Italians concerned about “killer robots” — part of what we may label “ADM in war”? According to a YouGov poll within the international “Campaign to Stop Killer Robots” (Rete Disarmo 2019), managed in Italy by Rete Disarmo, 75% of Italians are in favor of an international ban on fully autonomous systems equipped to kill human beings, and they go as far as proposing Italy as a leader in the design of international norms around the issue. A previous IRIAD Archivio Disarmo poll (March 2019) showed similar results. Support for the ban is also consistent across the whole ideological and party spectrum.
Also relevant to understanding attitudes towards ADM in Italy is a monster.it poll concerning automation in the hiring process. Published in December 2018, the poll concluded (Sole 24 Ore 2018) that almost half of the sample (45%) firmly believe that “robots will never be able to assess an applicant better than a human”. However, almost a quarter of respondents think that algorithms might be — and will be — helpful in assisting the evaluation of candidates. Six percent agreed that “algorithms will be so reliable and precise that they’ll be able to substitute humans”.
A catalog of ADM cases
Your boarding pass is your face: face recognition at airports
Face recognition technologies are being deployed, or are about to be deployed, at Milan’s Linate and Malpensa airports and at Rome Fiumicino — three of the biggest hubs for both national and international flights in the country. The idea behind the adoption of the technology is to allow travelers to experience a “frictionless journey” by replacing identification documents and boarding passes with the traveler’s own face. Airport authorities claim that this will make checks quicker, while at the same time the technology will increase security.
On February 12, 2020, a pilot deployment of face recognition technology started at Linate airport and will last until the end of the year (Corriere Milano, 2019; Corriere della Sera, 2020). Called “Face Boarding”, the project initially involves Alitalia flights to Roma Fiumicino only, and just for frequent and “Comfort” (i.e. business) class flyers. A trial deployment of face recognition is also expected to begin at Malpensa airport by the end of 2020 (Mobilita 2019).
At Rome Fiumicino, the trial started in November 2019, and initially involved just one company, KLM, and one destination, Amsterdam (TgCom24, 2019). However, airport managers claim that it will soon expand and that it will ultimately apply to all flights and all destinations in and out of the international airport.
Developers of the system, deployed through a collaboration between ADR (Aeroporti di Roma) and electronic identity solutions provider Vision-Box, claim that it only acquires biometric information of the traveler temporarily and that all data is erased “within an hour of departure”: “Nothing will remain on ADR’s servers”, they write on the airport’s website.
The biometric-based technological overhaul of both the Milan hubs, Linate and Malpensa, will cost 21 million euros, wrote all-news broadcaster TgCom24, and will include “six kiosks for enrollment (subscription to the program that links biometric data to a passport or ID card), 25 electronic boarding gates, 10 pre-security gates and 7 face security spots”.
Face recognition-powered “Automated Border Control Gates” (or “ABCGates”) were first installed at Naples airport in 2016, and manage the transit of some 2,000 passengers a day (Repubblica Napoli 2016). “The new technology helped provide Naples Airport passengers with a better experience”, says Alessandro Fidato, the facility’s Infrastructure & Operations Director, “making transit and controls quicker and at the same time assuring that the stricter standards in security are observed at Italian borders”. The IT solution is provided by Sita, described in the media as a “world leader” in airport security technology. The company claims to already serve some thirty governments worldwide.
More deployments are to be expected, especially in the aftermath of the COVID-19 pandemic, according to media reports (Il Messaggero, 2020).
No independent audit or evaluation of any of these face recognition systems is publicly available, and none has been adequately debated among the general population.
These automated systems predict your health
Experiments with automated decision-making systems for predictive health are not new in Italy. And while projects such as RiskER and Abbiamo i numeri giusti are still ongoing, another interesting predictive health project has been added to the Italian landscape throughout 2019. In the city of Vimercate, in the Lombardy region of northern Italy, the local branch of the public health authority (Azienda Socio Sanitaria Territoriale, ASST) has adopted open source cloud solutions (developed by Almaviva) which use algorithms to predict the beginning of chronic pathologies and post-surgery complications (Giornale Monza 2019).
Building on a decade spent digitizing medical records, Vimercate hospital aims to more efficiently manage patients, including by personalizing treatments according to machine learning-powered analytics, thus reducing costs and optimizing logistics, while at the same time improving its “precision medicine” solutions.
“This is the first structured usage of AI within a hospital in Italy”, says Head of AI Solutions at Almaviva, Antonio Cerqua, who also revealed that the objective is to provide his company’s IoT platform, Giotto, “to a national network of hospitals”. According to Cerqua, “many have already shown their interest” (Il Giorno 2019).
Representatives from the consulting giant Deloitte also visited the Vimercate hospital specifically because of this project, wrote Giornale di Monza. “AI must not replace physicians”, says Head of Informatics at ASST Vimercate, Giovanni Delgrossi, “but support them — a tool that helps them make better decisions” (Sole 24 Ore 2019b).
Is my complaint algorithmically sound? Predictive justice in Genoa (and beyond)
In 2019, the first experiment with automated decision-making within the justice system was developed at LIDER Lab Scuola Superiore Sant’Anna in collaboration with EMbeDS, KDD Lab and the Tribunal of Genoa (Gonews 2019).
Called “Predictive Jurisprudence”, it allows researchers to access and analyze — through machine learning techniques — the corpus of rulings pronounced by the judiciary in the Liguria capital. The purpose of this is to extract meaningful information for further processing, starting with the identification of common trends in jurisprudence focused on a specific subject matter (e.g., past judgments on a certain typology of crime or around similar court cases).
This would provide a benchmark against which each human judge might assess the case before him or her, and easily check for consistency with previous rulings in analogous situations. Ideally, it might even produce predictions regarding a practitioner’s future behavior in similar cases.
Genoa Tribunal president, Enrico Ravera, argues that the aim of the project is not a mere “extrapolation of statistics from case studies”, but rather to “apply artificial intelligence techniques to jurisprudence”. More precisely, “Predictive Jurisprudence” is described by the research team, coordinated by Prof. Giovanni Comandé, as “a multilayer project unfolding into five interconnected but autonomous levels”.
In a written reply to an email request by AlgorithmWatch, Comandé and colleague Prof. Denise Amram said: “In the start-up phase, the project aims to analyze decisions with the corresponding files of trial courts according to the criteria and methodologies developed in the Observatory on personal injury, applicable to areas of litigation other than non-pecuniary damages (level 1). The same materials are used also through techniques of Machine Learning to develop both tools for annotation and automatic extraction of information from legal texts (level 2) and algorithms for analysis and prediction (so-called Artificial Intelligence level 3).”
In particular, the researchers argue, “the architecture of the database designed to host the data acquired by the courts will be designed and trained for developing algorithms to automatically identify trends with reference to the criteria known to the interpreter, as well as to highlight new trends on the basis of possible bias/tendencies found by the algorithm.”
Especially notable in terms of automated decision-making is the fact that “the algorithm aims to recreate and mimic the legal reasoning behind the solution(s) adopted in the judgments by making predictable subsequent decisions on the same subject.” Automation is, therefore, applied to the summarization and explanation of ideas: “These tools should also help to explain the reasoning underlying each decision, while the development of suitable tools to explain the criteria defined by the developed AI (level 4) will be tested.”
Finally, in the last phase, “efforts and results in the different levels of research and development will be traced back to the attempt to structure the analysis of the legal argument at such a level of abstraction and systematicity as to contribute to the simplification of all tasks (level 5).”
Selected case-studies currently involve a first pilot on “alimony in case of divorce”. Here, “queries are pre-determined by law, but their judicial interpretations continuously evolve”, write the researchers. “At this regard, current reform bills are proposing to introduce new criteria, whose efficacy could be discussed in light of our analysis.” “The second pilot and the third one,” wrote Comandé and Amram, “develop within the Observatory on Personal Injury Damage studies. The algorithm may contribute to the identification of criteria for awarding non-pecuniary losses compensation beyond the current interpretations and attempts to standardize these highly subjective head of damages. Within this core-analysis, the algorithm could be better-trained to explain non-pecuniary losses in case of burn out, whose boundaries are still discussed both from clinical and legal perspectives.”
According to the researchers, “several Tribunals are joining the project. Among them the Tribunal of Pisa and Bologna.” In 2018, the Appeals Court and Tribunal in Brescia also started their own experiment with “predictive justice” through a pilot project that aims to predict the length of a lawsuit and the principles most likely to be adopted in evaluating it, and even goes as far as providing an estimated probability of having a complaint approved (BresciaOggi 2018).
This is part of a two-year-long collaboration between the justice system and the University of Brescia which consists of the following steps: identification of the subject matter to analyze, creation of a database for each topic, designing of the “workgroups” among university resources that will operationally interface with the tribunal, extraction of predictions by researchers and, finally, publication and dissemination of findings (AgendaDigitale 2019) — which, at the time of writing, are not yet available.
Policy, oversight and debate
Learning from the Buona Scuola algorithm debacle
The most important developments in the field of automated decision-making over the last year concern how the Italian Courts responded to the claims of teachers affected by the structural mistakes made by La Buona Scuola algorithm. The algorithm was used to sort 210,000 mobility requests from teachers in 2016. The evaluation and enforcement of the mobility request procedure were delegated entirely to a faulty automated decision-making system (Repubblica, 2019).
The algorithm has since been discontinued, and yet appeal cases against the decisions it made are still in the thousands (10,000 teachers were affected, according to La Repubblica), and, as noted in the first edition of the Automating Society report, these complaints are strongly grounded in facts — from bad programming choices to deliberate obfuscation strategies to prevent meaningful auditing of the source code of La Buona Scuola algorithm itself.
As a result, teachers were unjustifiably moved from Puglia to Milan, instead of within their own region, or displaced from their natural location in Padua. The system even automatically forced two teachers with autistic children to move from the southern region of Calabria to Prato, in the northern region of Tuscany.
Over the course of 2019, two specific rulings determined precisely how and why these automated mistakes resulted in illegal discrimination and choices over the professional and personal lives of thousands of individuals employed in the public education system. The first case was from the Consiglio di Stato (State Council 2019), and the second was from Section 3rd-Bis of the Tribunale Amministrativo (Administrative Tribunal, TAR) of Lazio (n. 10964, September 2019) (TAR Lazio 2019).
And while both institutions recognize the technical flaws that affected the performance of the ranking algorithm, they are also clear that the actual flaw is even more fundamental, as it involves the very rationale behind its deployment — according to the TAR, even contradicting the Italian Constitution and the European Convention on Human Rights.
In fact, even though the State Council reminds us that the automation of decision-making processes within the administration brings about “unquestionable benefits”, it also prescribes that these benefits can only be realized when such processes do not involve any “discretional judgment” on the part of the system. In particular, its “technical rules” (the programming and functioning) and decisions must always ultimately be the result of human judgment. Algorithms might even work by themselves, but only when highly standardized, repetitive procedures are involved.
Similarly, the TAR of Lazio concluded that “informatics procedures, were they to attain their best degree of precision or even perfection, cannot supplant, by entirely replacing it, the cognitive, acquisitive and judgmental activity that only an investigation entrusted to a physical person can attain”. Crucial, again, is the fact that the “whole” procedure was delegated to an “impersonal algorithm”, when instead automation may only serve an “instrumental”, “merely auxiliary” role within the administrative process – clearly specifying that algorithms may “never” be put in a “dominant” position when compared to human beings.
This has clear implications for the transparency and explainability of the motives behind the public administrations’ decisions, as both rulings remark. However, the interpretative frameworks differ.
The State Council does not rule out the possibility of a transparent, fully automated decision. It only bases its favorable response to claimants on the fact that the particular algorithm deployed for La Buona Scuola was so compromised and chaotic as to be “unknowable” — neither to the affected teachers nor to the Court itself. However, the ruling immediately adds that when full automation is involved, an even stronger notion of “transparency” should apply. A notion that “implies the full knowability of any rules expressed in languages other than the judicial”.
The TAR of Lazio, instead, defines the very possibility of such full automation as a “deleterious Orwellian perspective”, suggesting that both transparency and explainability would be fundamentally compromised as a result.
Algorithmic fiscal reliability rankings in practice
First introduced by the Gentiloni government in 2017, the ISA Fiscal Reliability Index finally came into use in 2019. Developed by SOSE (Soluzioni per il sistema economico), a subsidiary of both the Ministry of Finance (MEF) and Banca d’Italia, the system consists of an algorithmic scoring of fiscal trustworthiness. The system grants precise benefits to higher scores (above 8 on a scale of 10), while at the same time prescribing stricter controls on those below 6, based on the mere automated presumption of anomalies in consumption, properties, investments, and other financial behavior. Its rationale is to give fiscal subjects a way to self-assess their position and preemptively fill any gaps that might alert authorities and lead to further scrutiny.
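The threshold mechanics reported above can be sketched in a few lines. This is a minimal illustration only: SOSE’s actual scoring computation is not public, so the function name and returned labels here are hypothetical, and only the reported thresholds (benefits above 8, stricter controls below 6) come from the source.

```python
def isa_outcome(score: float) -> str:
    """Map an ISA fiscal reliability score (reported 1-10 scale) to an outcome.

    Hypothetical sketch: only the thresholds are taken from public
    reporting; how SOSE actually computes the score is not documented.
    """
    if score > 8:
        return "benefits"            # e.g. reduced audit exposure
    if score < 6:
        return "stricter controls"   # flagged for closer scrutiny
    return "neutral"                 # neither benefits nor extra controls
```

Under this reading, a taxpayer scoring between 6 and 8 falls into a neutral band, which matches the report’s framing of the index as a self-assessment tool: one can preemptively adjust declarations to move out of the flagged range.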
And again, detractors pointed out that the system is essentially flawed. Corriere della Sera’s Dataroom interviewed “several accountants” in “many different regions of Italy”, and even though the methodology confers no statistical or scientific weight on her findings, investigative journalist Milena Gabanelli concluded that errors might concern as many as 50% of taxpayers. As a result of the introduction of ISA, “we found”, Gabanelli wrote, “that 40 to 50% of taxpayers shifted from a ‘congruous and coherent’ fiscal position in the income tax return for 2018 to an ‘insufficient’ one in 2019, and vice versa” (Corriere della Sera 2019).
These errors allegedly result from the fact that taxpayers are not allowed to specify which factors actually contribute to their annual incomes, thus facilitating misrepresentations and misunderstandings. It only takes a pregnancy, some vacant real estate, or a contingent surge in legal expenses to alert the system and potentially find yourself automatically flagged for “anomalies”, claims Gabanelli, as the algorithm is “badly programmed” and even less flexible and reliable than the widely criticized “studi di settore”, based on an “inductive” method of fiscal scrutiny. Official data are still lacking.
Also, claims of a “Fiscal Big Brother” being algorithmically implemented, noted in the previous edition of this report, still represent the main framing around automated decision-making about fiscal matters in public debate. This is especially the case regarding a new, “anonymized” tool for predictive, Big Data analysis of taxpayers’ behavior which was also introduced with the 2019 Budget Law (Sole 24 Ore 2019c).
Called “Evasometro Anonimizzato”, the tool will include “specific algorithms”, wrote financial newspaper Il Sole 24 Ore, and will be able to cross-check the different databases held by the Italian fiscal authorities for inconsistencies in consumption patterns or in any other financial operation. Anomalies recognized by the “digital eyes” of the system will alert fiscal authorities, who can then summon flagged individuals for further scrutiny.
This algorithm-driven monitoring is coupled with the monitoring of “open sources”, such as “news articles, websites and social media”, by the Italian fiscal authority, Agenzia delle Entrate. The monitoring system has been active since 2016, but was largely unknown to the public until January 2020. It was devised to “acquire all elements that could be useful in knowing the taxpayer”, thereby helping the institution check the consistency of each individual’s fiscal behavior (Corriere della Sera 2019b).
Proof of how this system works has been revealed in several court rulings — for example, from the Appeals Courts of Brescia and Ancona — in which judges explicitly mentioned damning “documentation extrapolated from Facebook”. One example is that of an ex-husband who was forced to pay living expenses to his former wife because of Facebook posts that depicted a lifestyle patently incompatible with the 11,000 euros income he declared in his tax return for the previous year.
Predictive policing: the new normal?
Predictive, automated policing is rapidly widening its scope in Italy. While the 2019 Automating Society report showed how the KeyCrime software had been deployed in the city of Milan, experiments implementing another algorithm-based solution, called XLAW, have recently spread to several other important municipalities.
Like KeyCrime, the XLAW software was also developed by a law enforcement official, and it has already been deployed in the cities of Naples, Modena, Prato, Salerno, Livorno, Trieste, Trento, and Venice (Business Insider 2019), where it led to the arrest of a 55-year-old man accused of theft (Polizia di Stato 2018). Law information website Altalex reports that the system’s accuracy is 87-93% in Naples, 92-93% in Venice, and 94% in Prato, but, at the moment, no independent auditing or fact-checking is publicly available for any of these trials (Altalex 2018).
XLAW has a strong predictive component to it. In the case of Venice, for example, State Police boasted that “84% of felonies (“fatti-reato”) that have been either attempted or committed had been foreseen by the system”. As a result, XLAW, just like KeyCrime, is regularly portrayed as a success story.
And yet, the only official response concerning the actual results produced by KeyCrime in Milan over the years tells a different story. In reply to a FOIA request by VICE journalist Riccardo Coluccini, the Dipartimento per le politiche del personale dell’amministrazione civile e per le risorse strumentali e finanziarie of the Interior Ministry stated that it does not possess any “data or study concerning the software's impact on tackling crimes”, not even around the “variables” considered by the system in making its automated decisions. (Vice Italy 2018)
In addition, full disclosure has been opposed by the administration as the software is patented and proprietary, and as such the data enjoys “special protection” in terms of intellectual property rights. The Direzione Centrale Anticrimine of the Interior Ministry only uses it on a gratuitous loan, Vice reported.
The creator of XLAW, Elia Lombardo, describes the software as a probabilistic, machine learning-powered solution for trend and pattern discovery in crimes. The rationale behind its workings is the “hunting reserves” model of crime-spotting, which assumes “predatory” crimes (e.g. burglary, robbery) to be both “recurrent” and “residing”. This implies that one can deduce — or more accurately, induce — how criminal behavior will unfold, before it has happened, by intelligent analysis of the criminal history of a location over time. Lombardo claims to have done this by carefully analyzing 20 years of data and also by calling on his experience in the field. As a result, thanks to XLAW, law enforcement officials can be alerted and deployed on the scene before a crime has even happened. At the same time, XLAW provides police officers with details as precise as the “gender, height, citizenship, distinguishing features, and biometrics” of a potential suspect.
Lombardo states that this approach was validated by the “independent” analysis of two universities in Naples — the Federico II and Parthenope — before it was implemented.
Strategic documents on AI, and their relevance for ADM
Even in the face of a proliferation of examples of automated decision-making systems in every Italian citizen’s public life, policy responses have been notably absent.
The hypothesized Parliamentary Committee on AI Ethics has been shelved, with all three of the current governing parties instead proposing one focused on disinformation — still labeled “fake news” even in the face of wide opposition from academia (Valigia Blu 2019).
A trans-disciplinary center on AI, proposed in the 2018 Digital Transformation Team’s White Paper, has not been realized either. However, from January to June 2019, thirty experts convened by the Italian Ministry of Economic Development (Ministero dello Sviluppo Economico, MISE) outlined a document that provides a foundation for the government when drafting the Italian Strategy on AI (“Strategia nazionale per l’intelligenza artificiale”).
The resulting document was open for public consultation from August 19 to September 13, 2019 (MISE 2019), and its final version was published on July 2, 2020 (MISE 2020).
The strategy claims to be fundamentally rooted in the principles of anthropocentrism, trustworthiness, and sustainability of AI, with a strong accent on the necessity of an ethical use of AI and on the need to ensure technical trustworthiness starting from the very design of such systems. The ambition is no less than to spur a new “Renaissance” based on artificial intelligence, called “RenAIssance” in the final document.
To steer the country in that direction, the report outlines 82 proposals, some of which are relevant to understanding how automated decision-making is framed in Italy. Even though they are not explicitly mentioned, ADM systems might benefit from the “national mapping” of experiments in regulating, testing, and researching AI (“regulatory sandboxes”) that, according to the Strategy, would be overseen by a newly hypothesized inter-ministerial and multi-stakeholder “cabina di regia” dedicated to tackling the multifaceted regulatory, educational, infrastructural, and industrial issues raised by the implementation of AI.
Echoing EU law, the document also asserts that explainability of algorithmic decisions should be ensured through procedural transparency, “every time a decision that strongly impacts on personal life is taken by an AI system”. As per the EU Commission’s White Paper on AI, transparency should be proportional to the risk embodied by the AI system under consideration.
Creators and vendors of such systems should also be obliged to inform users whenever they deal with a machine, instead of a human, reads the Strategy, while appropriate sanctions and remedies should be put in place for automated decisions that entail discriminatory/illegal effects. Consistent with the GDPR, this means that users must be able to effectively oppose any automated decisions, in a “simple and immediate” way.
Even though still under development, “Trustworthy AI Impact Assessment” is also explicitly mentioned as a tool to better understand actual risks implied by the use of automated systems. Benefits are strongly highlighted, though, as adoption of “AI” solutions should be promoted, according to the document, for every “societal challenge”, and in particular to achieve a “better quality of life”, for example through deployments in the health, education, and “digital humanities” sectors.
Artificial intelligence is also prominently featured in the “2025 Strategy for Technological Innovation and Digitization” by the newly created Ministry for Technological Innovation and Digitization within the current, second Conte administration. In it, AI is envisioned as being “in the Nation’s service”, with an implicit reference to automated decision-making processes, and as especially useful in enabling “efficiency” in repetitive administrative procedures that require a low degree of discretion.
At the same time, an ethical, trustworthy AI could help ensure respect for the constitutional principle of a fair and timely trial, according to the document. In fact, the judiciary system is explicitly mentioned as the first sector of public life in which the 2025 Strategy envisions experimentation. A newly formed “Alliance for Sustainable Artificial Intelligence”, a joint committee made up of both private entities and public institutions, will, therefore, have the task of devising an “Ethical and Judicial Statute for AI”.
“Beyond fixing a minimal set of guidelines”, the Alliance — through the so-called “AI Ethical LAB-EL” — will “establish a set of basic requirements for the acceptance of AI solutions for both the public and private sector”. According to the 2025 Strategy, respect of these guidelines could become, in time, equivalent to an official “certification” that a certain AI solution is both ethically and judicially viable.
The document, however, contains no mention of the National Strategy on AI, and provides no coordination between the two official plans for Italy’s AI-powered future.
Lastly, in April 2019, +Europa MP Alessandro Fusacchia launched an “Intergruppo parlamentare” on AI (CorCom 2019).
Italy needs to be more aware of how quickly automated decision-making systems are spreading in many sectors of public administration and public life. Face recognition technologies and other invasive ADM systems are being increasingly adopted without any meaningful democratic debate and/or dialogue with civil society and academia, thus potentially setting the stage for further problematic deployments.
The Buona Scuola algorithm failure, now enshrined in multiple rulings, should stand as a warning: automation can lead to very painful consequences when rushed and ill-conceived, both for specific categories (in this case, the thousands of teachers unjustly displaced by the faulty automated system) and the general population (the taxpayers who are ultimately paying the price of their legitimate appeals).
A general principle seems to be emerging. If it is to be truly beneficial and trustworthy, and bring about what Paola Pisano, of the Innovation and Digitization Ministry, defines as a “new Humanism of AI and a digital Renaissance”, machine intelligence must work with human intelligence, augmenting it rather than replacing it. Whether this means that full automation must be ruled out for public uses of ADM, or that we only need a better definition of what tasks and decisions should be fully automated, is still a contentious issue. And it is an issue that will probably shape the Italian policy response to ADM-related issues — when there is one.