Germany
Contextualization
Issues concerning ADM systems keep coming up in public discussions, but they do not usually make the headlines. ADM is commonly referred to under the label “Artificial Intelligence” (AI), and the lines between digitization, automation, and machine learning tend to be blurry. Discussion around ADM was sparked when the federal police piloted a face recognition system at the Südkreuz railway station in Berlin. The program was criticized by civil society actors for its low-quality output and its potential to infringe fundamental rights. Many other ADM systems have been rolled out in Germany, but most of them receive far less attention than the Südkreuz case.
Risk-prediction applications are on the rise. Within the police force, they are used to analyze the threat levels of Islamists under observation or to predict burglary hot spots. The Federal Foreign Office uses software to identify where the next international crisis is likely to occur. Schools and offices use an automated analysis tool to identify the risk of violent behavior, be it from a school shooter, a terrorist, or a violent partner. Automated text, speech, and image recognition are deployed to prevent suicides in jail, analyze migrants’ dialects, or identify child pornography. Automation can also be found in the welfare system, where it is mainly used to speed up administrative processes.
These cases show that ADM systems are beginning to permeate many different sectors of society, a trend that will intensify over the coming years. At the governmental level in Germany, the course is set for AI applications to be developed and rolled out. The government’s AI Strategy focuses on funding research and supporting small and medium-sized enterprises (SMEs) and start-ups to make Germany a strong player in the European AI landscape.
A catalog of ADM cases
Predictive policing
Federal Level
Since 2017, the Bundeskriminalamt (Federal Crime Agency, BKA) has used the risk-assessment tool RADAR-iTE (Bundeskriminalamt, 2017) to sort “militant Salafists” into three threat levels (high, conspicuous, and moderate). The system was developed in cooperation with the Department of Forensic Psychology at the University of Konstanz. In order to assess a person already known to the authorities, the caseworker fills in a standardized questionnaire about the “observable behavior” of the subject, drawing on data the police previously gathered on the person and everything the police are legally authorized to access. Once the tool returns the corresponding threat level, the caseworker (or the respective department) decides what action to take.
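RADAR-iTE’s actual items and scoring rules have not been published, but the basic mechanism described here — a standardized questionnaire whose answers are aggregated into one of three threat levels — can be sketched as follows. All item names, weights, and thresholds below are hypothetical illustrations, not the real instrument.

```python
# Minimal sketch of a questionnaire-based three-tier risk rating.
# Item names, weights, and thresholds are hypothetical; RADAR-iTE's
# actual items and scoring rules are not public.

QUESTIONNAIRE = {
    # item id: (question, weight)
    "weapons_access": ("Does the subject have access to weapons?", 3),
    "violent_past": ("Has the subject previously used violence?", 3),
    "social_isolation": ("Has the subject broken off social ties?", 2),
    "travel_attempts": ("Has the subject tried to travel to a war zone?", 2),
}

# Score cutoffs for the three levels, checked from highest to lowest.
THRESHOLDS = [(7, "high"), (4, "conspicuous"), (0, "moderate")]

def rate(answers: dict[str, bool]) -> str:
    """Aggregate yes/no answers into one of three threat levels."""
    score = sum(weight
                for item, (_, weight) in QUESTIONNAIRE.items()
                if answers.get(item, False))
    for cutoff, level in THRESHOLDS:
        if score >= cutoff:
            return level
    return "moderate"

print(rate({"weapons_access": True, "violent_past": True}))  # -> "conspicuous"
```

Note that, as in the real deployment, the output is only a sorting aid: the decision about what action to take stays with the caseworker.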
Federal State Level
At the federal state level, several police forces have been running trials with predictive policing software (Heitmüller, 2019), some of which are currently in use. In Bavaria, the tool PRECOBS (Institut für musterbasierte Prognosetechnik, 2018) calculates where burglaries are most likely to occur; the same software was discontinued in Baden-Württemberg in 2019 due to data quality issues (Mayer, 2019). In the field of burglary prevention, the state of Hesse deploys the KLB-operativ forecast tool (Polizei Hessen, 2017), and Berlin uses KrimPro (Dinger, 2019). Both of these tools analyze data to identify where a potential break-in is most likely to occur. Based on IBM products, North Rhine-Westphalia developed SKALA (Polizei Nordrhein-Westfalen, 2020), while Lower Saxony uses PreMAP (Niedersächsisches Ministerium für Inneres und Sport, 2018); these two tools make predictions about burglary hot spots, which the police incorporate into action plans.
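Tools such as PRECOBS are commonly described as building on the “near-repeat” observation: a burglary raises the short-term risk of further burglaries nearby. A minimal sketch of that idea follows; the grid size, radius, and time window are arbitrary illustrative values, not any vendor’s actual parameters.

```python
# Sketch of near-repeat hot-spot flagging: after a burglary, nearby grid
# cells are flagged for a few days. Cell size, radius, and time window
# are illustrative values only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Burglary:
    x: float        # easting in meters (projected coordinates)
    y: float        # northing in meters
    when: datetime

RADIUS_M = 400               # spatial reach of the elevated risk
WINDOW = timedelta(days=7)   # temporal reach of the elevated risk

def hot_spots(events: list[Burglary], now: datetime,
              cell_m: float = 250) -> set[tuple[int, int]]:
    """Return grid cells with elevated near-repeat risk."""
    flagged: set[tuple[int, int]] = set()
    for e in events:
        if now - e.when > WINDOW:
            continue  # too old to matter
        # Flag a square neighborhood approximating the radius.
        steps = int(RADIUS_M // cell_m) + 1
        cx, cy = int(e.x // cell_m), int(e.y // cell_m)
        for dx in range(-steps, steps + 1):
            for dy in range(-steps, steps + 1):
                flagged.add((cx + dx, cy + dy))
    return flagged

now = datetime(2020, 1, 10)
events = [Burglary(1200.0, 3400.0, datetime(2020, 1, 8))]
print(sorted(hot_spots(events, now)))
```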
Identifying Child Pornography
An AI tool to identify pornographic images of children was developed in a collaboration (Richter, 2019) between Microsoft and the Zentral- und Ansprechstelle Cybercrime (Contact Office for Cyber Crime, ZAC NRW), an office of the Ministry of Justice of the State of North Rhine-Westphalia based in the department of public prosecution in Cologne. In order to comply with strict regulations related to data and child protection, software running on police servers in Germany blurs the images from investigations before they are uploaded to Microsoft servers in a non-identifiable form. In the cloud, algorithms analyze the images for pornographic content, identify the faces of victims and abusers, and compare them to existing profiles stored in a database. The results are returned to the police for further analysis. The software aims to reduce the workload and the mental strain police endure while investigating child abuse.
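The privacy-preserving step described here — degrading the images on servers in Germany before anything leaves them — can be illustrated with a few lines of image processing. This is a sketch only: the blur radius, file layout, and upload function are placeholders, as the actual anonymization procedure has not been published.

```python
# Sketch of the blur-before-upload step: images are made non-identifiable
# on local police servers before being sent to the cloud for classification.
# Blur strength and the upload function are placeholders.
from pathlib import Path
from PIL import Image, ImageFilter

def anonymize(path: Path, radius: int = 12) -> Path:
    """Blur an evidence image on the local server and save the result."""
    img = Image.open(path)
    blurred = img.filter(ImageFilter.GaussianBlur(radius=radius))
    out = path.with_suffix(".blurred.png")
    blurred.save(out)
    return out

def upload_for_classification(path: Path) -> None:
    """Placeholder for the transfer to the cloud classifier."""
    print(f"would upload {path.name} ({path.stat().st_size} bytes)")

# Hypothetical local folder of case images.
for original in Path("evidence").glob("*.jpg"):
    upload_for_classification(anonymize(original))
```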
Suicide Prevention in Jail
In 2019, the Ministry of Justice of the State of North Rhine-Westphalia launched a program aimed at preventing suicide in jail (Schmalen, 2019). The ministry commissioned the Chemnitz-based firm FusionSystems GmbH (FusionSystems, 2019) to build a video surveillance system that can detect suspicious objects, such as a knotted rope or a firelighter, inside a cell and alert the officers on duty. The system is supposed to be used on inmates categorized as having a medium to high risk of suicide, and it is meant to replace the current in-person checks carried out at 15-minute intervals, which have been criticized (Schmalen, 2019) because they potentially increase the emotional strain on inmates.
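Details of the FusionSystems setup are not public, but the behavior described — raise an alert when certain object classes appear in a camera frame — reduces to a simple detection loop. In the sketch below, the detector is a stub standing in for whatever vision model the vendor actually uses, and the class list is assumed.

```python
# Sketch of the alerting logic: scan camera frames for a fixed set of
# "suspicious" object classes and notify staff on a hit. detect_objects
# is a stub; the vendor's actual model and class list are not public.
SUSPICIOUS = {"knotted_rope", "firelighter"}

def detect_objects(frame: bytes) -> set[str]:
    """Stub detector; a real system would run a vision model on the frame."""
    return {"firelighter"} if b"fire" in frame else set()

def monitor(frames: list[bytes], alert) -> None:
    for frame in frames:
        found = detect_objects(frame) & SUSPICIOUS
        if found:
            alert(found)  # e.g., page the officer on duty

monitor([b"empty cell", b"fire starter in frame"],
        lambda hits: print("ALERT:", sorted(hits)))
```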
Crisis Management in Foreign Policy
The Federal Foreign Office uses the data analytics tool PREVIEW – Prediction, Visualization, Early Warning (Auswärtiges Amt, 2019) to identify evolving international crises. The tool analyzes publicly available data related to current political, economic, and societal trends and conflicts in order to identify developing crises. According to the Federal Foreign Office, AI is used to process the data, which is then turned into infographics and maps that provide insights into the state of a particular conflict. Furthermore, trend analyses illustrate how political and societal developments may evolve. PREVIEW is deployed by the Federal Foreign Office’s Department S, which oversees international stabilization measures and crisis engagement. The output also supports civil servants in determining which steps to take next.
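PREVIEW’s models are not public; what the Federal Foreign Office describes is the aggregation of open data into trend indicators. As a hedged illustration, the sketch below computes a rolling mean over hypothetical monthly conflict-event counts and applies a crude escalation flag — a common building block of early-warning analytics, not PREVIEW’s actual method.

```python
# Sketch of a trend indicator like those an early-warning tool might
# compute: a rolling mean over monthly conflict-event counts, plus a
# simple escalation flag. Data and window size are illustrative.
from statistics import mean

monthly_events = [12, 14, 11, 15, 19, 25, 31, 38]  # hypothetical counts

def rolling_mean(series: list[int], window: int = 3) -> list[float]:
    """Smooth the series with a trailing window of the given width."""
    return [mean(series[i - window + 1:i + 1])
            for i in range(window - 1, len(series))]

trend = rolling_mean(monthly_events)
escalating = trend[-1] > 1.5 * trend[0]  # crude escalation criterion
print(trend, "escalating" if escalating else "stable")
```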
Identity Check of Migrants
The Bundesamt für Migration und Flüchtlinge (Federal Office for Migration and Refugees, BAMF) has been using automated text and speech recognition systems to identify refugees (Thüer, Köver and Fanta, 2018) since 2017. Agency employees can ask asylum seekers to give them access to their cell phone, tablet, or laptop to verify if they are telling the truth about where they come from. The agency is able to obtain all the data contained on the devices and run software on it. The software presents the employee with a limited overview of the content, which also includes a language analysis of the text retrieved. According to the BAMF, both the software and the hardware were provided by the firm Atos SE (Biselli, 2017); however, VICE Magazine found evidence (Biselli, 2018b) that the mobile forensic technology firm MSAB was also involved. Another tool deployed by the BAMF aims to identify disguised dialects in speech (Biselli, 2018a). When an asylum seeker does not have valid proof of ID, a two-minute voice recording of the person describing a picture in their mother tongue is analyzed by software, which then calculates the percentage to which the speech matches a certain dialect.
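The BAMF has not disclosed how the dialect scores are computed; however, the reported output — percentages indicating how close a sample comes to each dialect — matches what a softmax over per-dialect model scores would produce. A sketch with made-up labels and scores:

```python
# Sketch of turning per-dialect model scores into the percentage output
# described for the BAMF tool. The dialect labels and raw scores are
# invented; the real acoustic model is not public.
import math

def softmax_percentages(scores: dict[str, float]) -> dict[str, float]:
    """Convert raw model scores into percentages that sum to ~100."""
    exps = {d: math.exp(s) for d, s in scores.items()}
    total = sum(exps.values())
    return {d: round(100 * e / total, 1) for d, e in exps.items()}

raw_scores = {"Levantine": 2.1, "Gulf": 0.3, "Egyptian": -0.5}
print(softmax_percentages(raw_scores))
# -> {'Levantine': 80.7, 'Gulf': 13.3, 'Egyptian': 6.0}
```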
Social Services / Welfare Administration
Since 2012, the Behörde für Arbeit, Soziales, Familie und Integration (Agency for Labor, Social Affairs, Family and Integration) in Hamburg has been using a piece of software called JUS-IT (Behörde für Arbeit, Soziales, Familie und Integration, 2018) for the General Social Service, the Child Care Service, and the Economic Help for Youths Programme. It is used to administer cases and automate payments, and it is equipped with interfaces that connect it to police reports and health insurance funds. The system is based on Cúram – a modular off-the-shelf IBM product that can be tailored to specific needs – which has been criticized for functioning inaccurately in Canada (Human Rights Watch, 2018). In 2018, an expert commission on child protection (Bürgerschaft der Freien und Hansestadt Hamburg, 2018) found that JUS-IT lengthens administrative processes, leaving less time for much-needed family visits. As a result, the commission recommended major revisions or a complete shutdown (Lasarzik, 2019) of the software.
The Bundesagentur für Arbeit (Federal Labor Agency) uses an IT system called ALLEGRO to administer unemployment benefits. Agency workers input an applicant’s data and the system calculates the corresponding benefit levels. The system can connect to health insurance and pension funds and cooperate with customs and the Central Register of Foreign Nationals (Ausländerzentralregister) (Deutscher Bundestag, 2018). In 2012, ALLEGRO replaced the administration’s previous software, called A2LL, an error-prone system developed by T-Systems. The new software was developed in-house at the Federal Labor Agency (Borchers, 2008).
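The published rates for Arbeitslosengeld (generally 60% of previous net pay, or 67% for recipients with children) give a feel for the kind of deterministic rule such a system automates. Everything else in the sketch below — ignoring caps, contribution periods, and special cases — is our simplification, not the agency’s actual logic.

```python
# Sketch of the kind of rule ALLEGRO automates. The 60%/67% rates mirror
# the general Arbeitslosengeld rules; all other simplifications (no caps,
# no contribution tiers) are ours.
def monthly_benefit(net_monthly_wage: float, has_children: bool) -> float:
    """Compute a simplified monthly unemployment benefit."""
    rate = 0.67 if has_children else 0.60
    return round(net_monthly_wage * rate, 2)

print(monthly_benefit(2000.0, has_children=False))  # 1200.0
print(monthly_benefit(2000.0, has_children=True))   # 1340.0
```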
Risk-Scoring of Violence
Similar to the tool used by the Federal Crime Agency described above, researchers from the Institute of Psychology and Threat-Management at the University of Darmstadt have developed a tool called Screener Islamismus (Screener Islamism) (DyRiAS, 2019c). This tool is used to calculate the risk of a person committing violence motivated by Islamic extremism. The Screener Islamismus tool is based on the DyRiAS IT system (short for Dynamic Risk Assessment Systems) (DyRiAS, 2019a). The philosophy behind the tool is that violence is not linked to character traits, but that it occurs at the end of a process in which perpetrator, victim, and situational influences interact. Users answer several questions about the behavior of the person they want to check. The system organizes the answers into an overview and adds guidance (DyRiAS, 2019c) on how to handle the situation. It remains to be seen which institutions will implement the system, as it does not seem to have been built for the police, but rather for other institutions, such as schools and offices.
Other DyRiAS applications are already in use. For example (DyRiAS, 2019b), protection centers for women in the cities of Singen and Weimar use DyRiAS-Intimpartner (DyRiAS intimate partner) to determine the threat levels of abusive male partners. Another example can be found at the Landesschulamt Sachsen-Anhalt (Federal State Education Agency), which deploys DyRiAS-Schule (DyRiAS school) to analyze the threat levels of students who have the potential to go on a school shooting rampage. In another case, Swiss employers screen employees with DyRiAS-Arbeitsplatz (DyRiAS workplace) to assess their potential for violent behavior or stalking. After a member of staff fills in a questionnaire about the person they suspect of potentially acting violently, the tools “automatically create a report” that provides the user with a threat score.
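What the DyRiAS products share is a pattern: questionnaire answers in, an automatically generated report with a threat level and handling guidance out. A minimal sketch of that pattern follows; the items, level cutoffs, and guidance texts are hypothetical, not DyRiAS’s actual content.

```python
# Sketch of the report pattern shared by the DyRiAS tools: questionnaire
# answers in, an auto-generated report with threat level and guidance out.
# Items, level cutoffs, and guidance texts are hypothetical.
GUIDANCE = {
    "low": "Document the case; no immediate action required.",
    "elevated": "Involve a trained threat-management team.",
    "high": "Contact law enforcement and protect potential victims.",
}

def build_report(answers: dict[str, bool]) -> str:
    """Turn yes/no answers into a short report with a threat level."""
    hits = [q for q, yes in answers.items() if yes]
    level = "high" if len(hits) >= 4 else "elevated" if len(hits) >= 2 else "low"
    lines = [f"Threat level: {level}", "Observed warning behaviors:"]
    lines += [f"  - {q}" for q in hits] or ["  (none reported)"]
    lines.append(f"Guidance: {GUIDANCE[level]}")
    return "\n".join(lines)

print(build_report({"threats uttered": True, "weapon interest": True,
                    "fixation on target": False}))
```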
Policy, oversight and debate
Policy – government and parliament
German AI Strategy
The German AI Strategy (Die Bundesregierung, 2019), published in November 2018, has three key aims:
- To make Germany, and Europe, leaders in the development and deployment of AI and to keep Germany competitive at an international level
- To ensure the responsible development and deployment of AI for the common good
- To embed AI ethically, legally, culturally, and institutionally into society, shaped by politics and based on a broad dialogue across society
The strategy earmarks 3 billion euros to fund AI projects, and, following a written query from the Green Party (Deutscher Bundestag, 2019), the Federal Government disclosed how the funds will be distributed.
The lion’s share goes to the Federal Ministry of Education and Research (170 million euros). This will consolidate existing German AI research institutions, fund additional professorships, attract international experts to Germany, improve academic teaching, and allow for investment in junior scientists. A slightly smaller portion goes to the Federal Ministry of Economic Affairs and Energy (147 million euros), followed by the Federal Ministry of Labor and Social Affairs (74 million euros), while the rest is divided between seven further ministries and the Federal Chancellery.
As part of the AI Strategy, the German government decided to create an ‘AI Observatory’ at the Federal Ministry of Labor and Social Affairs. It will carry out technology assessments of the spread and effects of AI on the labor market and society in general. The aim is to foster European inter-institutional cooperation, develop a shared understanding of AI, and design a framework of guidelines and principles for the deployment of AI in the labor sector.
The AI Strategy will be updated, and many projects concerned with issues around research, innovation, infrastructure, administration, data sharing models, international standards, and civic uses are currently in the making.
German Data Strategy
In November 2019, the government published the key points it would like to include in the forthcoming data strategy. The strategy’s goals are to make data more accessible, support responsible data use, improve data competencies in society, and make the state a pioneer in data culture. As data plays an integral part in ADM systems, the strategy will have a strong influence on which applications can be rolled out in the future. The development process of the data strategy was accompanied by expert hearings with participants from civil society, the economy, and science, and the public was able to take part online.
Data Ethics Commission
In 2018, the German government appointed the Data Ethics Commission (Datenethikkommission, 2019) to discuss the ethical implications of Big Data and AI. A year later, the Commission presented a report with several recommendations concerning algorithmic systems. A general requirement it called for is that algorithms should have a human-centric design. This includes: consistency with societal core values, sustainability, quality and efficacy, robustness and security, minimal bias and discrimination, transparency, explainability and traceability, and clear accountability structures. Interestingly, the Commission also introduced the concept of “system criticality”, especially with respect to transparency and control. With this approach, regulation would take into account the likelihood and scope of the potential damage caused by an algorithmic system. In addition, the Commission recommended that algorithms be audited according to a “criticality pyramid”: the criticality of an algorithmic system is assessed on a scale from 1 (not critical, no action needed) to 5 (high potential for damage, resulting in a partial or complete ban of the system). Between these two extremes, the Commission proposes heightened transparency obligations, ex-ante approval mechanisms, and continuous supervision by oversight bodies.
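The report itself only anchors the two ends of the scale; mapping the intermediate levels 2–4 to the specific measures the Commission names (transparency obligations, ex-ante approval, continuous supervision) is our illustrative reading, sketched below.

```python
# Sketch of the Data Ethics Commission's "criticality pyramid": each
# criticality level maps to an escalating oversight regime. Levels 1 and 5
# follow the report; assigning levels 2-4 to specific measures is our
# illustrative reading, not a table the Commission itself fixed.
OVERSIGHT = {
    1: "not critical - no action needed",
    2: "heightened transparency obligations",
    3: "ex-ante approval mechanisms",
    4: "continuous supervision by an oversight body",
    5: "high damage potential - partial or complete ban of the system",
}

def required_oversight(criticality: int) -> str:
    """Look up the oversight regime for a criticality level (1-5)."""
    if criticality not in OVERSIGHT:
        raise ValueError("criticality must be an integer from 1 to 5")
    return OVERSIGHT[criticality]

print(required_oversight(3))  # "ex-ante approval mechanisms"
```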
Enquete Commission on Artificial Intelligence
At the end of 2019, one year after the commission was installed in parliament, its working groups published interim results from the expert hearings and discussions carried out during the previous year. The working group on the economy (Enquete-Kommission Künstliche Intelligenz, 2019c) stated that AI holds the potential to increase productivity and improve sustainability, as well as to reinforce social injustices and restrict participation in the labor market and society. The working group on government and the state (Enquete-Kommission Künstliche Intelligenz, 2019b) stressed the state’s duty of care when rolling out AI systems, and it proposed mechanisms that support citizens’ ability to make decisions and to trace decisions back. Bearing in mind that health data is very sensitive and needs to be highly protected, the working group on health (Enquete-Kommission Künstliche Intelligenz, 2019a) sees AI as one potential driver of improved diagnoses, therapies, and care for the sick and elderly.
German-French Working Group on Disruptive Innovations and Artificial Intelligence
To face global competition in the area of technology and innovation, especially in AI, a working group (Deutsch-Französische Parlamentarische Versammlung, 2019) on the topic was established between the French and German parliaments in 2019. The group aims to coordinate the work of the two governments on AI issues and to foster a European framework for enhanced innovation capabilities, guided by European values.
Face Recognition at Südkreuz
In 2017/18, the police ran a pilot project on face recognition at the Südkreuz train station in Berlin. It was criticized by civil rights and data protection activists, and it produced a high number of false positives and negatives. When Horst Seehofer, Minister of the Interior, introduced a bill that would authorize the federal police to roll out face recognition software nationwide, his plans were met with criticism from many parties. Early in 2020, Minister Seehofer withdrew the bill.
Civil Society and Academia
Civil Society
Civil society is active in the field of digitalization in general, and around ADM processes in particular. An important stream in public discussions focuses on AI and sustainability. A highlight in this debate was the Bits & Bäume (Bits and Trees) conference (Bits und Bäume, 2019) in 2018, which aimed to connect the ecology and technology communities to discuss the interaction between digitization issues and the environment. In 2019, after analyzing the complex interactions between the two topics, the German Advisory Council on Global Change (WBGU, 2019) presented a thorough report called Towards Our Common Digital Future. The report pointed out a gap in the Sustainable Development Goals (SDGs) concerning ADM processes, and called for traceability, legally enforceable rights, and a regulatory discussion around liability for algorithmic systems.
The Bertelsmann Foundation’s Ethics of Algorithms project (Bertelsmann Stiftung, 2019b) developed the Algo.Rules (Bertelsmann Stiftung, 2019a), a set of nine guidelines on how to design an algorithmic system. In addition, the foundation published working papers on a broad range of issues and conducted studies on the societal state of knowledge about, and acceptance of, algorithms in Germany and Europe (Grzymek, 2019). The Stiftung Neue Verantwortung think tank (Stiftung Neue Verantwortung, 2019) ran a two-year project on algorithms and the common good, focusing on the importance of a strong civil society, health issues, recruitment, predictive policing, corporate digital responsibility, and the charity sector. The investigative platform Netzpolitik.org (Netzpolitik.org, 2019) and the watchdog organization AlgorithmWatch (AlgorithmWatch, 2020a) continue to report on new ADM cases (AlgorithmWatch, 2020b). AlgorithmWatch also carries out research projects on platform governance, human resource management, and credit scoring, as well as mapping the state of ADM in Germany and Europe.
Academia
Academic discourse on ADM occurs at different levels: some of it focuses on developing ADM systems, while other parts focus on research into the ethical implications. An example of research into ADM in action can be found in the health sector. The Federal Ministry for Economic Affairs and Energy funds two medical projects (within the framework of the Smarte Datenwirtschaft (Smart Data Economy) competition) (Hans Böckler Stiftung, 2018) in which AI is supposed to be used on patient data. In 2019, the Smart Medical Doctor – From Data to Decision project began (Bundesministerium für Wirtschaft und Energie, 2019). It aims to develop a medical data platform on which AI anonymizes and prepares data from hospital cases to make them available for use in Ada DX, a diagnosis support tool for doctors. Ada Health GmbH, Helios Kliniken GmbH, the University Hospital Charité, and the Beuth University of Applied Sciences are all involved in the project. A second project, called Telemed5000 (Telemed5000, 2019), aims to develop a remote patient management system that allows doctors to monitor large numbers of cardiology patients. This German-Austrian collaboration aims to combine Internet of Things (IoT) systems with deep learning technology to monitor heart patients.
There are also several research projects underway that look into the societal implications of algorithms in Germany. A few examples worth mentioning are the Weizenbaum Institut – Research for a Networked Society (Weizenbaum Institut, 2019); the Algorithm Accountability Lab (TU Aachen, 2019) and the Socioinformatics degree programs at the Technical University of Kaiserslautern; and the newly founded Politics Department at the Technical University of Munich (TUM, 2019).
Key takeaways
The narratives around ADM run between the poles of technological solutionism, where AI and ADM are praised as the fix for all our societal problems, and technophobic rejection of everything that is new. In many cases, the truth lies somewhere in between, and it is important to learn more about the intricacies of ADM systems in order to judge good from bad. Maybe the cure for cancer will be found with the help of machine learning, and ADM can potentially help to manage the task of reducing carbon emissions. At the same time, the ubiquitous datafication of many aspects of life, and their incorporation into ADM systems, can pose dangers to fundamental rights and must be scrutinized for injustices.
One thing is certain: things are going to change. The shifts that ADM systems bring to the economy, science, labor, health care, finance, social services, and many other sectors will need to be accompanied by a society that is ready to critique flaws and, at the same time, embrace changes where they are useful and wanted. For too long, discussions about ADM linked back to autonomous driving or the image of robots replacing humans. It is timely and necessary for society to learn more about the challenges and opportunities that come with ADM systems. At a governmental level, the Enquete Commission on AI is a good start for policymakers to get a better perspective on the topic. Civil society actors, such as AlgorithmWatch, continuously labor to shed light on the workings of hidden algorithms. What is needed now is a broad societal discussion about how ADM can make life better, and where we see red lines that should not be crossed.