The use of automated decision-making (ADM) in the Netherlands came to international attention in late 2019 and early 2020, following the intervention of one of the UN’s top human rights experts in a high-profile court case in The Hague concerning the controversial SyRI (System Risk Indication) system. Professor Philip Alston, the UN Special Rapporteur on extreme poverty and human rights, described the case as the first legal challenge he was aware of that “fundamentally and comprehensively” contested the use of an ADM system in the welfare state on human rights grounds. Particularly noteworthy in the Court’s ruling, which declared SyRI to be in violation of Article 8 of the European Convention on Human Rights (Rechtbank Den Haag, 2020), is the finding that governments have a “special responsibility” to safeguard human rights when implementing new technologies such as these automated profiling systems (Idem, par. 6.84). The case sparked a major public and political debate on ADM systems and the use of algorithms in the Netherlands, particularly by the government.
In contrast, the previous edition of Automating Society described how ADM and artificial intelligence (AI) were “predominantly discussed as part of the larger Dutch strategy on digitization,” and that no work on a national agenda specifically focused on ADM and AI had been done in the Netherlands. Yet, in the space of a year, a lot has changed. As we will discuss in this chapter, the Dutch government has now published a national Strategic Action Plan for Artificial Intelligence, with actions including major government investment in research on the legal aspects of decision-making algorithms, and on the transparency, explainability, and supervision of algorithms. Indeed, in summer 2019, the main coalition government party, Volkspartij voor Vrijheid en Democratie (VVD), published a report proposing a new supervisory authority for the use of algorithms (Middendorp, 2019a) [here]. In a similar vein, in autumn 2019, another coalition government party, Democraten 66 (D66), published proposals for the establishment of an oversight body for ADM systems. The proposals suggested prohibiting the use of certain ADM systems (like SyRI) and automated face recognition systems until proper legislation and supervision are first put in place (D66, 2019) [here].
In this edition of Automating Society, we aim to highlight new applications of ADM systems in the Netherlands. This chapter provides updates on both the political and public debates, and the regulatory and self-regulatory measures concerning ADM and AI. We also look at how civil society and the media in the Netherlands continue to play a crucial role in the current public debate on ADM systems, and the government’s use of algorithms that affect individuals. For example, in summer 2019, the Dutch public broadcaster, Nederlandse Omroep Stichting (NOS), revealed the widespread use of predictive algorithms by government agencies, based on confidential inventories on the use of such algorithms obtained by the broadcaster (Schellevis & de Jong, 2019) [here]. This report led the Chairman of the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) (AP) to state publicly that the Dutch government must be more transparent about its use of predictive algorithms (NOS, 2019) [here]. Indeed, in November 2019, the AP announced that two of its high-risk focus areas for supervisory work during the 2020-2023 period would be digital government, and the use of algorithms and AI (Autoriteit Persoonsgegevens, 2019) [here]. The AP is concerned about the increased use of algorithms and AI by more and more government authorities and private companies, which carries the risk of harmful effects: the irresponsible use of algorithms can lead to incorrect decisions, exclusion, and discrimination. The AP also highlighted an ongoing trend of local government and law enforcement authorities gaining access to large amounts of data on individuals, and added that it is important that government authorities handle such data responsibly.
The chapter also shows how it is not just the data protection authority that is increasing its oversight of the use of ADM systems, but that other regulatory authorities are too. A prominent example of this is the Dutch Central Bank (De Nederlandsche Bank) (DNB), which supervises financial institutions in the Netherlands. In 2019, the DNB recognized the risks associated with the use of AI by financial institutions in decision-making processes (De Nederlandsche Bank, 2019) [here]. As a result, the DNB published general principles for the use of AI in the financial sector, in order to ensure that financial institutions use AI in a responsible manner. The DNB will be examining the issue of responsible use of AI in its supervision of financial institutions in 2020.
A catalog of ADM cases
Alternative dispute resolution
Over the past few years in the Netherlands, the use of online and automated dispute resolution systems as an alternative to costly and time-consuming court cases has increased. Organizations such as the E-Court and Stichting Digitrage (Digitrage Foundation) are the best-known examples. Legally, these dispute resolution mechanisms are a kind of automated arbitration, where both parties contractually agree to use one specific automated method to resolve a dispute. At the E-Court, the entire process is conducted online. The parties upload the required documents and an ADM system creates a verdict. A (human) arbiter then reviews that verdict and signs it. Especially in sectors with a high turnover of small court cases, these fast and cheap alternatives are attractive. For instance, in 2018 most Dutch health insurers, and many debt collection agencies, included these automated dispute resolution companies in their policy agreements (Kuijpers, Muntz & Staal, 2018) [here].
However, as the use of these automated online arbitration systems increased, a large societal backlash ensued. The E-Court, for example, was criticized for the opacity of the underlying system that produces its verdicts. This was because the verdicts were not published, and because people were inadequately informed that they had the choice of going to a state court (Kuijpers, Muntz & Staal, 2018) [here]. There was extensive negative reporting on these practices in the media, and a report by the Landelijke Organisatie Sociaal Raadslieden (National Association of Social Lawyers) heavily criticized the E-Court (Sociaal Werk Nederland, 2018) [here]. These criticisms and concerns grew to such an extent that the lower courts decided to temporarily stop confirming the arbitration decisions of the E-Court, effectively shutting down its operation. This resulted in an extended legal battle (e-Court, 2018) [here]. In February 2020, the Dutch state and e-Court were still engrossed in this dispute, now arguing over whether a settlement had been reached (Driessen, 2020) [here]. Despite these concerns, the Minister for Legal Protection recently stated that he views digital innovation and online arbitration, such as the E-Court, as positive developments, as long as they operate within the bounds of the law (Minister voor Rechtsbescherming, 2019a) [here].
Health care
A striking example of ADM is the growth tracking software, called Growth Watch, used in the Dutch health care system. The software tracks the development of children and automatically flags discrepancies that may indicate either disease or child abuse (RTV Utrecht, 2019) [here]. The software is said to contain several anomalies that make it unreliable for children under a certain age, or with a specific health history. In one instance, a child was unjustly separated from its parents (RTV Utrecht, 2018) [here] and (van Gemert et al., 2018) [here]. This case prompted doctors to advise against using the software without a proper disclaimer or major adaptations (van Gemert, 2019) [here].
Journalism
Automated news reporting is also on the rise in the Netherlands. The previous Automating Society report discussed several news agencies that had implemented automated recommender systems that semi-automatically decide which articles are shown to each visitor or subscriber. These trials have continued and have been deemed successful. In the past year, ADM in journalistic reporting has been taken a step further, and there are now several examples of automatically generated news articles. For example, during the provincial elections of March 2019, the national broadcaster NOS published automatically generated articles with updates on the election results. The system works with standard texts and automatically inserts the latest results (Waarlo, 2019) [here] and (Duin, 2019) [here]. A large private broadcaster, RTL, also recently implemented an automatic news generator. The software is called ADAM, short for “Automatische Data Artikel Machine” (automatic data article machine), and is mainly used to create local news, such as traffic updates. ADAM will also be used to generate articles on trends where much of the information is publicly available, such as national trends regarding schools or hospitals. The initiative was partially funded by the Google Digital News Initiative (Wokke, 2019) [here] and (Bunskoek, 2019) [here].
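The template-based approach described above, where a standard text is filled with the latest figures, can be sketched in a few lines. This is a hypothetical, simplified illustration: the template and field names are invented and do not reflect NOS’s or RTL’s actual software.

```python
# A minimal sketch of template-based news generation: a standard text
# with the latest figures inserted. Template and field names are invented.

TEMPLATE = (
    "With {counted:.0%} of votes counted in {municipality}, "
    "{party} leads with {share:.1f}% of the vote."
)

def generate_update(result: dict) -> str:
    """Fill the standard text with the latest result data."""
    return TEMPLATE.format(**result)

article = generate_update({
    "municipality": "Utrecht",
    "party": "Party A",
    "share": 21.4,
    "counted": 0.87,
})
print(article)
```

Each time new results arrive, the same template is re-rendered with fresh data, which is what allows such systems to publish updates far faster than a human editor could.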
Finance and trading
One of the largest Dutch banks, ING, has created several financial services based on ADM systems. For instance, the ING Smart Working Capital Assistant uses ADM to predict events “that have a positive or negative impact on the working capital of the company in question”. Such assessments are sent directly to the client’s smartphone, and the client can then act on the information accordingly (Visser, 2018) [here]. Another example is the ING Katana Lens, which the bank created in cooperation with a pension fund. This web-based application is aimed at helping investors by giving them predictions of price trends for specific bonds. The ADM system works by collecting and analyzing past and current price data, and predicting future price trends based on past patterns (Banken.nl, 2018) [here] and (Mpozika, 2019) [here]. ING also has a similar product, Katana, but this is aimed at the sales side of bond trading rather than investors (ING, 2019) [here].
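As a rough illustration of the general idea behind such trend prediction — inferring a trend from past prices and extrapolating it — a minimal sketch might fit a line to recent price history. This is not ING’s actual model, which is proprietary and far more sophisticated; the function name and data are invented.

```python
# Illustrative only: fit a least-squares line to a bond's price history
# and extend it one step to "predict" the next price.

def predict_next(prices: list[float]) -> float:
    """Fit a straight line to the price history and extend it one step."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # predicted price at the next time step

history = [100.0, 100.5, 101.0, 101.5, 102.0]  # a perfectly linear example
print(predict_next(history))  # 102.5 for this linear series
```

The point of the sketch is only that past patterns drive the prediction; real systems combine many data sources and models.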
Law enforcement initiatives
Dutch law enforcement is experimenting extensively with ADM systems. The police are conducting several local pilots, such as the project in the municipality of Roermond, discussed earlier, and several other national projects. ADM is also used to fine people for using a mobile device whilst driving. The Dutch national police issued a statement that, from October 2019 onwards, the police would use advanced cameras to automatically detect drivers using phones while operating a vehicle, as this is illegal in the Netherlands. These so-called “smart” cameras automatically flag when a driver is using a mobile device. The camera then automatically takes a picture of that person and registers the license plate. This information is subsequently sent to a police officer for review. When the police officer determines that the information is correct, a fine of 240 euros is automatically sent to that driver’s home address (RTL nieuws, 2019) [here] and (Politie, 2019b) [here].
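The human-in-the-loop workflow described above, in which automated detection is always followed by an officer’s review before a fine is issued, can be sketched schematically. All names and structures here are illustrative; only the 240-euro fine comes from the source.

```python
# Schematic sketch of the review step: the system never issues a fine
# without a human officer confirming the automated detection.

from dataclasses import dataclass

FINE_EUROS = 240  # the fine reported for phone use while driving

@dataclass
class Detection:
    license_plate: str
    photo_id: str  # reference to the automatically taken picture

def review_and_fine(detection: Detection, officer_confirms: bool) -> int:
    """Return the fine to issue; zero if the officer rejects the flag."""
    return FINE_EUROS if officer_confirms else 0

d = Detection(license_plate="XX-123-Y", photo_id="cam042/0001")
print(review_and_fine(d, officer_confirms=True))   # confirmed: 240
print(review_and_fine(d, officer_confirms=False))  # rejected: 0
```

The design point is that the automated system only proposes; the legally consequential decision rests with the reviewing officer.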
The police are also using ADM to try and solve “cold cases” – or unsolved crimes. The national police conducted an experiment using artificial intelligence to analyze several cold cases, and to help prioritize which cases should be reopened. This experiment has been deemed successful and the police will continue using ADM to analyze cold cases (Politie, 2018) [here] and (Tweakers Partners, 2019) [here]. Furthermore, the national police are actively collaborating with academia to develop ways in which artificial intelligence can help law enforcement. In 2019, Utrecht University established the National Police Artificial Intelligence Lab, where seven PhD candidates will, in close collaboration with the police, research how artificial intelligence can be used in a law enforcement context. This lab builds on the work already done in the Police Data Science Lab that the University of Amsterdam created in 2018 (Politie, 2019a) [here]. Another example of the Dutch police using ADM is their online reporting system for Internet fraud. ADM is used to adjust the reporting questionnaire, give people advice, and automatically alert the police when there is a high likelihood that a crime has been committed (Blik op nieuws, 2019) [here].
Of specific importance is the use of the predictive policing tool ‘Crime Anticipation System’ (CAS, Criminaliteits Anticipatie Systeem). Based on past police reports and conviction records, CAS predicts which places have the highest risk of specific types of violent crimes, with the aim of spreading police presence effectively throughout a city to prevent these crimes from happening (Minister van Veiligheid en Justitie, 2015) [here]. On a bi-weekly basis, the system produces a color-coded grid of 125 by 125-meter cells indicating the highest-risk areas, where police presence is subsequently increased. Different versions of CAS have been in use since 2014 (Politie, 2017).
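A toy sketch can convey the grid-based idea behind such a system: past incidents are binned into 125 by 125-meter cells and each cell’s count serves as a crude risk score. This assumes incidents are given as coordinates in meters; CAS itself combines many more data sources and an actual predictive model rather than raw counts.

```python
# Toy sketch: bin past incident locations into 125 m x 125 m grid cells
# and use the per-cell count as a naive risk score.

from collections import Counter

CELL = 125  # grid cell size in meters

def risk_grid(incidents):
    """Count past incidents per grid cell; incidents are (x, y) in meters."""
    return Counter((int(x // CELL), int(y // CELL)) for x, y in incidents)

past_incidents = [(10, 10), (60, 100), (300, 40), (80, 40)]
grid = risk_grid(past_incidents)
hotspot, count = grid.most_common(1)[0]
print(hotspot, count)  # cell (0, 0) holds three of the four incidents
```

Color-coding such a grid and dispatching patrols to the highest-scoring cells is, in essence, the output CAS produces every two weeks.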
Finally, the Dutch police’s Catch face recognition system also merits particular attention. An investigation published by VICE in July 2019 uncovered the existence of over 2.2 million images in the Catch system, covering a total of 1.3 million individuals who may be suspected of committing a serious criminal offense (van Gaal, 2019) [here]. In November 2019, the police confirmed that its face recognition system now has access to a complete database of people suspected of serious crimes. Current figures are unknown, but in 2017, 93 suspects were identified using the system (NU.nl, 2019) [here]. Furthermore, the national police introduced a smartphone app last year that allows police officers to send images to Catch (NU.nl, 2018) [here].
Policy, oversight and debate
Having described the ADM systems currently in use in the Netherlands, this section examines how Dutch society is debating these developments. We will look at the debate from the perspective of the government, civil society and academia.
Government and parliament
It was noted in the last Automating Society report that little work had been done on a national agenda concerning ADM or artificial intelligence by the Dutch government. However, over the last year, there have been major developments in terms of new national agendas concerning the use of ADM systems and artificial intelligence.
Strategic Action Plan for Artificial Intelligence
One of the most important developments at a policy level has been the Dutch government’s new Strategisch Actieplan voor Artificiële Intelligentie (Strategic Action Plan for Artificial Intelligence) (SAPAI), which was launched in October 2019 and contains a number of initiatives in relation to ADM and algorithms (De Rijksoverheid, 2019) [here] (see also the letter to parliament from the Minister for Legal Protection, 2019b) [here]. The SAPAI is built upon three tracks:
Track 1 involves “capitalising on societal and economic opportunities” presented by AI, and how the Dutch government will make optimal use of AI in the performance of public tasks. The government also intends to engage in intensive public-private partnerships to realize the benefits of AI.
Track 2 is “creating the right conditions”. This includes ensuring that the Netherlands has access to more usable data for AI applications to realize better AI developments.
Importantly, track 3 of the SAPAI is “strengthening the foundations”. Under this track, the government will ensure that public values and human rights are protected. In this regard, the SAPAI’s actions will include research into the legal aspects of decision-making algorithms, research into the risks of face recognition technology, and European certification of AI applications in the administration of justice. In addition, there will be action on the effective supervision of algorithms, including research into their transparency and explainability. The Dutch government will also establish a transparency lab for government organizations, and stimulate the participation of Dutch companies and public organizations in the pilot phase of the ethical guidelines for AI from the European Commission’s High-Level Expert Group.
The SAPAI also tasked the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Dutch Research Council) (NWO) with funding research on explainable, socially conscious, and responsible AI. In November 2019, the NWO launched the first national AI research agenda, the Artificial Intelligence Research Agenda for the Netherlands (NWO, 2019) [here].
Dutch Digitalization Strategy 2.0
The previous edition of Automating Society noted that the first Nederlandse Digitaliseringsstrategie: Nederland Digitaal (Dutch Digitalization Strategy) was launched in 2018. In 2019, the Dutch government built upon this by implementing an updated Nederlandse Digitaliseringsstrategie 2.0 (Dutch Digitalization Strategy 2.0) (De Rijksoverheid, 2019b; 2019c) [here and here]. This strategy sets out the government’s priorities in relation to the impact of digitization on society, the economy, and government. These include applying AI solutions to resolve social issues and stimulate economic growth, as well as developing digital inclusion, digital government, digital connectivity, and digital resilience. Notably, the strategy recognizes that AI can put pressure on key basic rights and public values (such as human dignity, autonomy, and the prohibition of discrimination). As a result, the importance of inclusiveness, transparency, and the explicability of algorithms will be key issues. Furthermore, the potential and risks of algorithm-based decision-making will also be assessed in the context of safeguarding public values, and the government will engage in case studies in the areas of social security, law, self-driving cars, and content moderation. In March 2019, the Ministry of the Interior and Kingdom Relations also presented the NL DIGITAAL: Data Agenda Government to the House of Representatives (Ministerie van Binnenlandse Zaken, 2019a) [here].
In addition, in February 2019, the first Conferentie Nederland Digitaal (Digital Netherlands Conference) was held, bringing together government ministers, industry, researchers, and civil society, in order to better cooperate on digitization (Nederland Digitaal, 2019) [here]. Notably, Nederland ICT, an organization representing industry, presented its AI ethical code of conduct to the State Secretary for Economic Affairs.
Finally, the previous Automating Society report noted that the government was expected to respond to a report on algorithms and fundamental rights during the summer of 2018. In March 2019, the government sent its response to parliament. In it, the government acknowledges that new technologies can have a large impact on fundamental rights. The government mainly focused on the need for procedural safeguards and fairness. Central to its policy are neutrality, non-discrimination, transparency, and data protection. The government is looking into additional legal safeguards for reducing bias in its big data initiatives, and it is creating guidelines to be used for ex-ante review. Additionally, the government stressed the importance of robust data protection laws and effective enforcement (Minister van Binnenlandse zaken, 2019b) [here].
Supervision of ADM Systems and Algorithms
Members of the current Dutch government have also been quite active in the debate concerning ADM systems and algorithms. As mentioned in the introduction above, in May 2019, the main coalition government party, VVD, published a report proposing a new supervisory authority for the use of algorithms (Middendorp, 2019a) [here]. The following month, Jan Middendorp, a government MP, sent an initiative to the House of Representatives proposing supervision of the government’s use of algorithms (Middendorp, 2019b) [here].
Furthermore, in November 2019, another coalition government party, D66, also published a report with proposals to limit the use of ADM systems by government and technology companies (D66, 2019) [here]. The proposals include:
- creating a proper legal framework for the use of ADM systems, under which linking and analyzing individuals’ data would only be permissible where required for a compelling purpose, and with sufficient guarantees for protecting individuals’ human rights;
- establishing a central overview of the number of databases, the degree of linking, and the use of ADM systems;
- establishing an algorithm authority, and a reporting obligation for algorithm use;
- prohibiting face recognition systems and certain uses of algorithms until there is proper legislation and oversight.
Algorithms and the Administration of Justice
In the previous Automating Society report, it was noted that the Minister for Legal Protection had promised to send a letter to parliament in fall 2018 on the possible meaning of ADM and AI for the judicial branch. At the time of publication, this letter had not been received. However, in December 2018, the Minister sent a letter to parliament about the application and use of algorithms and artificial intelligence in the administration of justice. The overall aim of the letter was to give an overview of current experiments in the field of ADM and the administration of justice, and to identify benchmarks for the judicial use of ADM, based on fundamental rights. These benchmarks are accessibility, internal and external transparency, speed, the independence and impartiality of the judge, and the fairness of the trial. The government goes on to list several elements of the judicial process where ADM systems could be used, and also identifies possible pitfalls. The overall conclusion is to tread carefully, to conduct experiments separate from actual cases, and to properly assess each experiment to see what the added value of the technology is (Ministerie van Justitie en Veiligheid, 2018) [here]. Finally, in March 2020, the Minister for Legal Protection sent a letter to parliament in response to questions on the algorithmic analysis of court judgments (Minister voor Rechtsbescherming, 2020) [here]. The Minister set out the benefits of algorithmic analysis of judgments, including allowing more insight into judicial reasoning, and pointed to a scheme at the District Court of East Brabant on AI analysis of court documents and cases. The Minister confirmed that only 2-3% of court rulings are currently published online in the Netherlands, though this is planned to increase to 5% over the next three years, and that there is currently no specific legal framework for applying AI to court rulings in the Netherlands.
Algorithms and Law Enforcement
The extensive use of ADM systems by the Dutch police, as described in the previous section, became a topic of debate in the Dutch Parliament after the Minister of Justice and Safety outlined the government’s policy on the use of AI by the police (Kamerstukken, 2019). In response, the Parliamentary Committee for Justice and Safety drafted more than 80 questions. The Minister answered in February 2020, emphasizing, firstly, that the Dutch police are not currently using AI systems on a large scale, with the online Internet fraud reporting system as the only concrete example. However, the Minister did confirm that algorithmic systems, as outlined above, are used throughout the Dutch police, and outlined several fields where algorithmic or AI systems are being developed. Examples include speech-to-text software, image/face recognition, and natural text processing, all intended to shorten the time needed for desk research. Further, the letter elaborated on the ethical, legal, and privacy safeguards in place for the use of AI. Crucially, the Minister clearly stated that it is the government’s conviction that the future development and use of AI systems for the police is necessary (Kamerstukken, 2020).
Regulatory and self-regulatory measures
Automatic Number Plate Recognition
On 1 January 2019, a law on Automatic Number Plate Recognition (ANPR), enacted in 2017, came into effect [here]. The civil society organization Privacy First is currently preparing legal proceedings, arguing that the law violates international and European privacy and data protection laws (Privacy First, 2019) [here].
Data Processing Partnership Act
In the previous edition of Automating Society, it was reported that a public consultation had been completed on a proposal for the Wet gegevensverwerking door samenwerkingsverbanden (Data Processing Partnership Act). The act aims to provide a legal basis for public-private cooperation, and to make collaboration between such parties easier in relation to the processing of data, specifically for surveillance or investigation purposes (e.g., to prevent crimes or detect welfare fraud). In June 2019, the Minister for Justice and Security sent a letter to parliament on the progress of the proposed law [here]. The Ministerraad (Council of Ministers) has agreed on the bill, and it has been sent to the Raad van State (Council of State) for its advice.
Data Protection Authority focus on AI and algorithms
As mentioned in the Introduction, the Autoriteit Persoonsgegevens (Dutch Data Protection Authority) (AP) announced in November 2019 that two of its high-risk focus areas for supervisory work during the 2020-2023 period would be digital government, and the use of algorithms and AI (Autoriteit Persoonsgegevens, 2019a) [here]. This is because the AP is concerned about the increased use of algorithms and AI by more government authorities and private companies, which carries the risk of harmful effects: the irresponsible use of algorithms can lead to incorrect decisions, exclusion, and discrimination. The AP also highlighted the continued trend of local government and law enforcement authorities gaining access to large amounts of data on individuals, and stressed that government authorities must handle such data responsibly.
In addition, the AP announced in 2019 that it would be investigating the development of smart cities by municipalities in the Netherlands, to ensure that the privacy of residents and visitors is protected. The AP noted that municipalities are increasingly using data-driven and automated decisions, and that the digital tracking of people in (semi-)public places is an invasion of privacy that is only permitted in exceptional cases (Autoriteit Persoonsgegevens, 2019b) [here].
Finally, in April 2020, in the context of the coronavirus crisis, the AP released a statement on possible government use of location data from telecommunication companies to track the spread of the virus. The AP stated that, under Dutch privacy law and the Telecommunication Act (Telecommunicatiewet), telecom companies “may not simply share customer data with the government”. The AP concluded that statutory regulation would be needed, which would be required to be proportionate and contain sufficient safeguards (Autoriteit Persoonsgegevens, 2020) [here].
Dutch Central Bank recommendations on AI and ADM
In July 2019, the Dutch Central Bank (De Nederlandsche Bank) (DNB) published a discussion document titled “General principles for the use of Artificial Intelligence in the financial sector”. This document was created in response to the growing use of automated systems within the financial sector and is meant to give guidance to financial service providers planning to use ADM systems. The DNB explicitly recognizes the risks involved in using ADM in a financial context. The bank summarized its recommendations in the acronym “SAFEST”, which stands for soundness, accountability, fairness, ethics, skills, and transparency. The guidelines are meant to start a sector-wide conversation on the use of ADM. The DNB called on relevant stakeholders to send input, and will report on the results at the end of 2020 (De Nederlandsche Bank, 2019) [here].
Civil Society and Academia
The previous Automating Society report listed several civil society organizations doing important work on ADM, including Bits of Freedom [here], De Kafkabrigade (The Kafka Brigade) [here], Platform Bescherming Burgerrechten (Platform for the Protection of Civil Rights) [here], and Privacy First [here]. In the past year, these organizations have continued their efforts. For example, Bits of Freedom is actively campaigning against face recognition software and it recently published a report arguing for a ban on face recognition in public spaces. For that report, they conducted an experiment where they showed how even one publicly accessible camera in the inner city of Amsterdam made it possible to identify the people walking past (Hooyman, 2019) [here]. Furthermore, the Platform for the Protection of Civil Rights has been one of the driving forces behind, and the main claimant in, the SyRI system court case. This organization has been campaigning against the system for years (Platform Bescherming Burgerrechten, 2019) [here].
Consistent with the Dutch government’s policy of promoting public-private partnership, there are now several prominent examples of such partnerships. Two examples are the launch of the Strategic Action Plan for AI by the State Secretary for Economic Affairs in 2019, and the Dutch AI Coalition, which was launched the same year (Nederlandse AI Coalitie, 2019) [here]. The coalition is a public-private partnership comprised of over 65 parties from industry, government, education, and research institutions (Martens, 2019; Wageningen University & Research, 2019) [here and here]. In addition, the Kickstart AI initiative is also relevant. This brings together a range of large Dutch companies, including Ahold Delaize, ING, KLM, Philips, and the NS, to fund AI research and education (Kickstart AI, 2019) [here].
There is also a great deal of scientific research on ADM systems and AI taking place in Dutch universities. As mentioned previously, the Dutch Research Council (NWO) launched the first national AI research agenda in 2019. In addition, one of the most significant research initiatives has been the establishment of the national Innovation Center for Artificial Intelligence (ICAI). This is a national initiative focused on joint technology development between academia, industry and government in the area of artificial intelligence. The ICAI involves a number of research institutions, including Delft University of Technology, Radboud University, University of Amsterdam, Utrecht University, and Vrije Universiteit Amsterdam (Innovation Center for Artificial Intelligence, 2019) [here]. There are a number of labs established under this collaboration, such as the National Police Lab AI. As mentioned above, at Utrecht University, the National Police Lab AI works in collaboration with the police, to research how artificial intelligence can be used in a law enforcement context.
Furthermore, at the University of Amsterdam, there is the Digital Transformation of Decision-Making research initiative. The purpose of this initiative is to examine how automated decision-making systems are replacing human decision-makers in a range of areas, from justice, to media, commerce, health, and labor. In particular, researchers are looking at how the digitization of decision-making affects democratic values and the exercise of fundamental rights and freedoms (UvA, 2019a) [here]. In relation to this, there is the AI & the Administration of Justice research initiative which is also at the University of Amsterdam. This initiative is engaged in normative research into the fundamental rights, societal values, and ethics of automated decision-making, specifically oriented towards the role and responsibilities of judges, public prosecutors, and lawyers (UvA, 2019b) [here]. And finally, there is the Human(e) AI research priority area of the University of Amsterdam which stimulates new research on the societal consequences of the rapid development of artificial intelligence and ADM in a wide variety of societal areas (UvA, 2019c) [here].
Key takeaways
This chapter reveals several important new findings about ADM and AI in the Netherlands during the past year. Perhaps the most important point is that there is currently an incredible amount of activity in the field of ADM and AI, whether at the government policy level, in political and public debate, or in the ever-increasing new uses of ADM systems by both government and private companies. Secondly, it is striking that the use of ADM systems and AI by law enforcement has seen major development in the Netherlands. Thirdly, much of the government’s use of ADM systems takes place in local government and in government agencies that have discretion in how to allocate resources, such as welfare, unemployment benefits, and tax exemptions. However, in terms of the challenges that ADM systems present, the case of the Netherlands perhaps best highlights the major issues concerning the lack of transparency with regard to ADM systems used by government. This has been one of the key criticisms of ADM systems such as SyRI, the welfare-fraud detection system. Indeed, in the SyRI judgment, one of the biggest faults identified was the lack of transparency, with the Court even noting the “deliberate choice” by the government not to provide “verifiable information” on the nature of SyRI. Finally, sector-specific regulators, such as the data protection authority and financial supervisors, are also stepping up, bringing a new level of guidance and scrutiny to both public and private uses of ADM systems.