Research
Belgium
Contextualization
Because of Belgium's different governments and levels of government (federal and regional), several strategies dealing with digitization have emerged. In Flanders, this strategy is called Vlaanderen Radicaal Digitaal, while in the Walloon region it is known as Plan Numérique. In 2015, the federal government launched a strategy called Digital Belgium, which covered more than just ADM. In 2018, the Flemish government launched an action plan with a budget of €30 million to deal specifically with artificial intelligence. The main targets of this investment are fundamental research, applications for industry, framing policy concerning education, awareness raising, and ethics. In Flanders, the biggest opportunities for AI are seen in personalized healthcare, smart mobility, and Industry 4.0. Concerning regulation, the Belgian national law on data protection, which is based on the GDPR, came into force on 5 September 2018. In the same year, the Belgian Chamber of Representatives adopted a resolution calling for a preventative ban on fully autonomous weapons.
A catalog of ADM cases
Face Recognition at Brussels Airport
In July 2015, Brussels Airport installed ‘e-gates’ with face recognition technology to check the identity of EU citizens arriving from outside the Schengen zone. The gates electronically check whether identity documents are genuine and whether a passenger has been flagged. In addition, the face recognition software compares the picture on the chip of the identity document with the digital picture taken at the e-gate (De Morgen, 2019). In early 2020, the airport scrapped the e-gates, which cost €2.4 million, because they were constantly defective, according to the police (The Bulletin, 2020).
In July 2019, the Belgian news magazine KNACK published an interview with the Commissioner-General of the Federal Police in which he proposed the implementation of face recognition software at Brussels airport (KNACK, 2019). However, the Belgian Oversight Body for Police Information (COC) was never contacted for advice about this, and a Data Protection Impact Assessment had not been conducted. As a result, the COC started an investigation and found the following. In early 2017, the Federal Police began piloting a new face recognition system with four cameras. The system functioned in two phases. In the first phase, when the software analyzing the video images is activated, so-called snapshots are made: the system continuously singles out individuals in the video feed and creates a biometric template of each person, from which the snapshots are generated. In the second phase, these snapshots are stored and compared against blacklists of people suspected of a crime. When there is a positive match, a hit is created. During the test phase, the error rate (the number of false positives) was very high and, as a result, testing of the system was stopped in March 2017. However, during a visit by the COC, it was found that the system was still partially active, in the sense that snapshots of all passengers in the airport were still being taken, although no comparison against a blacklist was performed. In order to keep testing the system, the images had to be stored for a minimum amount of time (COC, 2019).
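To illustrate the two-phase logic described above, the following is a minimal, hypothetical sketch of how such a snapshot-and-match pipeline could work in principle. It is not based on the actual police system; the function names, the stand-in embedding step, and the similarity threshold are assumptions made purely for illustration.

```python
import numpy as np

# Assumed similarity cut-off; real systems tune this to trade off false positives against misses.
SIMILARITY_THRESHOLD = 0.6

def make_snapshot(face_image: np.ndarray) -> np.ndarray:
    """Phase 1 (hypothetical): turn a detected face into a biometric template.
    A real system would use a trained face-embedding model here."""
    embedding = face_image.astype(np.float32).ravel()
    return embedding / (np.linalg.norm(embedding) + 1e-9)

def match_against_blacklist(snapshot: np.ndarray,
                            blacklist: dict[str, np.ndarray]) -> str | None:
    """Phase 2 (hypothetical): compare a stored snapshot against blacklist templates.
    Returns the identifier of the best match above the threshold (a "hit"), or None."""
    best_id, best_score = None, 0.0
    for person_id, template in blacklist.items():
        score = float(np.dot(snapshot, template))  # cosine similarity of normalized vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= SIMILARITY_THRESHOLD else None
```

A threshold that is set too permissively produces exactly the kind of false positives that led to the test being halted in March 2017.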
The COC identified three problems with this pilot:
- The police did not conduct an evaluation of the risks related to privacy and did not inform the COC about the pilot project, as they are obliged to do.
- A database containing data associated with a hundred thousand faces of travelers was created and stored temporarily, which is not permitted.
- The system generated a lot of false alarms (Heymans & Vanrenterghem, 2019).
In its public report, the COC concluded that there was too little information about the implementation and risks of the technology, as no clear policy had been formulated and no data protection impact assessment had been conducted, to come to a conclusion or offer advice. As a result, it imposed a temporary corrective measure by asking for a temporary halt to the pilot project (COC, 2020).
Smart video surveillance by the local police: Briefcam
Since 2019, the local police in the cities of Kortrijk, Kuurne, and Lendelede (the VLAS police zone) have been using a ‘smart’ video surveillance system developed by an American company called Briefcam. According to the police, the goals of the system are as follows:
- To aid investigations. For instance, if a person with a backpack and wearing a blue coat has fled in a certain direction, this person can be found easily by the algorithm. The person is then traced on other cameras to get a more complete picture of the route he or she took and to detect crimes that might have been committed. The system can also help search for lost children and solve bicycle theft more easily.
- To generate live alerts. For instance, at the beginning and end of the school day, trucks are not permitted to drive near schools. If a truck is spotted driving through an area it is not allowed in, the system sends an alert to the police.
- To collect statistical information to support policy.
The system stores all ‘objects’ (for example, people, small and large vehicles, and animals) that appear in the video images. An algorithm then decides which category each object belongs to and, after this first categorization, a sub-categorization is made. For example, when it comes to people, the system categorizes each person as either a man, woman, or child before further categorizing by clothing: short/long sleeves, short/long trousers, color of clothing, and items such as hats, handbags, backpacks, etc. The same system of categorization is used for vehicles. For example, if the vehicle has two wheels: is it a bicycle or a motorcycle? For other vehicles: is it a car, pick-up, van, truck, or bus? In addition, the direction, size, and velocity of each object are registered. The system is also capable of face recognition (PZVlas, 2019), although, at the moment, there is no legal framework to regulate its use by the police. However, as the spokesperson of the VLAS police zone indicated in an interview, in exceptional circumstances, when requested by an investigating judge, it is possible to use face recognition (Verrycken, 2019).
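As a rough illustration of the kind of hierarchical categorization described above, the sketch below models how detected objects might be stored and queried (for example, "a person with a backpack wearing a blue coat"). It is a simplified assumption about the data structure, not Briefcam's actual implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    """One tracked object extracted from the video feed (hypothetical schema)."""
    category: str                 # e.g. "person", "two_wheeler", "four_wheeler", "animal"
    subcategory: str              # e.g. "man", "woman", "child", "bicycle", "truck"
    attributes: dict = field(default_factory=dict)  # e.g. {"coat_color": "blue", "backpack": True}
    direction: float = 0.0        # heading in degrees
    size: float = 0.0             # apparent size in the frame
    velocity: float = 0.0         # speed estimate
    camera_id: str = ""
    timestamp: float = 0.0

def search(objects: list[DetectedObject], category: str, **wanted) -> list[DetectedObject]:
    """Return all stored objects of a category whose attributes match the query."""
    return [o for o in objects
            if o.category == category
            and all(o.attributes.get(k) == v for k, v in wanted.items())]

# Example query: trace a person wearing a blue coat and carrying a backpack across cameras.
# hits = search(stored_objects, "person", coat_color="blue", backpack=True)
```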
Face recognition at football matches
Since early 2019, the RWD Molenbeek football club (RWDM) has been experimenting with face recognition software at football matches. The system allows people with season tickets to enter the Molenbeek football stadium without having to show their tickets. Two cameras at the stadium entrance detect the faces of people coming in and, in real time, the system verifies whether the person has a season ticket or not. The aim of the system is to create a "fast lane" whereby season ticket holders, who have given their consent and have uploaded a picture to the database, do not have to queue. According to a representative of the biometric company that delivered the software for the football club, after a one-year pilot, the system is deemed sufficiently reliable, even in difficult circumstances, such as rain, or when people are wearing hats or glasses (Verrycken, 2019).
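The sketch below illustrates, in very rough terms, the opt-in flow described here: a season ticket holder consents and uploads a picture, and the gate later checks a live capture against the enrolled templates. It reuses the hypothetical template-matching idea from the airport example above and is not the vendor's actual system; the threshold and function names are assumptions.

```python
import numpy as np

# Assumed threshold; stricter than a watchlist search, since a match grants entry.
MATCH_THRESHOLD = 0.7

enrolled: dict[str, np.ndarray] = {}  # season-ticket ID -> face template (consenting fans only)

def to_template(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model."""
    v = face_image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def enroll(ticket_id: str, uploaded_picture: np.ndarray, consent_given: bool) -> None:
    """Store a template only when the season ticket holder has explicitly consented."""
    if consent_given:
        enrolled[ticket_id] = to_template(uploaded_picture)

def gate_check(live_capture: np.ndarray) -> str | None:
    """Compare the live capture at the entrance against enrolled templates.
    A match above the threshold opens the fast lane; otherwise the fan queues as usual."""
    probe = to_template(live_capture)
    scores = {tid: float(np.dot(probe, tpl)) for tid, tpl in enrolled.items()}
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] >= MATCH_THRESHOLD else None
```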
Algorithmic school registrations
In many Belgian cities, there are problems when registering children for schools, especially when a lot of parents want to register their children at the same school. This has led to parents spending several nights in tents outside schools so that they can be the first to register their children. In response to this situation, which was considered unfair, several cities started organizing school registration via a central online system that uses an algorithm to decide in which school a child can be registered. The system for primary schools used in Leuven, a city east of Brussels, takes a number of variables into account. Based on the parents' answers to a series of questions about the mother's level of education and whether or not the student receives a grant, students are divided into ‘indicator students’ and ‘non-indicator students’. An indicator student is defined as a student with fewer life chances, based upon the level of education of the mother and whether the student is eligible for an education allowance. Further categorization happens on the basis of two criteria: the distance from home to school and the preference for a school. The schools can decide how much weight they assign to distance and preference, i.e., 30%, 50%, or 70% for each (Meldjeaan, 2020). This weighting can differ between cities; for instance, in Antwerp, schools have to assign a minimum of 50% to the distance criterion.
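As an illustration of how such a weighted ranking could work, the sketch below scores applicants by combining a distance score and a preference score using school-chosen weights. The scoring formula and normalization are assumptions made for illustration; the real system is more involved (for example, it also reserves places for indicator and non-indicator students).

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    distance_km: float      # home-to-school distance
    preference_rank: int    # 1 = the family's first-choice school
    indicator: bool         # indicator student (see definition above)

def score(applicant: Applicant, distance_weight: float, max_distance_km: float = 10.0) -> float:
    """Combine distance and preference into one score (hypothetical formula).
    distance_weight is the school's chosen weight, e.g. 0.3, 0.5 or 0.7."""
    preference_weight = 1.0 - distance_weight
    distance_score = max(0.0, 1.0 - applicant.distance_km / max_distance_km)  # closer is better
    preference_score = 1.0 / applicant.preference_rank                        # first choice is better
    return distance_weight * distance_score + preference_weight * preference_score

# Example: a school weighting distance at 70% ranks applicants by descending score;
# in the real system, reserved quotas for indicator students are applied as well.
applicants = [Applicant("A", 1.2, 1, True), Applicant("B", 4.0, 1, False)]
ranking = sorted(applicants, key=lambda a: score(a, distance_weight=0.7), reverse=True)
print([a.name for a in ranking])
```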
For secondary schools, some cities also use an online registration system with an algorithm that decides which school the child will go to. However, the system is not the same as the one for primary schools. As not every municipality has a secondary school, the distance criterion is considered discriminatory and is, therefore, not used (Dierickx and Berlanger, 2019). The system does take into account priority students (i.e., children of the staff of the school) and indicator students (see above). The algorithm assigns places randomly on the basis of the first choice, in three rounds (Aanmelden school, 2020). As the system does not take distance into account, this has led to situations where students from Brussels took up places in Flemish secondary schools outside of Brussels, while children living close to a school did not get in and had to be registered at schools much farther away (Dierickx and Berlanger, 2019).
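The following sketch shows one way a three-round random draw based on first choices could be organized, with priority students and indicator students handled before everyone else. The round structure and group order are assumptions made to illustrate the idea, not the exact rules of the official system.

```python
import random

def assign_places(applicants: list[dict], capacity: int, seed: int = 0) -> list[str]:
    """Randomly assign places for one school in three rounds (hypothetical order):
    round 1: priority students (e.g. children of staff),
    round 2: indicator students,
    round 3: all remaining applicants who listed the school as their first choice."""
    rng = random.Random(seed)
    first_choice = [a for a in applicants if a["first_choice"]]
    rounds = [
        [a for a in first_choice if a["priority"]],
        [a for a in first_choice if a["indicator"] and not a["priority"]],
        [a for a in first_choice if not a["indicator"] and not a["priority"]],
    ]
    admitted: list[str] = []
    for group in rounds:
        rng.shuffle(group)                      # the draw is random within each round
        for a in group:
            if len(admitted) < capacity:
                admitted.append(a["name"])
    return admitted

# Example: 2 places, 3 first-choice applicants.
print(assign_places(
    [{"name": "A", "first_choice": True, "priority": False, "indicator": True},
     {"name": "B", "first_choice": True, "priority": True, "indicator": False},
     {"name": "C", "first_choice": True, "priority": False, "indicator": False}],
    capacity=2))
```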
Predictive ADM systems for healthcare and upbringing
A Brussels startup called Kantify and the AI lab at the Université Libre de Bruxelles (IRIDIA) have, for the first time, developed a model for predicting atrial fibrillation, a heart rhythm disorder, which causes heart attacks (Het Nieuwsblad, 2019).
According to an interview in the Flemish newspaper De Standaard, Lauren Van Parys, a member of the Flemish Parliament, said that a newly formed Flemish government agency called the Agency for Upbringing (Agentschap Opgroeien) will use AI to better predict whether problems will emerge with a particular child, based upon existing information (De Standaard, 2019). However, when asked about this further, the Agency for Upbringing indicated that there are no concrete plans yet, but that it is preparing a policy note about it.
New developments in predictive policing and algorithmic work activation
Predictive policing
In the 2019 AlgorithmWatch report, it was suggested that the Belgian police were planning to roll out predictive policing nationally based on the iPolice system, which was expected to be operational by 2020. The iPolice system aims to centralize all police data in the cloud. It is a cooperation between the Ministry of Home Affairs, the Digital Agenda, and the Ministry of Justice. However, up until now, there has been no news about iPolice. Local police are increasingly experimenting with predictive policing applications. For instance, in the police zone of Zennevallei (Beersel, Halle, and Sint-Pieters-Leeuw), the police are very actively exploring how to implement predictive policing in response to a research proposal by the University of Ghent. A PhD student at the university will research the police zone for two years. The student will try to collect as much data as possible, including the times and locations of crimes, but also weather conditions and demographic characteristics, among other factors, according to the spokesperson of the police zone (Van Liefferinge, 2019).
Algorithmic work activation
In an interview, the head of the public employment service of Flanders, VDAB, argued that artificial intelligence can predict which citizens need more support and that this will be a positive outcome for vulnerable groups (De Cort, 2019). He indicated that they could do even more if they could make use of more data from partner organizations. However, questions have been raised in the Flemish Parliament about what the VDAB is doing as the algorithms they use are not transparent, and there is a worry that the algorithms will work in a discriminatory way (El Kaouakibi, 2020).
The Innovation Lab of the Flemish public employment service, VDAB, is currently developing and experimenting with two projects. The first is a model that predicts, based on an individual's file and click data, the chance that a job seeker will still be unemployed after six months, and the reasons for that. The second is a model, also based on individual file and click data, that predicts which jobseekers will look less actively for work in the coming months. The hope is that the system will help quickly activate these people and avoid more drastic measures, such as referral to the control service of the VDAB and possible sanctions. At the moment, two different models have finished the proof-of-concept phase: one was developed by Accenture and the other by Deloitte (VDAB, 2019).
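As a rough sketch of what such a prediction model could look like, the example below trains a logistic regression on a handful of file and click-behaviour features to estimate the probability that a job seeker is still unemployed after six months. The feature names, toy data, and the choice of logistic regression are assumptions for illustration only; VDAB's actual models and data are not public.

```python
# Hypothetical sketch: predicting long-term unemployment risk from file and click data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: months_unemployed, vacancies_clicked_last_30d, applications_sent_last_30d, logins_last_30d
X_train = np.array([
    [2, 40, 8, 20],
    [14, 2, 0, 1],
    [6, 15, 3, 9],
    [20, 0, 0, 0],
    [1, 55, 10, 25],
    [9, 5, 1, 3],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = still unemployed six months later (toy labels)

model = LogisticRegression().fit(X_train, y_train)

new_case = np.array([[12, 3, 1, 2]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of still being unemployed after six months: {risk:.2f}")
# In a deployment, such a score would flag who might need extra support, which is exactly
# where the transparency and discrimination concerns raised in Parliament come in.
```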
Policy, oversight and debate
Flanders
The Flemish Minister of Innovation launched the Flemish Action Plan of Artificial Intelligence (AI) in March 2019. The plan includes an annual investment of €32 million and is focused on research, implementation, ethics, and training. €12 million will go specifically to research (Vlaamse Regering, 2019a), and €5 million will go to ethics and training. Part of the plan is to establish a knowledge center that will reflect upon the ethical challenges associated with AI.
This knowledge center, which focuses on the juridical, ethical, and societal aspects of artificial intelligence and data-driven applications, was launched on 9 December 2019 in Brussels (Kenniscentrum Data & Maatschappij, 2019). The center is not a new organization, but a cooperation between three academic research groups: the Centre for IT and IP Law (CITIP) at the Katholieke Universiteit Leuven; Studies in Media, Innovation and Technology (SMIT) at the Vrije Universiteit Brussel/imec; and the Research Group for Media, Innovation, and Communication Studies (MICT) at Universiteit Gent/imec.
The main aims of the center are to delineate AI policy through a broad co-design approach, to stimulate societal debate about the acceptance of new technology, to deliver thought leadership on societally and economically acceptable development trajectories for AI, to identify and valorize important human factors in the development of AI, and, finally, to develop legal frameworks and guidelines for policymakers and companies.
Activities of the center include:
- To translate general ethical principles into concrete guidelines, recommendations, and rules.
- To remove risk from AI development by companies and innovators.
- To stimulate current and future regulatory and policy reforms with an impact on AI (for instance, the Product Liability Directive, the reform of the Machinery Directive, the e-Privacy Directive, etc.).
- To gain clarity about the implications of current and new regulatory instruments (such as the GDPR, the regulation on the free flow of non-personal data, etc.).
- To create space for experiments relating to "ethical validation" and "regulatory sandboxing".
- To increase societal awareness and create more support (Departement Economie, Wetenschap & Innovatie, 2019).
The main driver for investment in AI in Flanders is economic. The "Quarter nota to the Flemish Government about the Flemish AI Plan" (Vlaamse Regering, 2019b), proposed by the Minister, states that, considering the technological and international developments with regard to AI, "if Flanders wants to remain competitive investments in AI are necessary" (Vlaamse Regering, 2019b: 4). The nota goes on to mention social and ethical consequences but, in the same sentence, questions the validity of these concerns, arguing that such discussions block thinking about the opportunities of AI. It also says little about these potential consequences, suggesting that they will all be resolved if they are taken into account in the design phase. In addition, the opportunities and uses of AI are generalized, as if they were the same across all sectors.
Federal Government
Under the direction of the former deputy prime ministers, Philippe De Backer and Alexander De Croo, the Federal government launched the "AI 4 Belgium strategy" on 18 March 2019, with the aim of positioning Belgium as a central player in the European AI landscape (AI4 Belgium, 2019). The strategy is the result of consultation with 40 experts from the technology sector in Belgium. According to vice-premier Alexander De Croo: "Through a lack of knowledge about AI, it is possible that Belgium will miss chances for economic development and prosperity. Agoria estimates that digitization and AI can create 860,000 jobs by 2030. Instead of lagging behind, we want to shift up a gear. We have the ambition to develop the right knowledge and skills, so that it is possible to reap the societal and economic fruits of AI." (Van Loon, 2019).
As a result of the implementation of the GDPR in May 2018, the Belgian Privacy Commission was reformed and became the Belgian Data Protection Authority. In addition, the oversight body that looks at how the police use information (Controleorgaan op politionele informatie, COC) was reformed to function as an independent data protection body, specifically responsible for overseeing how the police use information.
In answer to a question in the Belgian Chamber of Representatives on 25 October 2019, regarding the use of face recognition systems by the police, the Ministry of Security and Home Affairs indicated that the legal basis for using face recognition is currently being investigated (De Kamer, 2019).
Walloon government
In 2019, Wallonia launched its 2019-2024 digital strategy. According to the website of digitalwallonia.be, the strategy has five major themes:
- Digital sector: A strong digital sector with cutting-edge research, able to capture and maintain the added value of the digital transformation on our territory.
- Digital business: Boosting the Walloon economy depends on a strong and rapid increase in the digital maturity of our businesses.
- Skills and education: Each Walloon citizen must become an actor in the digital transformation process by acquiring strong technological skills and adopting an entrepreneurial culture.
- Public services: A new generation of open public services, acting transparently and being, by themselves, an example of digital transformation.
- Digital Territory: Smart and connected to broadband networks. Our territory must offer unlimited access to digital innovation and act as a driver for industrial and economic development. (Digital Wallonia, 2019)
Wallonia invested €900,000 in a one-year action plan, Digital-Wallonia4.ai, which ran from July 2019 to July 2020 and was supported by l'Agence du Numérique, Agoria, l'Infopôle Cluster TIC, and le Réseau IA. The goal was to accelerate the adoption of AI in enterprises and to demystify the technology for the general public. Wallonia, which is lagging behind Flanders, wants to develop its potential and see the birth of AI champions (Samain, 2019).
The strategy was centered around four axes: 1) AI and society, 2) AI and enterprises, 3) AI and training, and 4) AI and partnerships (Digital Wallonia, 2020). There were two main actions in this program. Firstly, the creation of Start AI, which aimed to help companies learn about AI through a three-day coaching session run by one of the members of the Digital Wallonia AI expert pool. Secondly, the creation of Tremplin IA to help with Proofs of Concept (PoCs) of AI in the Walloon Region. In all, 19 projects for individual PoCs and nine collective projects joined the program. Of these, the jury selected five individual PoC projects and four group projects. Agoria set up an online course, while Numeria created AI training courses. Meanwhile, a study on the social and ethical impact of AI will be conducted by the Research Centre in Information Law and Society (CRIDS) of the University of Namur (Digital Wallonia, 2020).
Key takeaways
In 2018, most stories about AI were still very theoretical, whereas, in 2019, there was a clear increase in the use of ADM. The main drivers for investment in AI in Belgium are predominantly economic. Belief in AI progress, together with a fear of missing out and of falling behind other countries, is a strong driver.
The biggest investments have been in the development of new AI technologies, whereas, in comparison, very little has been invested in research into the ethical, legal, and social consequences of AI. And, even there, the emphasis has been on legal compliance. In general, very little attention in policy discourse focuses on the ethical questions or the societal impact of AI. The only place where attention is given to the risks of AI is in an advisory report compiled by the independent Advice Commission of the Flemish Government for Innovation and Entrepreneurship (VARIO, 2019). However, the overview is limited, and there is a lack of knowledge regarding the research being done in Belgium on this topic, and of academic research in this area in general. No concrete solutions have been proposed to deal with the ethical questions or the societal impact of AI, or to make the development and implementation of AI more transparent. Furthermore, it seems that the advice from this agency about the dangers of AI is not taken that seriously when it comes to Flemish policy. Only a very small amount of the foreseen budget will be invested in the knowledge center for data and society (mentioned previously). In addition, the goals and activities of the center do not include supporting and financing research on social and ethical issues or value-sensitive design (design that takes into account certain ethical values), for instance. As much of the development of these technologies is still in the design phase, there are opportunities for the new Knowledge Center to advise on data protection, ethical issues, and social impact, which could make a difference.
At the level of the Federal government, nothing has been found in the AI 4 Belgium strategy about the legal, ethical, and social consequences of AI. It is presented as if these issues do not exist. Nothing was found regarding the establishment of oversight bodies or ethical commissions that would oversee the technological developments in AI. There seems to be a lot of trust in the researchers and companies developing new AI technologies. Likewise, in the Digital Wallonia strategy, nothing can be found that addresses legal, ethical, and social concerns.
To summarize, there is a big challenge for policymakers when it comes to ADM and AI in Belgium. They need to take legal, ethical, and social concerns seriously and to ensure that these systems are transparent, are implemented with the necessary democratic oversight, have a positive impact on Belgian society, and empower Belgian citizens.