Research
France
Contextualization
Many people within French institutions and large corporations see themselves as front-runners in artificial intelligence. However, only a few of these same people would argue that they are the best in the world, and yet almost all of them complain that they are not. This paradox seems to imply that they see their natural position as being in the leading pack.
This state of mind makes for some almost farcical situations, like the time when the government called its AI strategy “AI for humanity” (without, of course, asking the non-French sector of humanity for its opinion). Despite this touch of arrogance, the French make several arguments for their lofty claims. Firstly, France will likely be the first European country to have a centralized database of the biometric identities of all its citizens, allowing public services to offer (or require) identification via face recognition. Secondly, almost all ministries and large cities have completed their digitization drives, and many now use their data troves to implement algorithms. Thirdly, the French tax authorities are the most active participants in the European automated information exchange system. Over the course of 2018, they sent and received information on close to 3 million taxpayers, more than any other country. Furthermore, France leads the way when it comes to legislation to keep algorithms in check, even if the law is enforced patchily, if at all.
Private-sector companies such as Thales or Idemia (formerly Morpho-Safran) are world leaders in biometrics. If you’ve ever crossed an international border, chances are you were subjected to automated tools from Idemia, which operates the Schengen area’s Visa Information System and the TSA PreCheck program in the United States. These companies actively lobby public officials at all levels to support their work. This often takes the form of surveillance projects carried out under the umbrella of “smart city” initiatives. However, it sometimes leads to disproportionate measures; for example, schools where hundreds of children are required to use biometric identification to access canteens.
In a country where measuring racial discrimination is a criminal offense punishable by five years in prison, few studies exist on algorithmic bias. Despite this limitation, several watchdog organizations, both within the administration and in civil society, regularly document the many instances where automated decision-making infringes on the freedoms of French citizens. However, the growing list of occasions when their opinions, or findings, are disregarded by the authorities casts doubt on the sustainability of the French model, at least when it comes to automated and democratic decision-making.
A catalog of ADM cases
Alicem
Although not an automated decision-making system in itself, Alicem is a government program intended to allow any part of the French administration to offer identification via face recognition. Planned for early 2020, it was put on hold after a public outcry in late 2019.
Citizens register their face’s biometric characteristics using a smartphone app. They can then use the app to go through administrative procedures that currently require them to physically visit a government agency.
Alicem could work as a centralized database of citizens’ facial biometrics (though the government is adamant that no biometric data is stored). Even though, officially, there are no plans to use the database further, it potentially opens the way for a wide array of applications, not least blanket face recognition by video surveillance cameras in the country’s streets.
In early September 2019, a security expert revealed that the developers of the app posted some of the code to Stack Overflow (a forum for computer developers), published private project videos publicly on YouTube, and also made the staging servers of the project freely available online (Alderson, 2019). These are all signs of extremely sloppy security procedures for any project, let alone an official biometric database. As was the case with Aadhaar, India’s notoriously leaky biometric database, it is almost certain that the data held by France’s Alicem will eventually land in the hands of criminals (BBC, 2018).
After Bloomberg reported about the scheme in October 2019 (Fouquet, 2019), the public outcry led to the government delaying the launch of the app. Now, it is expected to launch in 2020, but no official launch date has been made public.
Automating surveillance in large cities
In Saint-Étienne (pop. 175,000), the city planned to deploy microphones in order to automatically detect suspicious sounds. The project was to be implemented in a poor neighborhood, in coordination with CCTV cameras and an autonomous drone equipped with a camera. The plan was to register all “suspicious” sounds, including things like gunshots but also electric drills, sprays, and whistles (Tesquet, 2019; La Quadrature, 2019).
Another plan, this time in the two largest southern cities of Nice and Marseille, proposed that some high schools should introduce face recognition at the building’s entrance. Under the plan, students would pass a face check prior to entering their high school. The program was to be funded and implemented by Cisco, a US company.
However, both projects are on hold after the French data protection authority considered them illegal. The authority said the Saint-Étienne microphones would have infringed on citizens’ privacy in a disproportionate way (Hourdeaux, 2019a).
The heart score
In early 2018, and following similar measures for patients with liver and kidney problems, French hospitals introduced a “heart score” for patients in need of a heart transplant. Previously, patients waiting for a heart transplant were divided into two categories – “emergency” and “super emergency” – depending on the severity of their condition. However, a review of past practices revealed that one in four patients classed as “super emergency” for a heart transplant was not actually at high risk, while a third of those who were high-risk were not in the “super emergency” category.
The current system computes a score each time a new heart becomes available for transplant. The algorithm that computes the score is transparent and fairly understandable (younger people, for instance, are given more points, while lower compatibility between the donor and the potential recipient results in fewer points), and the score is available for doctors to access online. Based on the score, doctors decide which patient will receive the organ (Guide du Score Cœur, 2018). The court of auditors praised the new system in a 2019 report. However, they cautioned that patient data was often erroneous because hospital personnel had not updated patients’ digital files. The report stated that bogus data affected up to one in four patients at one hospital. Such flaws endangered the objectivity and acceptance of the system, the auditors wrote (Cour des Comptes, 2019).
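To illustrate how such a points-based allocation system works in principle (the actual rules and weights are published in the Guide du Score Cœur; the weights and factors below are invented for this sketch), the scoring logic could be approximated as follows:

```python
def heart_score(age: int, compatibility: float, severity: float) -> float:
    """Hypothetical transplant score with invented weights, for illustration only.

    age: candidate's age in years
    compatibility: donor/recipient compatibility, from 0.0 (none) to 1.0 (perfect)
    severity: clinical severity of the candidate's condition, from 0.0 to 1.0
    """
    score = 0.0
    # Younger candidates receive more points, as in the French system.
    score += max(0, 70 - age)
    # Lower donor/recipient compatibility results in fewer points.
    score += 30 * compatibility
    # More severe conditions receive more points.
    score += 50 * severity
    return score

# Each time a heart becomes available, candidates are ranked by their score,
# and doctors use the ranking to decide who receives the organ.
candidates = [("A", heart_score(35, 0.9, 0.8)), ("B", heart_score(60, 0.6, 0.9))]
candidates.sort(key=lambda c: c[1], reverse=True)
```

The key property of such a system, and the reason the auditors could assess it at all, is that every factor and weight is explicit, so a score can be recomputed and checked by anyone with access to the patient data.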
Machine-learning to detect tax fraud
The finance ministry introduced a legal amendment in the 2020 budget law that will let it scrape data from social networks, websites for classified advertising, or auction platforms to detect tax fraud. It could, for instance, compare a person’s lifestyle on Instagram with their tax returns. Machine-learning models are expected to learn patterns of wrongdoing from a training data set. However, the French data protection authority strongly criticized the plan (Rees, 2019a).
The draft law will let the program run for the next three years. Several MPs appealed to the Constitutional Court in the hope of striking down the measure (Rees, 2019b). But, in a final decision, the court found the scheme to be legal (La Voix du Nord, 2019).
Automation at the employment agency
The national employment agency has digitized every procedure in an attempt to speed up the processing of cases. While the move proved beneficial for simple situations, it created bottlenecks for complex cases. Scanned documents, for instance, are handled by contractors who sometimes misclassify them, leading to erroneous decisions. In the end, caseworkers complained that their workload increased, as they must redo by hand many of the automated processes (Guedj, 2019).
The health data hub
In early December 2019, the government created a legal structure to host the “Health Data Hub” (HDH). HDH is a platform that will gather all the data produced by the public health system, and it will make this data available to start-ups and companies that have a project “in the public interest.” The project follows the AI strategy that President Macron defined in 2018, in which health was one of four pillars. Public hospitals are unhappy about the move, as it takes away the data they have independently maintained over the years. The fact that the platform is partly hosted on Microsoft Azure, a cloud computing solution, compounds fears that sensitive data might be shared with foreign third parties. All the information on the “Health Data Hub” is said to be anonymized, but because it might be shared in a non-aggregated format, re-identification could be possible (Hourdeaux, 2019b).
Hate speech on social networks
The “Avia” law, largely copied from the German NetzDG, was adopted by the French parliament in early 2020. This law would have forced social networks to remove any piece of content identified as hate speech within 24 hours. Given the required speed, such take-downs would have been made automatically (Rees, 2020).
The law also extended the censorship powers of the police. Under current legislation, police officers can censor any website that “praises terrorism” after giving a 24-hour notification to the website’s owner. The new law would have reduced the notification period to one hour. French police interpret “terrorism” very loosely, using the charge against left-wing groups, environmentalists, and many Arab or Muslim communities regardless of their intentions.
However, these key points of the law were struck down by the constitutional court in June. The court found the definition of hate speech extremely vague, as it included “praise of terrorism” and “publication of pornographic content that minors could see” (Untersinger, 2020).
Policy, oversight and debate
A handful of the administration’s algorithms are now open, many more remain closed
The 2016 “Digital Republic” Act states that any citizen can ask to see the rules that define an algorithm used to make a decision. However, since the law became effective in 2017, only a handful of algorithms have been made public, but the administration remains confident that transparency will happen, eventually (Acteurs Publics, 2019).
Change might be coming, as the law states that, starting 1 July 2020, any decision taken by the administration on the basis of a closed algorithm would be considered void.
A researcher analyzed the freedom of information requests containing the word “algorithm” in the archive of the freedom of information authority (Commission d’Accès aux Documents Administratifs, CADA). Out of the 25 requests he analyzed (between 2014 and 2018), his main finding was that the French were not aware of their right to be given the rules governing public-sector algorithms (Cellard, 2019).
The much-hated automated radars are back
Gilets jaunes, the “yellow vest” protesters, have destroyed over 3,000 of France’s 4,500 automated speed cameras, according to the government. The radars were a symbol of government overreach for protesting car-owners.
The speed cameras automatically assess the speed of all vehicles, and newer versions of the radars can check for other illegal behavior, such as a driver using a phone while driving. The much-hated devices are coming back as the government plans to deploy 6,000 new, improved units by the end of 2020 (L’Express, 2019).
Two watchdogs down, one up
Algotransparency and La Data en Clair, two watchdogs and news organizations that featured in the previous Automating Society report, ceased activities in 2018. No reason has been given for the discontinuation of either organization.
La Quadrature du Net, together with 26 other civil society organizations, launched Technopolice in August 2019, a watchdog for “smart city” initiatives. The group calls for “methodical and continuous resistance” against what it sees as an implementation of the surveillance state.
Bob Emploi
Bob Emploi – an online service that claimed it could reduce unemployment by automatically matching job seekers with job offers – failed to gain traction in 2019. Approximately 50,000 accounts were created during the year (there are over three million unemployed people in France). While the project is still under development, the claims of algorithm-driven matching have disappeared from the official website. It is now about providing data to “accompany” job seekers in their job search.
Parcoursup – Selection of university students
A students’ union sued the government to obtain the source code behind an algorithm that sorts university applicants. A first tribunal granted them their request; however, that decision was later overruled by the Supreme Court for administrative matters (Conseil d’Etat). Although the administration must, by law, provide the details of any algorithm used to make a decision that affects a citizen’s life (as per the 2016 Digital Republic Act mentioned above), lawmakers carved an exemption for university selection. The judges did not assess the legality of this exemption. Instead, they based their decision solely on it (Berne, 2019).
Key takeaways
France continues to enthusiastically experiment with automated decision-making at both the national and the local level. Watchdog organizations, such as the data protection authority (Commission nationale de l’informatique et des libertés, CNIL) and La Quadrature, are active and visible in public debates, but their power remains limited, especially given heightened political tensions. After CNIL declared the face recognition program in the Provence-Alpes-Côte d’Azur region illegal, local strongman Renaud Muselier took to Twitter to declare CNIL a “dusty” organization, which ranked the security of students “below its ideology”.
The government followed through on parts of its AI strategy. Of the four pillars put forward in 2018, transportation, security, and health received support in the form of funding or fast-tracking legislation. We could not find any measures related to ADM and the fourth pillar, the environment.
Despite a highly volatile political situation, issues relating to ADM and data collection do still enter into both public and political debate. Stories on Parcoursup were headline news in early 2019. In the fall, the program of mass data collection by the tax authorities, dubbed “Big Brother Bercy” after the location of the finance ministry, provoked an outcry loud enough for some members of parliament to propose amendments to the project.
Similarly, a project that would have forced all citizens to use face recognition on a mobile app to access e-government services, called Alicem, was put on hold after newspapers and other detractors denounced the project as disproportionate. Although several civil society organizations have been active on the issue, it took the publication of the story in the US media outlet, Bloomberg, for the uproar to begin in earnest (Saviana, 2019).
The lack of political stability may prevent French proposals and experience from feeding the debate across Europe. Mr. Villani, an MP for the presidential party, for instance, wrote the country’s AI strategy in 2018. However, after he declared his candidacy for the Paris mayoral race against his party’s candidate, he is now a political adversary to President Macron.