Automating Society Report 2020

Policy Recommendations

In light of the findings detailed in the 2020 edition of the Automating Society report, we recommend the following set of policy interventions to policymakers in the European Parliament and Member States’ parliaments, the EU Commission, national governments, researchers, civil society organizations (advocacy organizations, foundations, labor unions, etc.), and the private sector (companies and business associations). The recommendations aim to better ensure that ADM systems currently being deployed, and those about to be implemented throughout Europe, are effectively consistent with human rights and democracy:


Increase the transparency of ADM systems

Without the ability to know precisely how, why, and to what end ADM systems are deployed, all other efforts for the reconciliation of fundamental rights and ADM systems are doomed to fail.

Establish public registers for ADM systems used within the public sector

We, therefore, ask for legislation to be enacted at the EU level to mandate that Member States establish public registers of ADM systems used by the public sector.

They should come with the legal obligation for those responsible for the ADM system to disclose and document the purpose of the system, an explanation of the model (logic involved), and information about who developed the system. This information has to be made available in an easily readable and accessible manner, including structured digital data based on a standardized protocol.
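As a purely illustrative sketch of what such a structured, machine-readable register entry might look like, the snippet below models the three mandatory disclosures named above (purpose, model explanation, developer) as fields of a record and serializes it to JSON. All field names, the example system, and the validation helper are hypothetical assumptions, not part of any existing standard or protocol.

```python
import json

# Hypothetical required disclosure fields, mirroring the obligations
# described above: purpose, explanation of the model, and developer.
REQUIRED_FIELDS = {
    "system_name",
    "deploying_authority",
    "purpose",
    "model_explanation",
    "developer",
}

# An entirely fictional example entry for illustration only.
entry = {
    "system_name": "Example case-prioritization system",
    "deploying_authority": "Hypothetical municipal welfare office",
    "purpose": "Prioritize case files for manual review",
    "model_explanation": "Logistic regression over 12 case features; "
                         "scores above a threshold trigger human review",
    "developer": "Hypothetical Contractor Ltd.",
}

def missing_disclosures(record: dict) -> list:
    """Return the required disclosure fields absent from a register record."""
    return sorted(REQUIRED_FIELDS - record.keys())

# Publish in a structured, easily machine-readable form.
published = json.dumps(entry, indent=2)
print(missing_disclosures(entry))
```

A standardized schema of this kind would let journalists and researchers aggregate and compare entries across Member States automatically, rather than scraping heterogeneous PDF disclosures.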

Public authorities have a particular responsibility to make the operational features of ADM systems deployed in public administration transparent. This was underlined by a recent administrative complaint in Spain, which argues that “any ADM system used by the public administration should be made public by default”. If the complaint is upheld, the resulting decision could set a precedent in Europe.

Whereas disclosure schemes on ADM systems should be mandatory for the public sector in all cases, these transparency requirements should also apply to the use of ADM systems by private entities when an AI/ADM system has a significant impact on an individual, a specific group, or society at large.

Introduce legally-binding data access frameworks to support and enable public interest research

Increasing transparency requires more than disclosing information about a system’s purpose, logic, and creator, and more than the ability to thoroughly analyze and test a system’s inputs and outputs. It also requires making training data and data results accessible to independent researchers, journalists, and civil society organizations for public interest research.

That’s why we suggest the introduction of robust, legally-binding data access frameworks, focused explicitly on supporting and enabling public interest research and in full respect of data protection and privacy law.

Learning from existing best practices at the national and EU levels, such tiered frameworks should include systems of sanctions, checks and balances, and regular reviews. As private data-sharing partnerships have illustrated, there are legitimate concerns regarding user privacy and the possible de-anonymization of certain kinds of data.

Policymakers should learn from health data sharing frameworks to facilitate privileged access to certain kinds of more granular data, while ensuring that personal data is adequately protected (e.g., through secure operating environments).

Transparent access to platform data is a requirement not only for an effective accountability framework, but also for many auditing approaches to be effective.


Create a meaningful accountability framework for ADM systems

As findings from Spain and France have shown, even if transparency of an ADM system is required by law and/or information has been disclosed, this does not necessarily result in accountability. Further steps are needed to ensure that laws and requirements are actually enforceable.

Develop and establish approaches to effectively audit algorithmic systems

To ensure that transparency is meaningful, we need to complement the first step of establishing a public register with processes that effectively audit algorithmic systems.

The term “auditing” is widely used, but there is no commonly agreed definition. In this context, we understand auditing in accordance with ISO’s definition as a “systematic, independent and documented process for obtaining objective evidence and evaluating it objectively to determine the extent to which the audit criteria are fulfilled.”

We do not have satisfying answers to the complex questions raised by the auditing of algorithmic systems yet; however, our findings clearly indicate the need to find answers in a broad, stakeholder engagement process and through thorough and dedicated research.

Both audit criteria and appropriate processes of auditing should be developed, following a multi-stakeholder approach that actively takes into consideration the disproportionate effect ADM systems have on vulnerable groups and solicits their participation.

We, therefore, ask policymakers to initiate such stakeholder processes in order to clarify the outlined questions, and to make available sources of funding aimed at enabling the participation by stakeholders who have so far been inadequately represented.

We furthermore demand the provision of adequate resources to support/fund research projects on developing models to effectively audit algorithmic systems.

Support civil society organizations as watchdogs of ADM systems

Our findings clearly indicate that the work of civil society organizations is crucial in effectively challenging opaque ADM systems. Through research and advocacy, often in cooperation with academia and journalists, they have repeatedly intervened in policy debates around those systems over recent years, and in several cases have effectively ensured that the public interest and fundamental rights were duly considered both before and after deployment in many European countries.

Civil society actors should, therefore, be supported as watchdogs of the “automating society”. As such, they are an integral component of any effective accountability framework for ADM systems.

Ban face recognition that might amount to mass surveillance

Not all ADM systems are equally dangerous, and a risk-based approach to regulation, such as Germany’s and the EU’s, correctly reflects this. But in order to provide workable accountability for systems that are identified as risky, effective oversight and enforcement mechanisms must be put in place. This is all the more important for those deemed at “high risk” of infringing on users’ rights.

A crucial example that emerged from our findings is face recognition. ADM systems that are based on biometric technologies, including face recognition, have been shown to pose a particularly serious threat to the public interest and fundamental rights, as they clear the path to indiscriminate mass surveillance – and especially as they are widely, and opaquely, deployed nonetheless.

We demand that public uses of face recognition that might amount to mass surveillance be decisively and urgently banned at the EU level, until further notice.

Such technologies may even be considered as already illegal in the EU, at least for certain uses, if deployed without “specific consent” of the scanned subjects. This legal reading has been suggested by the authorities in Belgium, who issued a landmark fine for face recognition deployments in the country.


Enhance algorithmic literacy and strengthen public debate on ADM systems

More transparency of ADM systems can only be truly useful if those confronted with them, such as regulators, government, and industry bodies, can deal with those systems and their impact in a responsible and prudent manner. In addition, those affected by these systems need to be able to understand where, why, and how they are deployed. This is why we need to enhance algorithmic literacy at all levels, among key stakeholders as well as the general public, and to foster more diverse public debates about ADM systems and their impact on society.

Establish independent centers of expertise on ADM

Together with our demand for algorithmic auditing and supporting research, we call for the establishment of independent centers of expertise on ADM at the national level to monitor, assess, conduct research, report on, and provide advice to government and industry in coordination with regulators, civil society, and academia about the societal and human rights implications of the use of ADM systems. The overall role of these centers is to create a meaningful accountability system and to build capacity.

The national centers of expertise should involve civil society organizations, stakeholder groups, and existing enforcement bodies such as DPAs and national human rights bodies to benefit all aspects of the ecosystem and build trust, transparency, and cooperation between all actors.

As independent statutory bodies, the centers of expertise would have a central role in coordinating policy development and national strategies relating to ADM and in helping to build the capacity (competence/skills) of existing regulators, government, and industry bodies to respond to the increased use of ADM systems.

These centers should not have regulatory powers, but provide essential expertise on how to protect individual human rights and prevent collective and societal harm. They should, for instance, support small and medium-sized enterprises (SMEs) in fulfilling their obligations under human rights due diligence, including conducting human rights assessments or algorithmic impact assessments, and by registering ADM systems in the public register discussed above.

Promote an inclusive and diverse democratic debate around ADM systems

Besides strengthening capacities and competencies among those deploying ADM systems, it is also vital to advance algorithmic literacy in the general public through broader debate and diverse programs.

Our findings suggest that ADM systems not only remain non-transparent to the wider public when they are in use, but that even the decision whether or not to deploy an ADM system in the first place is usually taken without either the knowledge or participation of the public.

There is, therefore, an urgent need to include the public (interest) in the decision-making on ADM systems from the very beginning.

More generally, we need a more diverse public debate about the impact of ADM. We need to move beyond exclusively addressing expert groups and make the issue more accessible to the wider public. That means speaking a language other than the techno-judicial to engage the public and spark interest.

In order to do so, dedicated programs to build and advance digital literacy should be put in place. If we aim to foster an informed public debate and to create digital autonomy for citizens in Europe, such programs must place a specific focus on the social, ethical, and political consequences of adopting ADM systems.