Automating Society Report 2020

Research

Finland

by Minna Ruckenstein and Tuukka Lehtiniemi

Contextualization

In Finland, both public and private actors from different sectors of society are involved in the development of automation systems and put forward plans to implement them. At the central government level, attempts to specify the debate around AI and ADM are ongoing. This debate includes technical, legal, ethical, and societal perspectives. The technical and legal perspectives tend to dominate the debate, but the ethical perspective is currently gaining more attention. The societal perspective remains marginal and it needs to be reintroduced time and again into the debate by emphasizing the socio-technical nature of ADM systems. From this perspective, ADM is not a stand-alone technology, but a framework that encompasses the decision-making model and the political, economic, and organizational context of its use.

Driven by pressures to do more with fewer funds, Finnish authorities across the board are employing and planning to deploy automated procedures in their decision-making. Yet, legal amendments and clarifications are needed to maintain and proceed with automation projects. Various public institutions are currently under scrutiny concerning the compatibility of automation with existing laws. Another area of concern is the fulfillment of good governance. Here, the issue of full automation – that is, the use of ADM systems without direct human involvement in the decision process – is particularly contested.

Commercial actors are an integral part of public sector initiatives, raising questions about the aims and outcomes of public-private partnerships. For example, in the care sector, companies typically offer algorithmic models to spot correlations between individual risk factors. The implementation of such models could fundamentally change the logic of social and healthcare systems. The cases discussed here call for an overdue societal debate around ADM.

A catalog of ADM cases

Repairing automated content moderation

Utopia Analytics, a text analytics company specializing in automating content moderation, has been moderating online conversations on one of Finland’s largest online forums (Suomi24) since 2017. The company’s flagship service is called Utopia AI Moderator and, according to their website, it promises to deliver “the finest machine learning services on the planet” (Utopia Analytics, 2020).

Researchers at the University of Helsinki followed the implementation of the Utopia AI Moderator tool as part of the content moderation process of Suomi24. The Utopia AI Moderator is a service that is tailor-made in collaboration with the customer, rather than a stand-alone product. As with all of its customers, Utopia provides Suomi24 with a machine learning model built using data from the forum’s historical, human-made content removal decisions. The model can be adjusted over time; for instance, it is possible to alter the machine’s sensitivity to the messages that are posted.

According to Suomi24, approximately 31,000 new threads with 315,000 comments are posted on the forum monthly. Suomi24 automatically sends moderation requests to Utopia, and their AI model either accepts or rejects the content. If there is a borderline case, or something completely new compared to the existing data, the model may request a human moderator to view the post. The AI model is kept up-to-date by using these new samples generated by human moderators.
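The exact mechanics of the Suomi24–Utopia integration are not public, but the workflow described above can be pictured with a minimal sketch: posts the model is confident about are accepted or rejected automatically, borderline or previously unseen posts are escalated to a human moderator, and the human decision is stored as new training data. The thresholds, names, and helper function below are assumptions for illustration, not details of Utopia Analytics’ actual service.

```python
# Minimal, hypothetical sketch of a confidence-threshold moderation flow.
# All thresholds, names and the human-review helper are assumptions.

from dataclasses import dataclass

REJECT_THRESHOLD = 0.9   # assumed: above this, the post is removed automatically
ACCEPT_THRESHOLD = 0.1   # assumed: below this, the post is published automatically

retraining_samples = []  # human decisions collected to keep the model up to date


@dataclass
class ModerationResult:
    decision: str   # "accept", "reject", or whatever the human moderator decided
    score: float    # model's estimated probability that the post breaks the rules


def ask_human_moderator(post_text: str) -> str:
    """Placeholder for the human moderation step (a review queue in a real system)."""
    return "reject"


def moderate(post_text: str, violation_probability: float) -> ModerationResult:
    if violation_probability >= REJECT_THRESHOLD:
        return ModerationResult("reject", violation_probability)
    if violation_probability <= ACCEPT_THRESHOLD:
        return ModerationResult("accept", violation_probability)
    # Borderline or previously unseen content: route to a human moderator and
    # keep the decision as a new training sample for updating the model.
    human_decision = ask_human_moderator(post_text)
    retraining_samples.append((post_text, human_decision))
    return ModerationResult(human_decision, violation_probability)
```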

Ultimately, the automated moderation process is a combination of Utopia’s tool, Suomi24’s own technical systems and human work.

Soon after the implementation of the Utopia system, users of the online forum discovered that messages from conversations started disappearing randomly. Some writers on the forum started to craft conspiracy theories about attacks on free speech. Others paid attention to the fact that similar messages were disappearing, for instance, those containing links. Writers concluded that the removals were machine-generated and referred to a dysfunctional and weird AI that was sabotaging their discussions.

Instead, it is probable that the removals were generated by bugs, or inconsistencies in the technical system of the online forum. The important point here is that what looked like a problem generated by the AI model was, in fact, an everyday information technology failure.

After realizing what was happening on the forum, the Suomi24 team openly discussed false removal decisions with the users of their site, telling them about their moderation practices and the division of labor between humans and the machine (Suomi24, 2019). The company representatives admitted that they had returned incorrectly removed messages to discussions. In the course of explaining their moderation practices, they also expressed a view on the tasks and the limitations of AI. As they mentioned in a blog post: “as with any AI, it is not (at least for now) perfect, but it is still an indispensable assistance in a service of this scale.” The Utopia AI Moderator model continues to remove a large portion of offensive posts, as well as posts that do not comply with the forum’s rules. However, the AI model never decides what is a proper or improper conversation; it only mirrors decisions human moderators have already made. Over time, it will become better at detecting things that have occurred before, but there will always be new decisions to make which need human judgement.

The collaboration between Utopia Analytics and Suomi24 reminds us of the way in which services develop gradually over time, how things might break down, and how essential it is to do repair work during the implementation process. Following recent upgrades, the technical system of Suomi24 has improved and the moderation process is expected to work much more smoothly in the future. Yet, machine decisions are only as good as the data that the machine uses. Humans – in this case, content moderators – continue to have a central role in procuring the data used for training the machine.

Increasing the efficiency of permit processes at the Finnish Immigration Service


Migri (Finnish Immigration Service) is a government agency responsible for making decisions in matters related to immigration, Finnish citizenship, asylum and refugee status. In 2019, Migri made a total of 104,000 decisions concerning these matters.

Long permit processing times cause anguish and uncertainty for the people subject to Migri’s decisions. This is the case particularly for those seeking asylum and refugee status. Other problems that repeatedly come up in public discussions include the residency permits of employees hired from abroad by Finnish firms (Keskuskauppakamari 2019), as well as international students who miss the start of their studies due to delays in permit processing (Helsingin Sanomat 2019c). The agency is currently exploring automation in order to speed up permit processes and make its operations more cost-effective (Migri 2019a). According to an IT consultancy spokesperson, the ongoing “smart digital agency” project aims to provide better services and improve internal processes.

Given the agency’s lack of resources to handle an increasing number of decisions (Migri 2018a), economic efficiency means reducing the need for human participation at different stages of the permit process. Against this background, automation is an attractive way to decrease human service interactions, as well as to focus the available human resources on those parts of the permit process where they are necessary.

Migri’s aim is to reduce the need to contact officials in person so that, by the end of 2020, 90% of all interactions take place via self-service channels. A practical example of this is the use of chatbots to automatically deliver basic service tasks (Migri 2018b). According to Migri, chatbots have eased the burden on their overwhelmed telephone customer service staff. Before the chatbots were launched in May 2018, the majority of calls did not get through at all: only around 20% of incoming calls were responded to. After chatbot automation, the share of calls responded to has increased to around 80%. This was possible because many of the calls involved routine issues. Chatbot automation also redirects service-seekers to other government agencies: for example, starting a business in Finland involves Migri, but also other agencies such as the Tax Administration or the Patent and Registration Office (Migri 2018c).

The other aspect of automation concerns the actual decision-making, which Migri considers the key method of speeding up permit processes (Migri 2019b). Piloted automated processes include citizenship applications and seasonal work certificates. Here, as well as in customer service, the aim is to automate those parts of the process that do not require human input. Significantly, Migri makes a distinction between partial and full decision automation: full automation refers to a process where an information system executes all phases of the decision process. According to Migri, full decision automation would only be applied to straightforward decisions that result in a positive outcome with a claim accepted or a permit granted. Negative decisions, or decisions that require case-by-case consideration or a discussion with the applicant, will not be fully automated. Full automation is, therefore, not suitable for processing asylum applications (Valtioneuvosto 2019). However, even asylum decision processes could be partially automated, so that those phases of the process that do not require human judgment could be carried out automatically. Even with these restrictions, it is notable that the potential for automation concerns a large share of all Migri’s decisions. In 2019, about 71,000 decisions (more than two-thirds of all decisions) concerned residence permits, and about 87% of these decisions were positive. Almost 12,000 citizenship decisions were made with a similar share of positive decisions (Migri 2018).

In its public statements, Migri has pointed to the organizational changes necessary for making the automation of permit processes possible, including rearrangement of the authority’s own operations (Migri 2019b). Even more pressing changes include amendments to legislation relating to personal data in the field of immigration administration. A proposal at the Parliament of Finland is aimed at renewing the regulations on Migri’s personal data processing to “meet the requirements related to digitalization”, and allow automated decisions to be made when certain preconditions are met (Ministry of the Interior 2019). The proposal underwent evaluation by the Constitutional Law Committee at the Parliament of Finland. In its statement (Eduskunta 2019), the committee expressed a critical view towards the proposal. In the committee’s view, the problems with the proposal concern the fulfillment of good governance requirements, and the responsibilities of officials for the legality of their actions. In addition, instead of approaching decision automation with authority-specific legislation, the committee would favor more general legislation on the topic, which should be amended with sector-specific rulings where necessary.

Migri’s plans to automate permit processes, then, take place in the context of internal and external pressures on the agency’s permit processes: an increased number of applications to process, a desire to focus resources on issues that require human judgment, demands for speedy application processing, and legislation that currently limits the employment of ADM. Simultaneously, proposals for amending the legislation seem to be running into constitutional conflicts, forestalling, or at least delaying, the authority’s automation plans.

Automatic identification of individual risk factors in social care and healthcare

In October 2019, the Japanese multinational IT service provider, Fujitsu, announced that it was developing an AI solution for South Karelia’s social care and healthcare district (known by its Finnish acronym Eksote) (Fujitsu 2019). The project employs machine learning methods with the aim of helping Eksote identify factors underlying social exclusion of young adults, as well as predicting associated risks. With the predictive model, social and healthcare professionals will be provided an overview of risk factors. According to Fujitsu’s press release, the model identifies some 90% of young adults susceptible to social exclusion. In practical terms, the model that is being used is derived from pseudonymized data taken from the use of Eksote’s services by young adults, and it uses this data to predict social exclusion outcomes defined by Eksote’s professionals. According to Eksote, the legislation on the secondary and combined use of healthcare data makes it possible to use only non-identifiable, pseudonymized data. This means that Fujitsu’s model cannot be used to identify individual young adults considered to be at risk of social exclusion; rather, the model produces a list of risk factors on a general level. The next step in the project is to examine whether it is possible, under the current legislation, to set up a consent-based system: a client’s consent would be asked for before using the predictive model on their individual data when they, for example, have an appointment with a social care or healthcare professional (Etelä-Saimaa 2019).

The project sits well in a continuum of Eksote’s projects involving predictive models. In June 2018, Eksote announced that it had developed, in collaboration with the Finnish IT firm, Avaintec, an AI model to predict problems experienced by children and young people (Eksote 2018a). The aim was to identify problems early on, so that an intervention could be made and support provided to families sooner rather than later. Much like in the 2019 project on young adults and social exclusion, this model was based on explicitly defined undesired “endpoints”: low grade averages, disciplinary interventions at school, high rates of class non-attendance, being taken into custody, acceptance into psychiatric care, and substance abuse (Eksote 2018b). The model made use of data gathered from the IT systems of maternity clinics, kindergartens, schools, healthcare, and mental healthcare providers, as well as social services – i.e., it combined data from different administrative branches, namely social care, healthcare, and education. The outcome was the identification of a total of 1,340 risk factors (Yle 2018), ranging from bad teeth in children, to parents missing maternity clinic appointments, to the child’s siblings bullying others at school. These examples also give an idea of the kinds of data that were employed when making predictions. Ideally, the model would make it possible to continuously keep track of risk factors of individual children.

From a technical point of view, Eksote’s information systems could be set up to signal an alarm when a given threshold is exceeded. Indeed, this was the ultimate desire expressed by the project’s developers, but the practical implementation of such feedback functionalities ran into conflict with existing legislation. While it is permissible to combine non-identifiable, individual-level data from different administrative branches for the purpose of research (Oppimaa n.d.), it would not be legal to do the same thing for identifiable individuals and on a continuous basis. The project, then, was considered a pilot or a trial, the modelling phase being the practically implemented part, and further implementation is pending legislative changes. However, when an AI-branded model remains on the level of general lists of risk factors and their associated probabilities, it provides limited help beyond traditional statistical research, let alone the intuition of experienced social work professionals. It is probably not surprising that when, for example, a child’s hygiene is neglected, their siblings have problems at school, or appointments are repeatedly missed, there may, or may not, be other problems later on in the child’s development.

Importantly, Eksote’s 2018 and 2019 AI projects operate with similar principles: they define negative outcomes for a given group of people, combine data from different administrative sources, and train a machine learning model with the aim of providing a list of risk factors with probabilities associated with negative outcomes. This kind of approach can be replicated across welfare contexts. Eksote also uses a similar model for the purpose of identifying risk factors in elderly people related to their decreased ability to function. It seems obvious to conclude that if it were possible to combine individual-level data from different administrative branches, models like these, seeking correlations for risk factors, could be implemented quickly into social and healthcare systems.
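As a rough illustration of this general pattern – and not of Eksote’s or Fujitsu’s actual models – the sketch below fits a simple classifier to synthetic individual-level data with a made-up binary endpoint, and then lists which of the invented factors are most strongly associated with the outcome. All feature names, data, and the outcome definition are assumptions.

```python
# Rough sketch of the pattern described above: combine individual-level data,
# define a binary negative outcome, fit a model, and report risk factors ranked
# by their association with the outcome. All data and names here are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pseudonymized features combined from different administrative sources.
feature_names = ["missed_appointments", "school_absences", "dental_issues", "benefit_spells"]
X = rng.poisson(lam=[2, 5, 1, 1], size=(500, 4)).astype(float)

# Invented endpoint for illustration only: higher counts make the outcome more likely.
logits = 0.4 * X[:, 0] + 0.2 * X[:, 1] - 3.0
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The output is a general, ranked list of risk factors, not predictions about individuals.
for name, coef in sorted(zip(feature_names, model.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name}: odds ratio ~ {np.exp(coef):.2f}")
```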

Policy, oversight and debate

AuroraAI for life-event management

The national AI program aims to turn Finland into a leading country in the application of AI. The program’s 2017 report on the objectives and recommended measures outlined how AI can facilitate the public sector’s development towards becoming a more efficient and effective service provider (Ministry of Economic Affairs and Employment 2017). Indeed, one of the key actions recommended in the report was to build the world’s best public services. As a practical step, the report describes a citizen’s AI assistant called Aurora.

Since these recommendations, the idea of the citizen’s AI assistant has expanded into an AuroraAI program led by the Ministry of Finance (Ministry of Finance 2019a). The above-mentioned 2017 report still described Aurora as something fairly easy to grasp, i.e., a citizen’s AI assistant, not unlike commercial digital assistants such as Siri, tirelessly helping with personalized services. Over the past two years, AuroraAI has become much broader in its aims and ultimately vaguer: it is “a concept for human-centric and ethical society in the age of artificial intelligence”, as the program’s development and implementation plan defines it (Ministry of Finance 2019b). In the “AuroraAI partner network”, it is envisioned that a group of service providers, including public sector organizations, private firms as well as NGOs, will jointly provide the services offered to citizens. The idea is to automatically identify “life events”, i.e., circumstances in people’s lives that give rise to specific service needs. AuroraAI is described by its promoters as “a nanny”, or “a good guardian” that identifies and predicts life events, and helps citizens to utilize public and private services by suggesting and offering the ones needed in these particular circumstances. Use case examples include a student moving to a new city, retirement from work, loss of employment, or changes in family relations. The actual outcomes of the program for citizens remain conceptual, but what is in the plans resembles a recommender system for public services (Ministry of Finance 2019c). In addition to offering services in a timely and frictionless manner, the program is simultaneously based on a currently dominant economic rationale: timeliness, personalization, targeting, and automated service provision are expected to increase efficiency and remove wastefulness.
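At this conceptual stage, one way to picture what a life-event-based recommender could look like – purely as a hypothetical sketch with invented events and services, not AuroraAI’s actual design – is a simple mapping from detected life events to candidate services:

```python
# Purely illustrative sketch of a "life event" based service recommender.
# The events and services are made up; AuroraAI's design remains conceptual.

LIFE_EVENT_SERVICES = {
    "moving_to_new_city": ["address change notification", "student housing services"],
    "loss_of_employment": ["unemployment benefit application", "job search services"],
    "retirement": ["pension application", "senior wellbeing services"],
}


def recommend(detected_events: list[str]) -> list[str]:
    """Return a de-duplicated list of services for the detected life events."""
    recommendations: list[str] = []
    for event in detected_events:
        for service in LIFE_EVENT_SERVICES.get(event, []):
            if service not in recommendations:
                recommendations.append(service)
    return recommendations


# Example: a detected unemployment "life event" yields benefit and job-search services.
print(recommend(["loss_of_employment"]))
```

Even in this toy form, the open questions raised below remain: who defines the mapping, and on what grounds a service is offered or withheld.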

From a legal and regulatory perspective, the AuroraAI initiative is far from a straightforward exercise. One of the foundational principles underlying the “life event” model is that data collected in various parts of the public sector will be brought together and used to proactively develop services for citizens. However, as the final report of the national AI program (Ministry of Economic Affairs and Employment 2019) points out, the existing legislation that covers the using, sharing, and combining of personal data collected by different branches of public administration “is very challenging” when it comes to novel uses of the same data. The report, therefore, identifies agile development of regulations as a necessary step to make new services based on life events possible, as well as to facilitate the use of AI to serve the public interest. For this purpose, the report proposes “sandbox” environments or restricted trial grounds for formulating new regulations and novel technologies. The idea is that consenting citizens would voluntarily participate in trials and pilot projects operating in the “sandbox”, and allow the secondary use of their personal data, and the development and testing of new AI services, without the need to amend existing regulations. Even if the sandbox model is realized, it remains to be seen what response it will receive from citizens; whether or not citizens are willing to “let the digital twin empower them”, as AuroraAI’s promotional material suggests.

In addition to the regulatory constraints related to combining public-sector data, the desirability of a public services recommender system should also be questioned. In the Finnish context – with its high general trust in the state and its services – some of AuroraAI’s aims, including timely and effective provision of public services, are unlikely to be broadly contested as such. However, the way that the project is currently imagined gives rise to a number of questions that merit careful consideration. For example, how would it be determined which services are either offered, or not, to citizens? The recommendations based on predictive models could go against accepted norms or notions of appropriateness. For example, offering divorce-related public services as soon as the statistical criteria for such a “family relations life event” are met, would likely be frowned upon. The services offered to citizens would need to be carefully curated, but on what legal or moral grounds? Another issue concerns the influence that recommendations could have on citizens’ actions. The project’s promoters say that AuroraAI empowers individuals by giving them control over the decisions they make. However, it is well established that “nudging” – by means of the design of choice architecture and appropriately timed suggestions (or lack thereof) – has an effect on the choices made. AuroraAI would, therefore, act as a decision guidance system, affecting, for example, which social benefits to apply for, or which health and well-being services to make use of. Effective recommendations would increase or decrease service use, and would, therefore, have financial consequences for the public sector. What, then, is optimized on the level of the public services system: is service usage maximized, are costs minimized, or is citizen well-being improved? What set of criteria are these decisions based on and who chooses them? So far, these broader questions about the automated offering of public services have received hardly any attention in public discussion.

Automated benefit processes at the Social Insurance Institution of Finland (Kela)

The Social Insurance Institution of Finland, known as Kela, is responsible for settling some 15.5 billion euros of benefits annually under national social security programs. Kela views AI, machine learning, and software robotics as integral parts of its future ICT systems. Kela’s ongoing developments in the automation field include customer service chatbots, detection and prevention of misunderstandings or fraud, and data analytics. Legislation on benefits is complex, spanning hundreds of pieces of separate regulations set in place over 30 years, and legislative compliance of automated procedures is a major concern for Kela. One particular issue, identified in the 2019 Automating Society report, was the need to fulfil requirements concerning how benefit decisions are communicated to citizens, so that the reasoning and results of automation are translated into an understandable decision.

The issue of communicating automatic decisions has since sparked an investigation into Kela’s ADM processes. Already in 2018, Finland’s Chancellor of Justice – an official who supervises how authorities comply with the law, and who advances the legal protection of citizens – received a complaint concerning the communication of Kela’s unemployment benefit decisions. When investigating the complaint, the Chancellor paid attention to the fact that Kela had implemented ADM procedures to settle the benefits. There were tens of thousands of such automated decisions, and no one person to contact who could provide additional information on the decisions (Oikeuskanslerinvirasto 2019a). As a result, in October 2019, the Chancellor started a further investigation into the use of automation in Kela’s benefit processes.

The Chancellor’s information request to Kela (Oikeuskanslerinvirasto 2019b) provides an overview of compliance and legal protection issues relevant to public sector ADM projects. The information request focused on the requirements of good governance, the legal accountability of officials for their actions, and how these requirements and responsibilities relate to automated decision processes. The Chancellor noted that while Kela publicly states that it has a right to make decisions automatically, it does not provide information on which decisions it automates, and whether this refers to partial or full automation, or whether it should be considered decision-making supported by automation. Based on these considerations, the Chancellor requested more detailed information on Kela’s ADM processes, including what benefit decisions it automates and what the legal basis of such decision automation is. Kela’s views on ensuring good governance, as well as the legal protection of the citizens concerned, were also requested. Kela was asked to provide details on how it plans to further develop decision automation, for example, what types of ADM can be employed in the future. Furthermore, Kela was asked to take a stand on ADM when it comes to the accountability of officials; in particular, how responsibilities are distributed among Kela’s leadership, the developers of the ADM system, and the individual officials handling the decisions. Finally, Kela was requested to provide details on the fulfillment of citizens’ rights to information on how decisions are made, how decision-making algorithms work, and the content of programmed decision-making rules.

The information request that the Chancellor sent to Kela also points out a potential distinction between partial and full automation of decisions. This suggests that whether officials employ automation as a tool or an aide for decision-making, or whether decisions are made completely automatically, may have ramifications for the conduct of good governance and the distribution of officials’ responsibilities for the legality of their actions. In a broader context, ADM is sometimes framed as an incremental development: a continuation of older, technologically simpler means of automated information processing. In Kela’s case, “traditional” means, such as batch processes and traditional software code, have been employed to automate parts of benefit processes over previous decades. However, a distinction between completely and partially automated processes suggests that, after a certain point, increasing the level of decision automation is not simply a shift from less to more automation, but it brings about a qualitative shift to a different kind of process that has new implications for compliance with legal requirements.

The Deputy Parliamentary Ombudsperson considers automated tax assessments unlawful

In the previous Automating Society report, we discussed two inquiries by the Deputy Parliamentary Ombudsperson of Finland, Maija Sakslin, resulting from complaints about mistakes the Tax Administration made by using automated tax assessments. While the mistakes had already been corrected, the complaints resulted in the Deputy Ombudsperson’s further investigation into automated tax assessments, paying special attention to how automated processes fulfil the requirements of good governance and accountability of officials, as well as to the legal protection of taxpayers. She stated that the taxpayers’ legal rights to receive accurate service and justification of taxation decisions are currently unclear, and requested clarification on these issues from the Tax Administration (Oikeusasiamies 2019).

In November 2019, the Deputy Ombudsperson gave a decision on the matter which contains clarification on the scale of automated tax assessment. According to the Tax Administration, hundreds of thousands of issues are handled annually without human involvement, so that the information system automatically carries out the whole process. The automated procedures include sending reminders, but also making decisions on additional taxes, tax collection and monitoring of payments made. According to the Tax Administration, more than 80% of all tax decisions are fully automated, and if automation were banned, more than 2000 new employees would need to be hired (Helsingin Sanomat 2019d).

The Deputy Ombudsperson points out several issues with the lawfulness of the Tax Administration’s automated procedures. The first issue is the legal basis of automation. The Deputy Ombudsperson notes that automatic tax assessments were based on more general legislation not directly related to decision automation, and states that their legal basis does not, therefore, fulfill the standards set by constitutional law. Employing automated procedures would require an unambiguous legislative basis that defines, among other things, which issues are to be processed automatically. In addition, the legislation lacked a definition of an algorithm, and one would be necessary in order for algorithms to be made available for public scrutiny. The second issue is the accountability of public servants for automated decisions. The Tax Administration handles officials’ legal responsibilities for their actions by naming process owners for all automated processes. The process owner, then, is considered responsible for decisions made under that particular process. In the Deputy Ombudsperson’s view, such an arrangement means that the accountability of officials has been defined in an indirect manner, which means that the personal accountability of individual officials remains unclear under both constitutional and criminal law. The third critical issue concerns the requirements of good governance. According to the Deputy Ombudsperson, the fulfillment of the requirements of good governance would imply that taxpayers are informed when they are subject to automated procedures. Citizens dealing with the Tax Administration, then, have the right to know the basis of automated processes, as well as when their issue has been resolved automatically.

Due to these considerations, the Deputy Ombudsperson states that the legal basis of the Tax Administration’s automated procedures is unclear, and the procedures are, therefore, unlawful. The decision does not mean that the use of automation should be immediately ceased; rather, the Deputy Ombudsperson stresses the urgency of investigating the legislative needs introduced by ADM procedures. Furthermore, she deplores the broad employment of automated procedures in the Tax Administration without a proper investigation into the legislative needs.

Digital Minds hiring service discontinued

The most controversial Finnish case in the AlgorithmWatch report (2019) featured a company called Digital Minds, founded by two Finnish psychologists, which aimed to develop “third-generation” assessment technology for employee recruitment. After the report was published, Matthias Spielkamp and Nicolas Kayser-Bril wrote a follow-up story about Digital Minds for Politico (Spielkamp and Kayser-Bril 2019) underlining the need for critical reflection:

“A Finnish company has rolled out a new product that lets potential employers scan the private emails of job applicants to determine whether or not they would be a good fit for the organization. The company, Digital Minds, portrays its offering as something innocuous. If applicants give their consent, what’s the harm? The truth is: We don’t know the answer to that question. And that’s what makes new, potentially intrusive and discriminatory technologies like this one so scary.”

For Juho Toivola, one of the two founders of Digital Minds, the piece in Politico was a “German take on the matter”, reflecting a need to have a precedent in the fight against algorithmic powers. According to Toivola, the critique of their service fails to understand the problem they are trying to solve; the goal is to make employee assessment more convenient and efficient by utilizing existing data, rather than collecting all data separately with each recruitment. The personality analysis is done with IBM’s Watson, a tool commonly used for personality assessment across the globe. The analysis focuses, for instance, on how active people are online and how quickly they react to posts or emails. Five different personality characteristics are analyzed, including agreeableness, conscientiousness, emotional range, and how extroverted or outward-looking a person is.
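Digital Minds has not published the details of its analysis; as a purely hypothetical illustration of the kind of activity signals mentioned above – how active a person is and how quickly they react to messages – the following sketch computes two such features from made-up email timestamps. It is not Digital Minds’ or Watson’s method.

```python
# Hypothetical illustration only: simple online-activity features (reaction time,
# activity level) computed from invented message timestamps.

from datetime import datetime, timedelta

# Made-up example data: (received_at, replied_at) pairs for an applicant's emails.
emails = [
    (datetime(2019, 5, 6, 9, 0), datetime(2019, 5, 6, 9, 40)),
    (datetime(2019, 5, 7, 13, 15), datetime(2019, 5, 8, 8, 5)),
    (datetime(2019, 5, 9, 10, 30), datetime(2019, 5, 9, 11, 0)),
]

# Feature 1: average reaction time to incoming messages.
latencies = [replied - received for received, replied in emails]
avg_latency = sum(latencies, timedelta()) / len(latencies)

# Feature 2: overall activity level (replies per day over the observed period).
period_days = (emails[-1][1] - emails[0][0]).days or 1
replies_per_day = len(emails) / period_days

print(f"average reaction time: {avg_latency}, replies per day: {replies_per_day:.2f}")
```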

In order to get an assessment of the possible harm of the Digital Minds service and its legality in relation to the GDPR, Nicolas Kayser-Bril reported Digital Minds to the Finnish Data Protection Ombudsperson. In May 2019, Virpi Hukkanen, a reporter at Finland’s national public service broadcasting company, wrote a piece about the ongoing process with the Ombudsperson, suggesting that Digital Minds is offering a service that is legally problematic (Yle 2019). The Finnish Data Protection Ombudsperson, Reijo Aarnio, told Yle that he suspected the personality assessment deduced by means of email correspondence violates the Labor Privacy Act, which states that the information must be collected from the jobseekers themselves. Aarnio also questioned the validity of the jobseeker’s consent when asked for permission to analyze emails. If the candidate is in a vulnerable position and needs a job, he or she might not be in a position to decline access to their data. Moreover, as with letters, emails are protected by the confidentiality of correspondence.

Remarkably, however, the news piece revealed that fewer than ten jobseekers had agreed to participate in Digital Minds’ analysis of their social media posts, and only one had agreed to participate in the analysis of email correspondence. No such analysis had been performed as part of a recruitment process. The fact that the company had very few actual clients gave a new twist to the debate concerning Digital Minds. As Toivola emphasized, he was talking about ‘a proof of concept’ rather than an actual service. It turned out that, either intentionally or unintentionally, Toivola had been collapsing contexts: the founders of Digital Minds had experience of dozens of clients and had been involved in thousands of assessments of job applicants. But these assessments had mostly been done by means of conventional analysis methods within the recruitment context. After the news piece, Toivola actively participated in social media discussions about their service, bringing to the fore their aims and intentions. Shortly after that, however, the service was discontinued. According to Toivola, Digital Minds is rethinking how to position their future offerings in an ethically more robust manner.

One of the lessons learned from Digital Minds is the importance of clarifying how exactly ADM becomes a part of making decisions that have real-life consequences. Similarly to other companies that offer hiring services, Digital Minds waived their responsibility for real-life consequences by suggesting that they do not make actual hiring decisions, but merely automate the process of assessment, on the basis of which employers then make decisions. Ultimately, it is the organizational environment, and the divisions of labor that socio-technically arrange ADM systems, that determine whether uses of automation are just and ethically robust. Machines do not care about real-life consequences, and ADM system implementers should take this into account.

Key takeaways

The Finnish cases underline the importance of thoroughly investigating cases when assessing the use and the potential harm of ADM. When the aim of companies is to sell their services, they can alter the details and exaggerate numbers in a way that supports their marketing claims. The investigations should be prepared for the fact that the details about ADM systems – which are often laborious to uncover – contain the most important information. This means that in seeking more robust knowledge about the fundamental issues at the heart of the debate on AI, algorithms, and ethics, company representatives need to be part of the dialogue. That dialogue can only be nurtured in a culture that values openness: ideally, developers of ADM systems need to feel that they are not being attacked, but that they are participants in an ongoing societal debate. When assessing a case like Digital Minds, for instance, we are not merely talking about one single start-up company, but a larger trend in automation processes making important decisions about people’s lives.

The collaboration between Utopia Analytics and Suomi24 reminds us of the everyday reality of socio-technical systems. The way many technical systems are built, with layers of software written by groups of coders who do not necessarily communicate with each other, makes them vulnerable to various kinds of breakages that might be impossible to predict beforehand. This stresses the fact that the implementation of AI tools is dependent on their integration into existing technical systems. With insufficient quality controls in place it might be impossible to know beforehand how the AI tools will behave.

In terms of public sector cases, the relationship between legislation and plans to implement fully automated decision-making procedures – i.e., procedures that do not involve humans at all – emerges as one key issue. The Tax Administration and Kela have already broadly implemented ADM, and the lawfulness of their procedures is now under scrutiny by supervisory authorities. The Deputy Parliamentary Ombudsperson’s investigation and decision on the use of ADM in the Tax Administration points out that the legal basis of fully automated procedures is insufficient. In her decision, the Deputy Ombudsperson encourages relevant authorities to investigate the legislative needs arising from automation.

This interpretation of the law is important for the broader context of ADM in the Finnish public administration, suggesting that technical procedures that are already widely employed are at odds with existing legislation. Indeed, the use of ADM in Kela is undergoing a similar investigation by the Chancellor of Justice.

Driven by efficiency-related pressures, among other things, other Finnish authorities are also either employing or planning to employ automated procedures in their decision-making; the latter is exemplified by the Migri case.

When it comes to the legal status of ADM procedures, Migri’s case is different from the other cases mentioned in this report. The automation plans are still pending new legislation that suffered a setback in the Parliament’s Constitutional Law Committee. This highlights the need for more general, rather than case-by-case, legislation on the ADM procedures used by the public administration. The Ministry of Justice has launched a project to develop general legislation to ensure that the requirements of legality, good governance, legal protection of citizens, and accountability are met when the public sector implements ADM procedures (Oikeusministeriö 2020). A preliminary assessment of legislative requirements was published in February 2020. The way in which Finnish public officials are accountable for their decisions under criminal and constitutional law seems to give the public sector ADM discussion in Finland a distinct flavor.

The other public sector cases point to different ways in which existing legislation holds back the public sector’s automation intentions. From the perspective of Eksote, and similar care providers, automated detection of risk factors is well-intentioned, reasonable, and cost-effective. These kinds of projects, however, tend to individualize the treatment of risk, with possible punitive outcomes when potential problems are brought to the awareness of officials. In a similar way, legislation is at odds with AuroraAI’s plans to access data from different sectors of public administration. While the purpose of AuroraAI is not to intervene when risks are identified, but rather to recommend when needs are identified, the similarities with Eksote’s risk monitoring system are evident. The purpose of the legislation that holds back automation plans is not to prevent data use, but rather to set boundaries for it. With their surveillance logic, monitoring systems that track individuals’ actions might undermine societal cohesion and generate a culture of distrust, and there are good reasons to prevent this.

References

Accenture (2016): Accenture tukee Migriä Digitaalinen virasto -hankkeessa, https://www.accenture.com/fi-en/company-news-release-maahanmuuttovirastoa-digitaalisaatiohankkeessa

Eduskunta (2019): Valiokunnan lausunto PeVL 7/2019 vp – HE 18/2019 vp, https://www.eduskunta.fi/FI/vaski/Lausunto/Sivut/PeVL_7+2019.aspx

Eksote (2018a): Lasten ja nuorten ongelmia ennakoidaan Suomen laajimmalla keinoälymallilla Lappeenrannassa, http://www.eksote.fi/eksote/ajankohtaista/2018/Sivut/Lasten-ja-nuorten-ongelmia-ennakoidaan-Suomen-laajimmalla-keinoälymallilla-Lappeenrannassa.aspx

Eksote (2018b): Digitalisaatio tulevaisuuden oppilashuollossa? Case Eksote/Lappeenrannan kaupunki, https://vip-verkosto.fi/wp-content/uploads/2018/10/Digitalisaatio-tulevaisuuden-oppilashuollossa-Case-Eksote-24.9..pdf

Etelä-Saimaa (2019): Eksote yrittää pitää nuoret aikuiset elämässä kiinni keinoälyn avulla – Taistelu syrjäytymistä vastaan jatkuu asiakkaiden kanssa toteutettavalla pilottihankkeella, https://esaimaa.fi/uutiset/lahella/79450bc3-1641-4a0f-bcea-11df63988bdc

Fujitsu (2019): Fujitsu ja Eksote selvittävät eteläkarjalaisten nuorten syrjäytymisriskiä tekoälyratkaisulla, https://www.fujitsu.com/fi/about/resources/news/press-releases/2019/eksote-tekoaly.html

Helsingin Sanomat (2019c): Yli 800 ulkomaisen opiskelijan oleskeluluvat juuttuivat käsittelyyn, osa korkeakouluista joutuu palauttamaan kymmenientuhansien eurojen arvosta lukuvuosimaksuja, https://www.hs.fi/talous/art-2000006306736.html

Helsingin Sanomat (2019d): Verottajan robotti on karhunnut ihmisiltä liikaa rahaa, ja apulaisoikeusasiamiehen mukaan automaattinen päätöksenteko rikkoo myös perustuslakia, https://www.hs.fi/kotimaa/art-2000006321422.html

Keskuskauppakamari (2019): Työperäisellä maahanmuutolla korjausliike väestöennusteeseen, https://kauppakamari.fi/2019/09/30/tyoperaisella-maahanmuutolla-korjausliike-vaestoennusteeseen/

Migri (2018a): Annual Report 2017, https://migri.fi/documents/5202425/6772175/2017+Annual+Report

Migri (2018b): Kamu AI Chatbot, https://www.slideshare.net/DigitalHelsinki/tekoly-ja-me-osa-2-kamuaichatbotmigrinesitys2018

Migri (2018c): Have you met Kamu, PatRek and VeroBot? Chatbots join forces to offer advice to foreign entrepreneurs in Finland, https://migri.fi/en/article/-/asset_publisher/tunnetko-kamun-patrekin-ja-verobotin-chatbotit-neuvovat-yhdessa-ulkomaalaista-yrittajaa

Migri (2019a): Finnish Immigration Service Strategy 2021, https://migri.fi/documents/5202425/9320472/strategia2021_en/b7e1fb27-95d5-4039-b922-0024ad4e58fa/strategia2021_en.pdf

Migri (2019b): Finnish Immigration Service to pursue a considerable reduction of processing times, https://migri.fi/artikkeli/-/asset_publisher/maahanmuuttovirasto-tavoittelee-kasittelyaikojen-voimakasta-lyhentamista?_101_INSTANCE_FVTI5G2Z6RYg_languageId=en_US

Ministry of Economic Affairs and Employment (2017): Finland’s age of artificial intelligence, https://www.tekoalyaika.fi/en/reports/finlands-age-of-artificial-intelligence/

Ministry of Economic Affairs and Employment (2019): Finland leading the way into the age of artificial intelligence, https://www.tekoalyaika.fi/en/reports/finland-leading-the-way-into-the-age-of-artificial-intelligence/

Ministry of Finance (2019a): Implementation of the national AuroraAI programme, https://vm.fi/en/auroraai-en

Ministry of Finance (2019b): AuroraAI – Towards a human-centric society. https://vm.fi/documents/10623/13292513/AuroraAI+development+and+implementation+plan+2019%E2%80%932023.pdf

Ministry of Finance (2019c): #AuroraAI – Let your digital twin empower you, https://www.youtube.com/watch?v=A2_hlJrEiWY

Ministry of the Interior (2019): Renewal of personal data legislation in the field of immigration administration progressing, https://valtioneuvosto.fi/artikkeli/-/asset_publisher/1410869/maahanmuuttohallinnon-henkilotietolainsaadannon-uudistus-etenee?_101_INSTANCE_YZfcyWxQB2Me_languageId=en_US

Oikeusministeriö (2020): Automaattista päätöksentekoa koskevan hallinnon yleislainsäädännön valmistelu. OM021:00/2020. https://oikeusministerio.fi/hanke?tunnus=OM021:00/2020O

Oikeuskanslerinvirasto (2019a): Työmarkkinatukipäätöksessä ilmoitettavat tiedot lisätietojen antajasta, https://www.okv.fi/media/filer_public/f4/06/f4063426-fe67-48e0-ae6a-1ad7eb2c8feb/okv_868_1_2018.pdf

Oikeuskanslerinvirasto (2019b): Selvityspyyntö OKV/21/50/2019, https://www.okv.fi/media/filer_public/9d/5e/9d5e3a9e-9af5-425a-8c04-8da88de0c058/okv_21_50_2019.pdf

Oikeusasiamies (2019): Verohallinnon automatisoitu päätöksentekomenettely ei täytä perustuslain vaatimuksia, https://www.oikeusasiamies.fi/r/fi/ratkaisut/-/eoar/3379/2018

Oppimaa n.d.: Ennustetyökalun kehitystyö – Lasten ja nuorten ongelmia ennakoidaan Suomen laajimmalla keinoälymallilla Lappeenrannassa, https://www.oppimaa.fi/oppitunti/ennustetyokalun-kehitystyo/

Spielkamp, Matthias and Kayser-Brill, Nicolas (2019): Resist the robot takeover, https://www.politico.eu/article/resist-robot-takeover-artificial-intelligence-digital-minds-email-tool/

Suomi24 (2019): Moderointi puhuttaa – tällaista se on Suomi24:ssä, https://keskustelu.suomi24.fi/t/16034629/moderointi-puhuttaa---tallaista-se-on-suomi24ssa-(osa-1)

Utopia Analytics (2020): Utopia Analytics website, https://utopiaanalytics.com/

Yle (2018): Reiät lapsen hampaissa voivat kieliä huostaanoton riskistä – tutkimus löysi yli 1000 merkkiä, jotka ennustavat lapsen syrjäytymistä, https://yle.fi/uutiset/3-10313460

Yle (2019): Psykologien rekry-yritys skannaa jopa työnhakijan sähköpostin ja tekee siitä analyysin – Tietosuojavaltuutettu haluaa selvityksen palvelusta, https://yle.fi/uutiset/3-10798074

Team

Minna Ruckenstein

Minna Ruckenstein is an associate professor at the Consumer Society Research Centre and the Helsinki Centre for Digital Humanities at the University of Helsinki. She directs a research group that explores economic, social, emotional, and imaginary aspects of algorithmic systems and processes of datafication. Recent projects have focused on algorithmic culture and rehumanizing automated decision-making. The disciplines of anthropology, science and technology studies, economic sociology, and consumer research underpin her work. Minna has published widely in respected international journals, including Big Data & Society, New Media & Society, and Social Science & Medicine. Prior to her academic work, she was a journalist and an independent consultant, and this professional experience has shaped the way she works, in a participatory manner with the stakeholders involved. Her most recent collaborative projects have explored practices of content moderation and data activism.

Tuukka Lehtiniemi

Tuukka Lehtiniemi is an economic sociologist and a postdoctoral researcher at the Centre for Consumer Society Research at the University of Helsinki. He is broadly interested in how the uses we invent for new technologies are shaped by how we imagine the economy to work. His current research focuses on the data economy and automated decision-making technologies. He previously worked at Aalto University, and, prior to that, in expert positions in the Finnish public sector. In 2018, he was a fellow at the Alexander von Humboldt Institute for Internet and Society in Berlin. His work has been published in New Media & Society, Big Data & Society, and Surveillance & Society.