IAMA makes careful decision-making about the deployment of algorithms possible
Utrecht Data School and Prof. Janneke Gerards develop Impact Assessment Mensenrechten en Algoritmes
In the future, algorithms can support governments and companies in carrying out their legal obligations. However, inaccuracy, ineffectiveness or, worse, violations of human rights must be prevented. A case in point is the algorithm used by the Dutch Tax Authorities that led to racial profiling; a scandal that almost everyone now knows as the ‘toeslagenaffaire’.
At the request of the Ministry of the Interior and Kingdom Relations of the Netherlands, Mirko Tobias Schäfer, Arthur Vankan and Iris Muis of Utrecht Data School and Professor of Fundamental Rights Janneke Gerards developed the ‘Impact Assessment Mensenrechten en Algoritmes’ (IAMA). The IAMA is an instrument, essentially a manual, that supports organisations in making decisions about the development and deployment of algorithms. It describes, step by step, the discussion points that must be addressed before an algorithm is implemented. By setting out what a careful decision-making and implementation process for algorithms looks like, the IAMA can help prevent situations such as the ‘toeslagenaffaire’.
Iris Muis stated in LAA magazine:
IT, data managers, lawyers, really everyone involved needs to be at the table. Only then can you all make the right decisions. It's not just about human rights. The IAMA starts with the basics: what exactly are you going to use the algorithm for? Is there a legal basis for it? How is the data collected? Is it reliable? The IAMA is the peg on which all existing, relevant frameworks concerning algorithms in the Dutch context are hung. You go through them one by one. Such an overview did not exist before.
Three phases
The IAMA describes the decision-making process in three phases. In the preparation phase it is determined why an algorithm will be used and what the expected effects are. For example, one of the first questions that policymakers have to consider is what the concrete goal of deploying the algorithm is. The second phase, which covers what the so-called input (data) and throughput (algorithm) should be, addresses the more technical aspects. For example, the 'garbage in, garbage out' principle states that if one uses poor-quality data, the output of the algorithm will also be of poor quality. In the third phase, on output (outcomes), it is determined how the outcomes generated by the algorithm will be handled. This entails, for example, that people must have sufficient opportunity to overrule decisions made by the algorithm.
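To make the three phases more concrete, here is a minimal sketch, not part of the IAMA itself, that models them as a checklist of discussion points. The phase names follow the description above; the individual questions, class and function names are illustrative assumptions only.

```python
# Illustrative sketch (not part of the IAMA): the three phases modelled as a
# checklist of discussion points that must all be addressed before deployment.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    questions: list[str]
    answers: dict[str, str] = field(default_factory=dict)

    def is_complete(self) -> bool:
        # Every discussion point needs a documented answer.
        return all(self.answers.get(q, "").strip() for q in self.questions)

IAMA_PHASES = [
    Phase("Preparation", [
        "What is the concrete goal of deploying the algorithm?",
        "What are the expected effects?",
    ]),
    Phase("Input and throughput", [
        "How is the data collected, and is it reliable?",  # garbage in, garbage out
        "What kind of algorithm will be used, and why?",
    ]),
    Phase("Output", [
        "How will the outcomes generated by the algorithm be handled?",
        "Can people overrule decisions made by the algorithm?",
    ]),
]

def ready_for_deployment(phases: list[Phase]) -> bool:
    """Deployment is only considered once every phase has been fully addressed."""
    return all(phase.is_complete() for phase in phases)
```

In this sketch, an organisation would only call ready_for_deployment once every question in every phase has a documented answer, mirroring the step-by-step character of the instrument.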
Amnesty International report
On 21 October 2021 Amnesty International published the report Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch benefits scandal. Like the Dutch Data Protection Authority in its investigation Onderzoek Belastingdienst kinderopvangtoeslag, Amnesty concluded that the privacy of individuals had been violated, adding that the discriminatory algorithms violated human rights. According to the report, “the measures that the government says it is taking are inadequate. For example, officials are not obliged to map out the human rights risks in advance, and there is insufficient supervision of algorithms. Moreover, government agencies are allowed to keep the use and operation of algorithms secret.”
According to Amnesty International, a future scandal is plausible, at least until good rules are drawn up. Among other things, they suggested introducing “a binding human rights test prior to the design and during the use of algorithmic systems and automated decision-making.”
Human rights!
As algorithms can seriously affect people's fundamental rights, those involved in the decision-making and implementation process of algorithms will have to pay attention to human rights. They will first have to ask themselves whether, and to what extent, an algorithm affects people's fundamental rights, and then determine how they can prevent or mitigate that infringement or, if that is not possible, whether the infringement is acceptable. It may be, for example, that the societal benefits outweigh the expected infringement of fundamental rights, so that the infringement can be justified. For example, the recently introduced ‘coronatoegangsbewijs’ (COVID entry pass) restricts the freedom of movement of individuals, but because it serves public health, the infringement was found to be justified.
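Purely as an illustration, the reasoning steps above can be written out as a simple decision flow. The sketch below is a hypothetical rendering of that reasoning, not a component of the IAMA; the function name and its inputs are assumptions made for the example.

```python
# Illustrative only: the fundamental-rights reasoning described above as a
# simple decision flow. Real assessments require human deliberation, not code.
def fundamental_rights_check(
    affects_rights: bool,
    can_be_prevented_or_mitigated: bool,
    benefits_outweigh_infringement: bool,
) -> str:
    if not affects_rights:
        return "proceed: no fundamental rights affected"
    if can_be_prevented_or_mitigated:
        return "proceed only after preventing or mitigating the infringement"
    if benefits_outweigh_infringement:
        return "infringement may be justified (e.g. public health)"
    return "do not deploy: unjustified infringement of fundamental rights"
```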
Careful consideration is therefore essential. In these and all other steps of the decision-making and implementation process of algorithms, the IAMA provides policymakers with discussion points and assessment frameworks so that they can make a carefully considered decision about the deployment of an algorithm.