Discriminated by an algorithm

Algorithms classify us: poor, rich, customer, non-customer, homosexual, poorly educated, and so on. Decisions, from tax collection to personnel policy, are based on those classifications. That has consequences for our constitutional rights.

The dark sides of algorithms, Big Data, the Internet of Things and Artificial Intelligence frequently make the news. Much of the attention goes to the usual suspects: Facebook, Google, Twitter and YouTube. The research project “Algorithms and Constitutional Rights” aims to look beyond them, focusing on the practices of smaller businesses, insurers, banks and, of course, the government. Business and government decisions based on algorithms may have consequences for our constitutional rights.

On to Parliament

Kajsa Ollongren, Dutch Minister of the Interior and Kingdom Relations, calls the project, which she commissioned, “important” and “solid” in an accompanying letter to parliament. The project focuses on, among other things, the impact of algorithms on privacy rights, freedom rights and the right to equal treatment.

Involved in the project are Professor Janneke Gerards (Fundamental Rights Law), Professor Remco Nehmelman (Public Institutional Law) and Legal Research Master’s student Max Vetzo, who, according to Gerards, was responsible for an important part of the research. All three researchers, based at Utrecht University, aim to map the bottlenecks.

 

Discrimination by an algorithm, how does that work?

Vetzo: “Algorithms are often used to distinguish between people or situations, for instance to determine who gets a mortgage and who doesn’t, or to charge different customers different insurance premiums. The problem is the assumption that algorithms – unlike humans – make such distinctions ‘neutrally’. That is not the case. The data an algorithm works with, as well as the algorithm itself, can contain bias and prejudice, leading to discrimination.”

People assume that algorithm outcomes are ‘neutral’. That is not the case.
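To make that concrete: the following minimal sketch shows how a model trained on biased historical decisions keeps discriminating even when the protected attribute itself is left out. It is not taken from the report; the data is synthetic and the “postal code” proxy is invented for illustration.

```python
# A minimal sketch of bias surviving the removal of a protected attribute.
# All data is synthetic; the "postal code" proxy is invented for
# illustration and this is not the method from the report.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute: group 0 vs. group 1.
group = rng.integers(0, 2, n)

# Postal code correlates strongly with group membership (a proxy).
postal_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical decisions were biased: group 1 applicants were often
# rejected regardless of income.
income = rng.normal(50, 10, n)
biased_rejection = (group == 1) & (rng.random(n) < 0.5)
approved = ((income > 45) & ~biased_rejection).astype(int)

# Train WITHOUT the protected attribute: only income and postal code.
X = np.column_stack([income, postal_code])
model = LogisticRegression().fit(X, approved)

# The model still approves group 0 far more often: the bias travelled
# through the proxy, even though "group" appears nowhere in the model.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

Nothing in the trained model refers to the group at all, which is exactly what makes this kind of discrimination hard to detect, let alone prove.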

Deliberate discrimination?

Vetzo: “Discrimination can be an unintended side effect of algorithmic decision-making, but, as we argue in the report, every form of unintended discrimination can also be purposefully orchestrated. Because algorithms can be incredibly complex, especially in cases of ‘deep learning’, it is difficult to grasp how and on which (possibly discriminatory) grounds a certain algorithmic decision is made. That makes it hard to transparently justify decisions made with the help of algorithms, which in turn makes possible discrimination hard to prove. And because algorithms are used in a multitude of fields – from crime control to education to taxation – these discrimination problems occur more and more often.”

“Equality of arms” is being threatened, you write. That sounds serious.

Gerards: “It is! Say you are rejected for a job and you suspect something went wrong in the procedure, in which an algorithm was used. Try and prove that! How are you going to fight that decision? You don’t have the algorithm; the opposing party does. The opposing party can refuse to give insight into how the algorithm works, because the creator owns the rights to it. Or the opposing party can dodge responsibility by claiming to merely use the algorithm. In short: one party possesses the necessary information; the other doesn’t. Moreover, without insight into the algorithm, it is difficult for a judge to pass judgement. The same applies to disputes about price discrimination in sales offers, for example, or in insurance premiums.”

Algorithms can support judges the way Amazon suggests products to customers: ‘With this category of violations under these special circumstances, other judges usually impose fine X.’

Are judges already using algorithms to base their judgements on?

Gerards: “In the Netherlands, not very often, but it can be a convenient tool, for instance for calculating the amount of a fine. Data about previous court cases can, in addition to the law, guidelines and experience, support the judge. Almost like Amazon provides a suggestion to the customer: ‘With this category of violations under these special circumstances, other judges usually impose fine X.’ That can increase legal equality. It would also serve as an example of how to use an algorithm properly: as a tool. Technological knowledge in addition to human knowledge.”
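Such a decision-support tool need not be complex. A minimal sketch, with entirely hypothetical case data (the categories, circumstances and amounts below are invented), could look like this: it merely summarises what comparable past cases decided, leaving the judgement itself to the judge.

```python
# A minimal sketch of a fine-suggestion tool. The case data is entirely
# hypothetical; a real system would draw on a database of published
# judgements.
from statistics import median
from typing import Optional

past_cases = [
    # (violation category, circumstances, fine in euros)
    ("speeding", "repeat offender", 900),
    ("speeding", "repeat offender", 1100),
    ("speeding", "first offence", 400),
    ("speeding", "first offence", 350),
]

def suggest_fine(category: str, circumstances: str) -> Optional[float]:
    """Median fine that other judges imposed in comparable cases."""
    fines = [fine for cat, circ, fine in past_cases
             if cat == category and circ == circumstances]
    return median(fines) if fines else None

# "With this category of violations under these circumstances,
# other judges usually impose fine X."
print(suggest_fine("speeding", "repeat offender"))  # -> 1000.0
```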

Is the complexity of algorithms comprehensible to a judge?

Gerards: “That can prove challenging. Some algorithms are self-learning. As they feed themselves with new data, they can get smarter, but also more stupid, with the outcomes drifting away from their original intention. As with every technological advancement, it’s a matter of trial and error. At a workshop on this subject, it was suggested that you might be able to certify algorithms, as you would food or medicine. Has an algorithm been tested before it enters the market? What data does it contain? And what are the side effects? That could help, but with self-learning algorithms that, too, is challenging. Either way, jurists and policy makers always have to make a translation: how does this decision influence someone’s life or a certain situation? Do we want it to? We also have to think about who is responsible for the decision: the builder of the algorithm, the one using it, or another party?”

We might be able to certify algorithms, as you would food or medicine. Has it been tested? What data does it contain? Are there side effects?
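What might one such certification test look like? One simple screen that exists today is the ‘four-fifths rule’ from US employment practice: no group’s selection rate should fall below 80% of the highest group’s rate. Below is a minimal sketch with invented decisions; it is not from the report, and a real certification scheme would of course need far more than this.

```python
# A minimal sketch of one test a certification scheme could run before an
# algorithm "enters the market": the four-fifths rule, a rough screen for
# disparate impact borrowed from US employment practice. The decisions
# below are illustrative, not part of the report.
import numpy as np

def passes_four_fifths(decisions: np.ndarray, groups: np.ndarray) -> bool:
    """True if no group's positive-decision rate falls below 80% of the
    highest group's rate."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates) >= 0.8

# Invented outcomes: group 0 is selected 3 times out of 5,
# group 1 only 2 times out of 5.
decisions = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(passes_four_fifths(decisions, groups))  # 0.4/0.6 < 0.8 -> False
```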

What tasks do you see for jurists?

Vetzo: “The most important task for jurists, I think, is to collaborate with IT experts to tackle the problems described in our book. The human rights issues we address are urgent and require a combination of legal and technological solutions. Those solutions can only be found when technicians and jurists work together and are willing and able to look beyond the borders of their own disciplines.”

Gerards: “Strangely, technological development does invite ethicists to the table, but hardly ever jurists. Jurists, in turn, have to gain more technological knowledge, which is why I am very pleased with the minor Law, Innovation and Technology at Utrecht University. It is, of course, often the case that jurists only start regulating a new technology after it has been in use for a while – take the rules concerning drones, for example. I think jurists should collaborate with ethicists and technicians at an earlier stage: what can be made, what are the consequences, and how do we want to deal with that?”