Can the GDPR prevent the ‘computer says no’?
A legal analysis supported by complexity science
Last May, the EU General Data Protection Regulation (GDPR) became applicable. This new legislation aims to improve the protection of personal data for everyone in the European Union. Two important provisions of the GDPR, Articles 9 and 22, aim to reduce the risks of discrimination and of profiling based on the processing of personal data.
Drawing on complex systems science, a legal scholar from Leiden University and a researcher at the Centre for Complex Systems Studies of Utrecht University teamed up to perform a detailed analysis of the limits of Articles 9 and 22. They concluded that the law in its current form may not effectively prevent discrimination based on ethnicity, religious beliefs, sexual preference, health status and other sensitive traits.
Personal data and discrimination
Non-discrimination law is an important field of European human rights law. It is illegal to treat individuals differently because of their sex, cultural background, ethnicity, religious beliefs or family affiliation, amongst other grounds. Because the processing of personal data is almost ubiquitous, the EU legislator has included non-discrimination provisions in data protection law since 1995. To reduce the risk of discrimination, processing sensitive traits of individuals and automated decision-making (including profiling) based on sensitive traits are generally prohibited, unless specifically allowed.
Complexity and discrimination
When a single tech platform collects many data points measuring individual behaviour, such as movement patterns, buying patterns or interactions with other individuals, sensitive traits can become apparent as emergent properties of that data, without ever being registered directly. A number of high-profile cases where discriminatory effects resulted from seemingly neutral data processing have been presented in recent years, for example in Cathy O’Neil’s “Weapons of Math Destruction”. Even though each individual case can usually be explained after the fact, preventing such risks in the future requires a better understanding of the underlying mechanisms.
Qing Yi Feng and Michiel Rhoen have proposed that, in terms of complexity science, a number of grounds for unlawful discrimination can be qualified as emergent properties of individuals, groups, communities and societies. This proposition explains and predicts how sensitive traits can be discovered in a sufficiently large data set, even when those traits are not recorded in it.
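To make this mechanism concrete, the toy simulation below (not taken from the paper) sketches how a simple classifier trained only on seemingly neutral behavioural signals can recover a sensitive trait that is never part of the collected data. The variable names and effect sizes are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical sensitive trait (e.g. membership of a protected group).
# It only drives the simulation; it is NOT part of the "collected" data.
sensitive = rng.integers(0, 2, size=n)

# Seemingly neutral behavioural signals (movement, purchases, interactions)
# whose distributions differ slightly between the two groups.
movement  = rng.normal(loc=0.8 * sensitive, scale=1.0, size=n)
purchases = rng.normal(loc=0.6 * sensitive, scale=1.0, size=n)
contacts  = rng.normal(loc=0.7 * sensitive, scale=1.0, size=n)

X = np.column_stack([movement, purchases, contacts])  # no sensitive column
X_train, X_test, y_train, y_test = train_test_split(
    X, sensitive, test_size=0.3, random_state=0)

# A plain classifier recovers the trait far better than chance,
# even though the trait itself was never recorded.
clf = LogisticRegression().fit(X_train, y_train)
print(f"accuracy inferring the unrecorded trait: {clf.score(X_test, y_test):.2f}")

In complexity terms, the group membership reappears as a statistical regularity across many individually innocuous data points, which is exactly the kind of emergent property the authors describe.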
Controlling discrimination risk
Although the GDPR is formulated in a technologically neutral way, it does not sufficiently account for the accidental or intentional discovery of sensitive traits that appear as emergent properties of a complex system, captured in large numbers of individual data points that seem unrelated to those traits. Given the nature of emergent properties, Rhoen and Feng suspect that no single legal or technical measure can fully eliminate the risk of discrimination.
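The following sketch (again an illustrative assumption, not the authors' method) suggests why one obvious technical measure, simply excluding the sensitive trait from a decision model, may not be enough: if historical decisions were biased and a neutral feature correlates with the trait, the model can reproduce the disparity without ever seeing the trait.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)                 # unrecorded sensitive trait
proxy = rng.normal(0.7 * group, 1.0, size=n)       # "neutral" feature correlated with the trait
merit = rng.normal(0.0, 1.0, size=n)               # feature genuinely relevant to the decision

# Historical decisions that were (slightly) biased against group 1.
outcome = (merit - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Decision model trained WITHOUT the group label.
X = np.column_stack([proxy, merit])
clf = LogisticRegression().fit(X, outcome)
scores = clf.predict_proba(X)[:, 1]

# Acceptance scores still differ by group, because the proxy lets the
# model partially reconstruct the historical bias.
for g in (0, 1):
    print(f"group {g}: mean acceptance score {scores[group == g].mean():.2f}")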
Still, the economic benefits of optimising commercial and government activity should not be discarded simply because discrimination risks cannot be eliminated completely. Rhoen and Feng therefore propose that decisions to accept or avoid these risks should rest not only on precautionary measures, but also on an improved public discourse about which risks are acceptable and which additional remedies should be available if a risk materialises.
Publication
Michiel Rhoen and Qing Yi Feng, ‘Why the “Computer says no”: illustrating big data’s discrimination risk through complex systems science’, International Data Privacy Law, https://doi.org/10.1093/idpl/ipy005