Facial recognition, one of the most widely debated and contested technologies

Facial recognition applied to a man © iStockphoto.com/metamorworks

We are increasingly surrounded by facial recognition technology. It is developed in a context of international collaboration, trade and governance, but is the legal and normative framework we see today sustainable over time? On 17 November 2021, the Special Interest Group (SIG) ‘Principles by Design’, UGlobe and RENFORCE organised ‘Facial Recognition Technology: Challenges for International Collaboration & Governance’, an interdisciplinary workshop to answer this question.

Challenges for international collaboration & governance

The workshop aimed to place the EU’s political debates about facial recognition technologies in an international perspective. In doing so, it allowed us to consider whether the EU’s approaches to regulating facial recognition may need to be adjusted. The workshop report is available for download.

Prevalence and controversies

Facial recognition technology is one of the most widely debated and contested technologies. An application of computer vision, it refers to systems that collect and analyse images of a person’s face in order to verify or identify that person. Facial recognition has been deployed for security at airports, for registration and security at schools, and to prevent shoplifting.
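
To illustrate the distinction drawn here between verifying and identifying a person, the following minimal Python sketch contrasts face verification (a one-to-one comparison against a single reference image) with face identification (a one-to-many search across an enrolled gallery). It assumes the open-source face_recognition library and hypothetical image files (reference.jpg, probe.jpg, alice.jpg, bob.jpg); it is an illustrative sketch only.

    # Minimal sketch: verification (1:1) vs identification (1:N),
    # assuming the open-source `face_recognition` library is installed.
    import face_recognition

    # Verification: does probe.jpg show the same person as reference.jpg?
    reference = face_recognition.face_encodings(
        face_recognition.load_image_file("reference.jpg"))[0]
    probe = face_recognition.face_encodings(
        face_recognition.load_image_file("probe.jpg"))[0]
    same_person = face_recognition.compare_faces([reference], probe)[0]

    # Identification: which enrolled identity, if any, is closest to the probe?
    gallery = {name: face_recognition.face_encodings(
                   face_recognition.load_image_file(f"{name}.jpg"))[0]
               for name in ("alice", "bob")}  # hypothetical enrolled identities
    distances = face_recognition.face_distance(list(gallery.values()), probe)
    closest = min(zip(gallery, distances), key=lambda pair: pair[1])
    print("verified:", same_person, "| closest identity:", closest)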

More controversial, however, is the use of the technology by the police to prevent and investigate criminal conduct. Various NGOs, researchers, and public institutions have called for a prohibition on the use of facial recognition technology by law enforcement authorities.

UU Legal Research master’s student Claudia Ionita discusses the issues surrounding facial recognition technology with international experts Albert Salah and Ansgar Koene.

The EU’s regulatory responses

The EU, among others, has responded to some of these public concerns. For instance, the EU’s proposed Artificial Intelligence Act of April 2021 pays particular attention to the risks arising from facial recognition technologies. Under the proposal, third-party assessment would be required for AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.

On another front, facial recognition technology was also given particular attention when the EU adopted, in May 2021, the new Dual-Use Regulation, which aims to strengthen controls on the cross-border transfer of digital surveillance technologies.

In October 2021, the European Parliament called for ‘a moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification’ unless ‘strictly used for the purpose of identification of victims of crime’ until certain criteria are fulfilled—including the condition that ‘the technical standards can be considered fully fundamental rights compliant’ (para. 27).

International collaboration on AI across different regulatory and political contexts

Some of these celebrated regulatory responses, however, add a new layer of complexity to international collaboration aimed at fostering technological development. On the one hand, ‘openness’ in research and innovation remains ‘a cornerstone’ of the EU’s ‘cooperation with the rest of the world’, as noted by the European Commission’s Executive Vice-President for A Europe Fit for the Digital Age.

On the other hand, according to the Commission, such ‘openness’ in international cooperation should be based upon ‘rules and values’, as well as ‘reciprocity and a level playing field’. What remains uncertain, however, is the precise content of these ‘rules and values’ and, moreover, how the idea of reciprocity interacts with rule- and value-based cooperation.

During the workshop, experts from different disciplines (including computer science, political science, higher education studies, and law) discussed ethical and regulatory challenges associated with international cooperation in the development of AI, by using facial recognition as a focal point of discussion.

International collaboration in the field of AI would have to address the varied acceptability or permissibility of the technology across different communities. Governments differ in how they problematise the development and use of facial recognition technology. Such variance across states, jurisdictions, and communities creates critical challenges for international technological collaboration, especially in the absence of effective international normative frameworks.

Organisation

The event was supported by Utrecht University and Gerda Henkel Stiftung and organised by Machiko Kanetake, Lucky Belder, Karin van Es, and Arthur Gwagwa, as part of the following UU research groups:

  • the research platform ‘Disrupting Technological Innovation? Towards an Ethical and Legal Framework’ within the Utrecht Centre for Global Challenges;
  • the Special Interest Group ‘Principles by Design: Towards Good Data Practice’ within Governing the Digital Society;
  • the Digital building block of the Utrecht Centre for Regulation and Enforcement in Europe (RENFORCE).

Read the workshop report
Watch the full interview with Ansgar Koene
Watch the full interview with Albert Salah