Disrupting Technological Innovation? Towards an Ethical and Legal Framework

Technological innovation brings both benefits and harms to our societies.

Take, for instance, encryption technology. It is indispensable for securing banking transactions, private communications, and personal data in the cloud. At the same time, it makes it harder for a state’s police agencies to detect and prevent fraud, organized crime, and terrorist attacks. Encryption is part of a much larger realm of information technology that has enriched our forms of communication but has also been widely contested. Consider giant IT innovators such as Google, Facebook, Apple, and Twitter. They are by no means neutral platforms: they select the information users see, and their complex algorithms may show users only what they already like. This invisible information filter can hinder the free flow of information and polarize people’s preferences and political opinions.

What we need, therefore, is an ethical and normative framework for technological innovation.

Information technology is one example of what Clayton Christensen termed “disruptive technology” in 1997. Technological innovations such as artificial intelligence, robotics, 3D printing, blockchain, and driverless cars deeply disrupt existing markets and consumer behavior. Numerous studies have mapped the economic and social dynamics of disruptive technologies.

Important questions

The framework should support innovation but may also limit it, and thereby even disrupt it, in order to safeguard the fundamental values that protect the public interest. To establish this ethical and normative framework, we have to answer a series of important questions.

  •     Who is responsible when the technology fails?
  •     Where should regulation step in? Can we rely on shared principles and self-regulation, or should algorithms developed by industry be subject to supervisory control?
  •     What international norms should be applicable to remote computer searches by law-enforcement agencies, and how do these norms relate to sovereignty concerns?
  •     What kind of robotic technology should be allowed in applications ranging from the care of the elderly and disabled to law enforcement and warfare?
  •     What would be the concrete relevance of human dignity and human rights in regulating the use of innovative technology?
Past events

12 December 2018, Utrecht.

Dialogue: Disrupting Technologies of Trust: Ethical and Regulatory Challenges of the Platform Economy.

This dialogue on technological innovation addressed the complex interaction of trust, regulation, and data technology in the context of the platform economy, exemplified by Airbnb, Uber, and ‘reputational technologies’.

Watch the main speakers in these videos:

Disrupting Technologies of Trust: overview
Regulating the Platform Economy – What are the options?
Trust and Reputation in the Platform Society
Democracy and the Platform Economy

Friday 2 November 2018 (Eindhoven)

Debate  –  “ROBOGOV: Robots, Rights and Norms”
The event took place as part of the “Robot Love: Eindhoven 2018” exhibition. How should we interact with robots? Are there ethical and legal limits? Should human beings acknowledge the rights of robots? What does the “governance” of robotic technologies look like?

Monday 28 May 2018 (12.00-13.00, Utrecht)

Brownbag Lunch Seminar  –  “Robots and Rights” 
Should a robot have “rights”? In this seminar, Utrecht University researchers discussed legal issues surrounding automated robotic technologies, robots as legal persons, and the human rights implications of algorithms.

Follow-up blogpost: “Legal Status of Robots: The RENFORCE/UGlobe Seminar and Why I Decided to Sign the Open Letter”

Monday 9 April 2018 (12.00-13.00, Utrecht)

Brownbag Lunch Seminar  –  “Cambridge Analytica Fallout” 
In response to the Cambridge Analytica saga involving Facebook users’ data, Utrecht University researchers discussed various regulatory issues, including the implications for information filter bubbles, data protection, unfair trading practices, and algorithmic bias.
Follow-up blogpost:  “Cambridge Analytica and Facebook Fallout: The Renforce/UGlobe Seminar”

Thursday 15 March 2018 (15.15-17.00, Utrecht)

UGlobe Dialogue  –  “De nieuwe wet op de inlichtingen- en veiligheidsdiensten: vooruitgang of teloorgang?” (“The new Intelligence and Security Services Act: progress or decline?”)
This first UGlobe Dialogue on disruptive technology was organized in the week before the referendum in the Netherlands on a new Dutch Law on the Intelligence and Security Services (the Wet op de inlichtingen- en veiligheidsdiensten, Wiv).
Programs & speakers: Bob de Graaff, Nico van Eijk, Beatrice de Graaf, Peter Koop, Mireille Hagens 
Follow-up blogpost: “Disruptive Technologies – A UGlobe Dialogue on Bulk Interception of Communications” 

Four experts, from different disciplines and perspectives, gave their views on the new Wiv as a new step in the regulation of technology that enables large-scale surveillance of citizens’ communications.

About Disrupting Technological Innovation

We are the interdisciplinary platform within the Utrecht Centre for Global Challenges (UGlobe). Our platform strives to develop an ethical and normative framework for regulating technological innovation. The project focuses on information technology, its application to government and commercial decision-making, the design of algorithms, and the introduction of robotics and artificial intelligence.
As part of our activities, we organize the UGlobe Dialogue Series on “Technological Innovation, Ethics and Law” (see also this blog). The Dialogue Series aims to strengthen ties between UGlobe researchers and researchers at other universities working on innovation, ethics, and law. The Series also strives to contribute to the debate with the wider public.

Project coordinators
Assistant Professor
Assistant Professor