Artificial Intelligence

Photo of Sven Nyholm and Dong Nguyen

Even though we cannot be sure what tomorrow holds, some things seem almost inevitable, like the rise of artificial intelligence (AI). Still, that does not mean we should just sit around and wait for the world to change around us. Utrecht University researchers Dong Nguyen (computer science) and Sven Nyholm (philosophy) are working to shape the future of AI to benefit society in the best possible way. We asked them about their insights into the past, present and future of artificial intelligence.

An interdisciplinary approach is absolutely essential for AI

Photo of Dong Nguyen and Sven Nyholm

Sven Nyholm: ‘It’s difficult to make good predictions about the future of artificial intelligence. Even among experts, there are a lot of different ideas about what will be possible, when and how. An interesting consideration is that the idea of robots doesn’t actually originate from science: the word “robot” first appeared in a theatre play in 1921. On the other hand, the term “artificial intelligence” was coined by scientists in the 1950s. Accordingly, a lot of ideas that people have about robots and AI are a mix between science and fiction.’

Dong Nguyen: ‘I agree that it’s very difficult to make good predictions about the future of AI. A more immediate problem is that AI systems with good intentions may have unforeseen consequences. Fortunately, researchers and companies are taking that concern very seriously. An important new research direction is explainable AI, meaning that decisions made by AI systems should be understandable for humans, instead of being a “black box”.’

A lot of ideas that people have about robots and AI are a mix between science and fiction

Nyholm: ‘One example of such an unforeseen consequence is an incident in 1983, when a Soviet satellite warning system falsely indicated that the US had launched a missile attack. Only the fact that an officer distrusted the system and didn’t raise the alarm kept the Soviet Union from starting a nuclear war.’

Nguyen: ‘I think an important matter in AI right now is educating the users. My main take-home message would be: don’t be overly distrustful of artificial intelligence, but be aware of its shortcomings.’

Nyholm: ‘As a sidenote, there’s another unforeseen danger of AI that I don’t hear a lot of people talking about: its carbon footprint. AI systems require so much computation that the environmental consequences are also considerable.’

Photo of Sven Nyholm and Dong Nguyen

Getting into an emerging research field

Nguyen: ‘I studied technical computer science, but with a minor in psychology and courses on language technology, so in a way, I assembled my own AI programme. I’ve always been interested in making computers understand language, especially everything below the surface. For example: how do you make computers understand social context?’

Nyholm: ‘After studying philosophy, I gradually became more interested in applied ethics. In 2014, I wrote one of the earliest articles on the ethics of self-driving cars, based on a famous thought experiment called the trolley problem. Say a car is about to hit a group of people, but by diverting the car, you can save the group and kill only one person instead. Are you a bad person because you deliberately took an action that killed someone, or are you a good person because you saved a group of people? It was still hypothetical when I wrote it, but the year after that, the first accidents with self-driving cars started to happen. All of a sudden, everyone was interested in the ethics of AI, and I was one of the experts. One of the fun things about working in AI is that people are endlessly interested in it, much more than in traditional philosophy.’

Nguyen: ‘And it is such a quickly changing field. Even as a researcher, I feel like I’m always catching up. That’s not a bad thing at all. It just means it’s never boring and I’m always learning new things.’

Not just a technical problem

Nyholm: ‘It seems like, in recent years, technical people have become more and more open to the ethical aspects of AI. They might not always like it, but there’s a general consensus that it’s important.’

Nguyen: ‘We’re very aware of the impact of AI, and that makes it important to think about the consequences. For example: should AI reflect the world as it is or as it should be? If you do an image search on “CEO”, you mainly get pictures of white men. Should that be different, even though this reflects the situation as it currently is?’

Technical people have become more and more open to the ethical aspects of AI

Nyholm: ‘Even if you do agree that AI systems should paint a fairer picture of the world, you’d still have to agree on what fairness is, and that differs between countries and cultures. A few years ago, psychologists started studying how people actually think about different variations of the trolley problem. In some cultures, older people are prioritised over young people, and in other places, it’s the other way around. You see similar cultural differences in the way the coronavirus pandemic is handled: are the coronavirus measures intended to give more safety to risk groups or to give more freedom to younger people?’

Nguyen: ‘This means it’s very difficult to make AI systems that are usable worldwide. Mainly, I think it’s important to be clear about the assumptions on which you build an AI system. I don’t believe in neutral systems. This is one of the reasons why an interdisciplinary approach is absolutely essential for AI. Any time I’m working on a research problem, I quickly realise it’s not just a technical problem. You have to take into account factors like privacy, ethics, data collection, social context and so on.’

Photo of Sven Nyholm and Dong Nguyen

Human or super-human?

Nyholm: ‘Another matter on which people have different opinions is whether AI systems should imitate human behaviour. People aren’t exactly great drivers, so should self-driving cars make the same mistakes we do? And if all human drivers on a highway are driving over the speed limit, should a self-driving car follow suit and also speed up, even if that means it’s breaking the law?’

Nguyen: ‘Human aspects like emotions are very difficult to put in an AI system, especially if you are only working with written text. One very recent change in my research field is that we also look at audio and video to see what useful information we can get that way. Of course, audio and video analysis are different research fields in themselves. That’s another example of why it’s unavoidable that AI is such an interdisciplinary field.’

Nyholm: ‘For humans, it’s also hard to really gauge emotion from written text. Even online teaching already makes it harder to feel the mood in the room. Emotions and moods are such complex phenomena. Can you really feel angry if you can’t feel your heart racing? I believe that really feeling emotions requires having a human or animal-like body, and I don’t think we’ll get there within our lifetimes. But again, there are very different opinions and predictions about that.’

Human aspects like emotions are very difficult to put in an AI system

Creating future-proof students

Nguyen: ‘The Bachelor’s and Master’s programmes in AI at Utrecht University are interdisciplinary at their core, and that’s quite unique. That also means the student population is very diverse, which is incredibly inspiring. We continuously adapt the study programme to fit new developments, and it’s very interactive.’

Nyholm: ‘In a quickly evolving field like artificial intelligence, the million-dollar question is how you create future-proof students. The newest developments when they start studying will already be outdated by the time they finish their studies. I think the most important thing is to teach students to take an interdisciplinary approach. In Utrecht, we’re taking an incredibly broad stance on AI: from the technical to the ethical and everything in between.’

Nguyen: ‘I think it’s important to teach my students to read scientific literature and to be critical thinkers. I’m not just teaching knowledge, I’m teaching them to learn. A lot of AI research is about creating self-learning systems, and in a sense, AI education is about creating self-learning humans.’

The Bachelor’s and Master’s programmes in AI at Utrecht University are interdisciplinary at their core, and that’s quite unique

Dong Nguyen is an assistant professor in computer science. She conducts research into natural language processing: the processing and analysis of natural language with computers.

Sven Nyholm is an assistant professor in philosophical ethics. His main areas of research are applied ethics (especially the ethics of technology), ethical theory and the history of ethics.

Artificial Intelligence in Utrecht

The research focus area Human-centred Artificial Intelligence brings together Utrecht University’s various activities in the field of AI. A sizeable group of AI researchers, including Sven and Dong, is engaged in attempts to understand, reproduce and even improve human intelligence. This can only be achieved through interdisciplinary cooperation. The main drivers of this research focus area are Jan Broersen, Professor of Logical Methods in Artificial Intelligence, and Mehdi Dastani, Professor of Intelligent Systems. Illuster asked them to respond to the interview with Sven and Dong.

Jan Broersen and Mehdi Dastani:

‘Artificial Intelligence (AI) is a rapidly evolving scientific discipline and a disruptive form of technology with an immense and still largely uncharted impact on society. Artificial Intelligence is set to impact our economy, scientific community and many aspects of our daily lives. It has become an irreplaceable part of life and forms the basis for numerous innovations that help us meet the challenges of our time and contribute to the further progress and prosperity of our society. 

UU’s Human-centred Artificial Intelligence focus area aims to promote cooperation between researchers across the boundaries of traditional disciplines. The notion of human-centred AI has been gaining a lot of national and international attention and recognition lately. The development of AI technologies and innovations and the exploration of the relevant legal, social and ethical aspects are integral to human-centred AI. As chairs of this focus area, we greatly appreciate the topics and views raised in this interview with two of our leading researchers. We feel the interview is a great example of the sort of interdisciplinary debate we hope to see more of in the near future.’