Jan Broersen, ERC Consolidator Grant 2013

Dr. ir. Jan Broersen

Driving a car yourself could soon be hopelessly old-fashioned. The automotive industry is busy testing driverless cars. But what happens if a driverless car runs over a child? Who is responsible? How can you teach a self-driving robot to consider all the options and make the right choices in everyday traffic? These are the questions that occupy engineer and mathematician Jan Broersen. His ERC research focuses on the implementation of a moral compass in intelligent systems. “I think it’s important that we carry out this kind of research now, while all kinds of robots are being developed, and not wait till afterwards.”

Broersen developed his research design in the Faculty of Information and Computing Sciences, but is transferring to the Faculty of Humanities. “Before you can start to build systems, you need to know more about underlying motives and assessment frameworks,” explains Broersen. “My research team combines philosophy, law and knowledge of computer systems.” Law teaches the research team about various forms of responsibility and the associated liability issues. “Will robots be treated as legal entities in the future? Would that be a good idea, or a really bad one?” Broersen asks himself, thinking out loud. “Philosophy gives you a grip on assessment frameworks and logic. I am also interested in collectivity. This technology is all about cooperation between man and machine, and often also between different machines. I would like to investigate what it means if several people and machines are each to a certain degree responsible, and how you can capture that in the binary 0-1 terms of logic. As a computer scientist I can examine how you can implement this knowledge in an automated system.”
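To give a flavour of what such a formal capture could look like: Broersen’s field uses so-called stit (“seeing to it that”) logics to reason about agency. The fragment below is only an illustrative sketch in that style, with notation invented for this article, not the formalism his project actually adopts.

```latex
% Illustrative sketch only; notation invented for this article.
% [G stit]phi reads: the group G of people and machines jointly
% "sees to it that" the outcome phi holds.
\[
  \mathrm{Resp}(a,\varphi) \,\longleftarrow\,
    \exists G \bigl( a \in G
      \wedge [G\,\mathsf{stit}]\,\varphi
      \wedge \Diamond \neg [G\,\mathsf{stit}]\,\varphi \bigr)
\]
% Read: agent a shares responsibility for phi if a belongs to some
% group G that brought phi about (second conjunct) while it was still
% possible for G not to bring it about (third conjunct).
```

The last conjunct is the “could have done otherwise” condition: on this reading, responsibility only attaches when an alternative course of action was genuinely open to the group.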

Research freedom

This research is not only theoretical and philosophical; Broersen is also a builder. “I started out as a country boy in the far north of Noord-Holland who loved tinkering with electronics. That brought me first to electrical engineering at TU Delft, but I soon progressed to mathematics and computer science. After three years teaching mathematics at secondary school, I took my PhD at VU University Amsterdam, where I also discovered Intelligent Systems. Later on, co-supervisor John-Jules Meyer persuaded me to come to Utrecht. During my studies I was endlessly fascinated by the philosophical questions, and in particular the logic, of philosopher of science Joop Doorman. I am really happy that this is now part of my daily work. As far as that goes, the ERC has given me huge research freedom.”

The bar has been set high: the aim is to produce a prototype of an automated responsibility system. “I just wonder how far I will get in five years,” says Broersen, putting things into perspective. “But you need to fix a point on the horizon to aim for. In any case, we won’t just keep talking about it; we will be thinking about the formal language needed to capture the various elements of responsibility.”

Today, all kinds of systems are being designed at a rapid pace, including self-learning systems and applied systems such as healthcare robots and early-warning systems. And that’s not all. “Did you know that most of today’s financial transactions take place via computer programs?” asks Broersen. “Now that we are handing over part of our responsibilities, we need to ask ourselves how we can get robots to make moral decisions, in which, for example, ethics and social considerations play a role.”

Stupid computers

Broersen sees real potential in this technology. “I distance myself totally from doomsayers who warn of the dangers of Artificial Intelligence (AI). They have a real tendency to exaggerate and make unfounded, almost apocalyptic predictions. They know nothing of AI in practice. For example, they warn about super-intelligence, while the real problem is that we tend to delegate responsibilities to machines that just aren’t smart enough. Of course AI systems are getting faster and faster, but that doesn’t mean they are getting smarter at the same rate. In fact, the opposite may be true. In such a system, all moral considerations are in principle equally weighted. So we need to do something about that.”
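The point about equal weighting can be made concrete with a toy sketch. Everything in the snippet below, the considerations, the weights and the options, is invented purely for illustration; it shows only the bare idea of letting moral considerations carry unequal weights rather than counting equally.

```python
# Hypothetical illustration: moral considerations with explicit,
# unequal weights, instead of every consideration counting equally.
# All names, weights and scores here are invented for this example.

# Each consideration has a weight; weights sum to 1.0.
CONSIDERATIONS = {
    "avoid_harm_to_people": 0.6,  # dominant consideration
    "obey_traffic_rules":   0.3,
    "protect_property":     0.1,
}

def score(option_scores: dict) -> float:
    """Weighted sum of the moral considerations for one option."""
    return sum(CONSIDERATIONS[name] * option_scores[name]
               for name in CONSIDERATIONS)

# Two options a driverless car might weigh in an unavoidable incident.
# Each consideration rates an option from 0 (bad) to 1 (good).
options = {
    "brake_hard":        {"avoid_harm_to_people": 1.0,
                          "obey_traffic_rules":   0.5,
                          "protect_property":     0.2},
    "swerve_into_fence": {"avoid_harm_to_people": 0.9,
                          "obey_traffic_rules":   0.2,
                          "protect_property":     0.0},
}

best = max(options, key=lambda name: score(options[name]))
print(best)  # -> brake_hard (0.77 versus 0.60)
```

With equal weights the two options would sit much closer together; making the weights explicit is one simple way of encoding that some considerations, such as avoiding harm to people, should dominate others.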

Author: Youetta Visser