PhD defence Joris Graff: Can artificial intelligence make moral judgements?
Artificial intelligence (AI) is increasingly being used to make decisions in situations where moral considerations are at stake: in assessing job applications, for instance, as well as in fraud detection and healthcare. But can AI systems actually make moral judgements? Joris Graff examined this question in his doctoral thesis ‘Decision-makers without Reasons: On the Moral and Normative Capacities of Artificial Agents’, which he will defend on Friday 6 March.
Moral judgements
Graff questions whether AI systems, as currently being developed, are capable of carefully weighing up values, norms, and interests against one another and making moral decisions.
Drawing on the philosophy of Ludwig Wittgenstein, he argues that humans can make moral choices because they are part of a moral community, in which they learn from each other and hold each other responsible. AI systems are not fully part of that moral community. This means, Graff argues, that we should be cautious about outsourcing moral decisions to AI.
Machine ethics
That said, AI can be useful in situations where moral choices are involved, for example because AI systems can process information more quickly. Researchers in the field of machine ethics are therefore exploring how AI systems can be designed to mimic human moral trade-offs as closely as possible.
Graff works in the field of machine ethics and has developed a framework to help AI systems weigh up moral choices. His framework is based on fixed moral rules, but the weight each rule carries in a given situation is not fixed in advance: that weighting can be learned from human input.
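To make the idea concrete, here is a purely illustrative sketch of such a scheme, not Graff's actual framework: a fixed set of moral rules scores each option, and the relative weight of each rule is learned from examples of human choices. The rule names, option features, and the perceptron-style update are all assumptions made for the example.

```python
# Illustrative only: fixed moral rules with weights learned from human input.
# Rule names and features are hypothetical, not taken from the dissertation.

# Each rule maps an option (a dict of features in [0, 1]) to a score in [0, 1].
RULES = {
    "minimise_harm": lambda option: 1.0 - option["harm"],
    "fairness": lambda option: option["fairness"],
    "transparency": lambda option: option["transparency"],
}

def evaluate(option, weights):
    """Weighted sum of rule scores for one option."""
    return sum(weights[name] * rule(option) for name, rule in RULES.items())

def choose(options, weights):
    """Pick the option with the highest weighted moral score."""
    return max(options, key=lambda o: evaluate(o, weights))

def learn_weights(examples, lr=0.1, epochs=50):
    """Perceptron-style updates: nudge the weights so that the
    human-preferred option in each example outscores the rejected one."""
    weights = {name: 1.0 for name in RULES}
    for _ in range(epochs):
        for preferred, rejected in examples:
            if evaluate(preferred, weights) <= evaluate(rejected, weights):
                for name, rule in RULES.items():
                    weights[name] += lr * (rule(preferred) - rule(rejected))
    return weights
```

The rules themselves stay fixed; only their relative importance shifts as the system observes which options humans actually prefer, which mirrors the division of labour described above.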
AI and autonomy
In the final part of his research, Graff underlines the importance of ensuring that algorithms support rather than undermine human autonomy. AI systems are morally relevant not only when they make decisions themselves, he argues, but also when they influence human decisions.
Ideally, algorithms should enable us to make more autonomous decisions, based on reasons we ourselves endorse. In practice, this often falls short: many algorithms lack transparency and influence us without us noticing.
- Start date and time
- End date and time
- Location
- Hybrid: online and at the Utrecht University Hall
- PhD candidate
- J.J. Graff
- Dissertation
- Decision-makers without Reasons: On the Moral and Normative Capacities of Artificial Agents
- PhD supervisor(s)
- Professor J.M. Broersen
- Co-supervisor(s)
- Dr D. Klein
- More information
- Full text via Utrecht University Repository