Workshop: Opportunities and Limitations of Artificial Moral Agents
This workshop, held on 5 and 6 March 2026 and organised by the Special Interest Group (SIG) AI, Ethics and Law, brings together speakers who will help attendees understand the range of possible positions vis-à-vis the possibility and implementation of machine ethics.
The workshop will feature both speakers who discuss philosophical questions around the foundations of machine ethics and morally aligned AI, and speakers who discuss formal approaches towards machine ethics.
The philosophical talks will address questions such as: Can machines be responsive to moral reasons? Is machine ethics a feasible way to create morally aligned AI systems? Can machines provide moral testimony?
The formal talks will discuss how and to what extent tools from logic and reinforcement learning can be used to create morally compliant artificial agents.
AI systems carry out increasingly many tasks with moral import, impacting human safety, autonomy and dignity in domains such as traffic, law enforcement and healthcare. Humans operating in these domains are expected to exercise their moral competence to balance different interests in a morally acceptable way. Could AI systems replicate or mimic such moral competence? Or should they always be confined to an assisting or advisory role when moral decisions are at stake?
Some researchers argue that moral decision-making can to a large degree be automated, and have taken steps towards implementing so-called ‘artificial moral agents’ under the heading of ‘machine ethics’.
Other researchers have argued that machine ethics is a dead end because some features of moral decision-making can never be replicated by AI. Of course, there are many intermediate positions, e.g. that AI systems can replicate some, but not all, moral decisions.
Speakers
- Henry Prakken, Utrecht University: (AI &) Law and (Machine) Ethics
- Eva Schmidt, TU Dortmund: The Reasons of AI Systems
- Emily Sullivan, University of Edinburgh: Can LLMs provide moral testimony?
- Jan Broersen, Utrecht University: On the biases LLMs cannot have
- Aleks Knoks, University of Luxembourg: Metanormative Theory for RL-Based Moral Agents
- Ibo van de Poel, TU Delft: AI, values and alignment
You can download the programme here.
The workshop precedes the PhD defence of Joris Graff on Friday 6 March 2026 at 14.15 hrs at the University Hall, Utrecht University, on Decision-makers without Reasons. On the Moral and Normative Capacities of Artificial Agents.
Registration
You can register for the workshop by filling in this online form.
- Location: Utrecht, Parnassos and Drift 27