The Responsible AI group researches the design and development of AI techniques, tools, and methodologies that enable computational systems to comply with legal, ethical, and social requirements while prioritizing the humans who will benefit from them.
Our research areas include:
- Trustworthy AI
  - Privacy, transparency, and accountability in interactions and decision-making
  - Design methodologies and metrics for trustworthy AI systems (e.g., privacy-by-design and ethics-by-design)
  - Responsible autonomy and consent management
- AI and law
  - Legal knowledge-based systems
  - Legal argumentation
  - Predictive justice
  - Legal applications of NLP
  - Norm-governed autonomous systems
  - AI for law enforcement
- Explainable AI (XAI)
  - XAI and legal requirements for explanation
  - XAI for law enforcement
  - Argumentative XAI
  - Explainable-by-design methodologies
- Computational argumentation
  - Symbolic approaches
  - Integration with NLP and machine learning
  - Applications in law, law enforcement, and ethics
- National Police Lab AI: The National Police Lab AI (NPAI) aims to develop state-of-the-art AI techniques to improve security in the Netherlands in a socially, legally, and ethically responsible way. The research of the Police Lab in Utrecht focuses on intelligent interactive dialogues, reasoning with (legal) arguments and crime scenarios, and integrating symbolic and sub-symbolic AI techniques.
- Hybrid Intelligence Project: Funded by a 10-year Zwaartekracht grant from the Dutch Ministry of Education, Culture and Science, the project is a collaboration between seven Dutch universities and aims to realize systems where machine intelligence augments human intellect and capabilities instead of replacing them. Our group is active in the Responsibility and Collaboration research lines.
- The Algorithmic Society (AlgoSoc): Also funded by a 10-year Zwaartekracht grant, AlgoSoc asks how we can realize public values in an algorithmic society where AI is commonplace. Our group is active in the Justice sector of the AlgoSoc project, where we look at the effects of AI on the rule of law.
- AI4Intelligence: This is a 5-year NWO-funded KIC project focused on research with a clear societal impact. Within a consortium that includes four universities, the Netherlands Police, and the Netherlands Forensic Institute (among others), we will investigate the technical, legal, and organizational aspects of turning raw, multimodal data into trustworthy evidence that can be used in court.