Responsible and transparent AI
The police works proactively to address the ethical and legal aspects of AI. This is done through both technical and non-technical research.
When machine learning models are used in practice, for example at the Netherlands Police, the explanation and motivation of the models' outputs become relevant. My research focuses on explaining text classification models: in particular, on producing explanations that are in line with the model's algorithmic decision process (faithful) and that are understandable to users. These explanations come in the form of annotator rationales: (sub)sentences or words from the input text that explain a model's prediction.
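As a minimal sketch of the idea, and not the method from the publications below, consider a simple linear classifier trained on hypothetical texts: the words whose learned weights contribute most to a prediction reflect the model's own decision process, and so form a faithful word-level rationale.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data; 1 = incident-related, 0 = not.
texts = [
    "the suspect fled the scene",
    "a quiet afternoon in the park",
    "stolen goods were recovered",
    "children played near the fountain",
]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def rationale(text, top_k=3):
    """Return the predicted class and the words that contributed
    most toward it, as a simple word-level rationale."""
    vec = vectorizer.transform([text])
    pred = clf.predict(vec)[0]
    # Per-word contribution: term count times the weight toward the
    # predicted class (weights are flipped when class 0 is predicted).
    weights = clf.coef_[0] if pred == 1 else -clf.coef_[0]
    contributions = vec.toarray()[0] * weights
    vocab = vectorizer.get_feature_names_out()
    top = contributions.argsort()[::-1][:top_k]
    return pred, [vocab[i] for i in top if contributions[i] > 0]

print(rationale("the suspect recovered stolen goods"))
```

Because the rationale is read directly from the model's own weights, it is faithful by construction; more complex models require dedicated extraction techniques.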
Publications:
E. Herrewijnen & D. Craandijk (2023), Towards Meaningful Paragraph Embeddings for Data-Scarce Domains: A Case Study in the Legal Domain. Proceedings of the 6th Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2023).
E. Herrewijnen, D. Nguyen, J. Mense & F. Bex (2021), Machine-annotated Rationales: Faithfully Explaining Text Classification. Proceedings of the Explainable Agency in AI Workshop at the 35th AAAI Conference on Artificial Intelligence.
New techniques in the field of Machine Learning (ML) offer the Dutch Police many opportunities for data-driven analysis and decision-making. However, the complexity of these methods makes it unclear how decisions made with these ML models are derived from the data. For the Police, it is of vital importance that decisions are grounded in findings and supported by facts that strengthen the case. Moreover, these systems should be deployed in a legally and ethically responsible manner. In my PhD, I investigate and develop methods for (interactively) explaining decisions made by artificial intelligence (AI) systems in a human-understandable, useful and truthful manner. To do so, I combine insights from informatics, philosophy, psychology and business administration.
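As a sketch only, and not the approach of the publications below: one human-understandable form of explanation is a counterfactual, the smallest edit to an input that changes the model's decision. The helper below uses hypothetical names and works with any scikit-learn-style classifier and vectorizer, such as the toy pair in the rationale sketch above.

```python
def counterfactual(text, clf, vectorizer, candidates):
    """Return a minimal single-word edit of `text` that flips the
    classifier's prediction, or None if no such edit is found."""
    original = clf.predict(vectorizer.transform([text]))[0]
    words = text.split()
    for i in range(len(words)):
        for substitute in candidates:
            edited = " ".join(words[:i] + [substitute] + words[i + 1:])
            if clf.predict(vectorizer.transform([edited]))[0] != original:
                return edited  # smallest change with a different outcome
    return None

# e.g., reusing clf and vectorizer from the rationale sketch above:
# counterfactual("stolen goods were recovered", clf, vectorizer,
#                ["park", "fountain", "afternoon"])
```

A counterfactual of this kind tells a user not only why a decision was made, but what would have had to differ for another outcome, which is often the more actionable explanation.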
Publications:
M. Robeer, F. Bex, A. Feelders & H. Prakken (2023), Explaining Model Behavior with Global Causal Analysis. Proceedings of the 1st World Conference on eXplainable Artificial Intelligence (xAI 2023).
F. Selten, M. Robeer & S. Grimmelikhuijsen (2023), ‘Just like I thought’: Street-level bureaucrats trust AI recommendations if they confirm their professional judgment. Public Administration Review.
M. Robeer, F. Bex & A. Feelders (2021), Generating Realistic Natural Language Counterfactuals. Findings of the Association for Computational Linguistics: EMNLP 2021.