Prof. dr. F.J. (Floris) Bex

Buys Ballotgebouw
Princetonplein 5
Room 523
3584 CC Utrecht

Associate Professor
Responsible AI
+31 30 253 3193
f.j.bex@uu.nl
Projects
Project
Formal explanations with computational argumentation 01.10.2022 to 30.09.2027
General project description

If we want to apply artificial intelligence (AI) in high-risk domains, such as health care and the legal system, the applications need to be reliable and trustworthy. Additionally, they need to comply with an increasing range of ethical and legal guidelines. The fast-growing research area of explainable artificial intelligence (XAI) aims to develop "good explanations" of AI systems and the decisions derived with them.

In this project, we study explanations for and with computational argumentation. The main idea of Dung's abstract argumentation, designed to model non-monotonic reasoning, is that an argument is only warranted if it is defended against its counterarguments. We investigate how the benefits of explicit knowledge and clear reasoning mechanisms can be exploited for explanations; determine what makes a good argumentative explanation; and apply argumentation-based approaches to explain decisions derived with other AI approaches.
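As a minimal illustration of the "defended against counterarguments" idea, the sketch below computes the grounded extension of a small abstract argumentation framework; the arguments, attacks and code are hypothetical and not taken from the project.

```python
# Hypothetical sketch (not project code): compute the grounded extension of a
# Dung-style abstract argumentation framework by iterating the characteristic
# function F(S) = {a | S defends a against all of a's attackers}.

def grounded_extension(arguments, attacks):
    def defended_by(s, a):
        attackers = {x for (x, y) in attacks if y == a}
        # a is acceptable w.r.t. s if every attacker of a is itself attacked by s
        return all(any((z, x) in attacks for z in s) for x in attackers)

    extension = set()
    while True:
        next_extension = {a for a in arguments if defended_by(extension, a)}
        if next_extension == extension:
            return extension          # least fixed point reached
        extension = next_extension

# A attacks B, B attacks C: A is unattacked, and A defends C against B, so B is out.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))  # {'A', 'C'}
```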

Role
Researcher
Funding
Utrecht University
Completed Projects
Project
PROBAS: Probabilistic decision-making based on Arguments and Scenarios 01.12.2016 to 01.12.2020
General project description

Bayesian networks (BNs) provide decision support in complex investigative domains where uncertainty plays a role, such as medicine, forensics and risk assessment. Yet, BNs are only sparsely used in practice. In data-poor domains, they have to be manually constructed, which is too time-consuming to support pressing decisions. Furthermore, few domain experts have the mathematical background to build a BN, a graph representing dependencies among variables with probability distributions over these variables. So despite the increased analytical power a BN could bring with respect to, for example, evidence aggregation or sensitivity analysis, many experts still use more qualitative concepts such as scenarios (stories, cases, timelines) and arguments (evidence graphs, ordered lists), which convey verbally expressed uncertainty ("strong evidence", "plausible scenarios").
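As a purely hypothetical illustration of the kind of evidence aggregation a BN makes explicit (the hypothesis, numbers and independence assumptions below are invented, not taken from the project), Bayes' rule in odds form combines two pieces of evidence of different strength:

```python
# Hypothetical example: combine a prior with two conditionally independent
# pieces of evidence via Bayes' rule in odds form.

prior_odds = 0.01 / 0.99      # P(H) = 0.01, so prior odds of roughly 1:99
lr_strong  = 1000             # "strong evidence": P(E1|H) / P(E1|not H)
lr_weak    = 5                # weaker evidence:   P(E2|H) / P(E2|not H)

posterior_odds = prior_odds * lr_strong * lr_weak
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(H | E1, E2) = {posterior:.3f}")   # about 0.981
```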

If BNs are to be used in actual investigations, we need software tools and interfaces for BN construction that are engineered into the heart of the decision-making process. These tools should be based on familiar, more linguistically-oriented concepts such as arguments and stories, and complemented by algorithms intended to speed up and facilitate the BN-building process.

Role
Researcher
Funding
Utrecht University
Project
Intelligent Reporting Cybercrime 01.05.2016 to 01.09.2020
General project description

In the "Intelligent Reporting" project, we aim, together with the Dutch Police, to provide an artificial intelligence framework for automatically processing online cybercrime reports submitted by the public.

The framework uses a modular approach: every part of the reporting process is handled by an individual module, which facilitates incremental implementation and connections to legacy systems. Data is only accessible to the people interfacing with a specific module, and only the necessary information is shared between modules, which safeguards privacy.
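A schematic sketch of this modular set-up (the module names, fields and code below are hypothetical, not the project's actual architecture): each step is a separate function and only the fields the next step needs are passed on.

```python
from dataclasses import dataclass

@dataclass
class Report:                       # raw citizen report, seen only by intake
    reporter_name: str
    reporter_email: str
    free_text: str

def intake_module(report: Report) -> dict:
    # pass on only the free text; personal details stay within this module
    return {"text": report.free_text}

def classification_module(data: dict) -> dict:
    # placeholder crime-type labelling; never sees personal data
    label = "phishing" if "bank" in data["text"].lower() else "other"
    return {**data, "label": label}

report = Report("J. Jansen", "j.jansen@example.org",
                "A fake bank e-mail asked for my login details")
print(classification_module(intake_module(report)))   # {'text': ..., 'label': 'phishing'}
```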

We use a hybrid of machine learning techniques for recognising patterns in data and more transparent, understandable knowledge-based (argumentation) models. This allows us to use recent insights from AI in a responsible and explainable way.
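The sketch below illustrates the hybrid idea under invented assumptions: a stand-in learned classifier proposes a label with a confidence score, and a small transparent rule layer decides how to handle the report and can state which rule fired. None of the names or thresholds come from the project.

```python
def ml_classifier(text: str):
    # stand-in for a trained pattern-recognition model: returns (label, confidence)
    return "phishing", 0.87

# transparent knowledge layer on top of the learned model's output
RULES = [
    ("forward_to_investigator", lambda label, conf: label == "phishing" and conf > 0.8),
    ("ask_reporter_for_details", lambda label, conf: conf <= 0.8),
]

def decide(text: str):
    label, conf = ml_classifier(text)
    for conclusion, condition in RULES:
        if condition(label, conf):
            return conclusion, f"because label={label} with confidence {conf}"
    return "no_decision", "no rule applied"

print(decide("A fake bank e-mail asked for my login details"))
```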

Role
Project Leader
Funding
External funding: Nationale Politie innovation subsidy
Project members UU
External project members
  • Daphne Odekerken