Research Themes

Knowledge Representation and Reasoning

Our knowledge representation and reasoning research develops foundational and systematic methodologies for integrating model-driven and data-driven approaches in AI systems. The development of safe and responsible artificial intelligence requires logics and formal tools that support the specification, design, implementation, and verification of such systems. Building such AI systems requires the ability to structure, and reason about, the problem domain. To this end, we study the construction, optimisation, verification, validation, and explanation of logical, statistical, and probabilistic models, and we investigate algorithms that learn and reason with the structural information of a problem domain. We actively investigate the following topics:

  • Logics for artificial intelligence
  • Probabilistic modelling and probabilistic reasoning
  • Causal reasoning 
  • Verification, analysis, and explanation of AI systems
  • Argumentation
  • Evolutionary computing
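As a minimal illustration of probabilistic reasoning, the sketch below applies Bayes' rule to update a prior belief with new evidence. It is a generic textbook example (a diagnostic test with hypothetical sensitivity and false-positive rates), not code from one of our systems:

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses via Bayes' rule: multiply prior by
    likelihood of the observed evidence, then renormalise."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical diagnostic setting: 1% base rate, a test with
# 90% sensitivity and a 5% false-positive rate.
prior = {"disease": 0.01, "healthy": 0.99}
likelihood = {"disease": 0.9, "healthy": 0.05}  # P(positive test | hypothesis)
posterior = bayes_update(prior, likelihood)
```

Even with a positive test, the posterior probability of disease stays below 20% here, which illustrates why base rates matter in probabilistic reasoning.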

Reinforcement Learning

Our reinforcement learning research aims to balance theoretical work and practical applications of single-agent and multiagent reinforcement learning systems. Adapting behaviour through learning is a key capability of AI systems and raises complex and fundamental research questions. Reinforcement learning plays a crucial role in the development of interactive and adaptive AI systems for uncertain and dynamic environments. Many issues of practical relevance are tied to theoretical ones, such as the efficiency and safety of reinforcement learning algorithms, communication between learning agents, and the use of prior knowledge or logical constraints to shape rewards. We actively study the following topics:

  • Multiagent reinforcement learning
  • Safe reinforcement learning 
  • Human-in-the-loop reinforcement learning
  • Reward shaping
  • Applications of reinforcement learning
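To give a flavour of the kind of learning involved, the sketch below runs tabular Q-learning on a toy chain environment (a standard textbook exercise, with hypothetical parameter choices, not one of our research systems). An agent repeatedly interacts with the environment and updates value estimates from the rewards it receives:

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a small chain MDP: the agent moves left or
    right, and reaching the last state yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action], 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection: explore with probability eps
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update towards reward plus discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()  # the learned greedy policy moves right in every state
```

Questions such as how safely such an agent explores, and how shaping the reward signal changes what it learns, are exactly the kind of issues this theme studies at a much larger scale.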

Autonomy and Decision Making

Our research in autonomy and decision making aims to design and develop autonomous interacting systems that decide their own actions. AI systems increasingly consist of autonomous software systems that choose their actions in open, dynamic, and uncertain environments based on their internal states and their observations, which may include other autonomous systems. In such environments it is often impossible to anticipate all interactions beforehand. Drawing on research in autonomous agents and multiagent systems, we study software systems that learn and adapt to the behaviour of other software systems. We actively contribute to the following research topics:

  • Sequential and joint decision making
  • Rational decision theory and game theory
  • Bayesian decision theory 
  • Multi-objective optimisation
  • Interaction, negotiation, and communication
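One recurring idea in multi-objective decision making is that, when objectives conflict, there is usually no single best option but a set of trade-offs. The sketch below (a generic illustration with made-up option values, not one of our systems) filters a set of options down to its Pareto front, i.e. the options not dominated in every objective by another option:

```python
def pareto_front(points):
    """Return the points not dominated by any other point,
    assuming every objective is to be maximised."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical options scored on two objectives, e.g. (utility, fairness).
options = [(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]
front = pareto_front(options)  # (2, 1) and (1, 1) are dominated and drop out
```

Choosing among the remaining Pareto-optimal options then requires preferences or negotiation, which connects this topic to the interaction and negotiation work in this theme.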

Agent-based Modelling and Simulation

Our research in agent-based modelling and simulation focuses on the data-driven design, engineering, and execution of large numbers of software agents that together model a complex system. Complex systems are often difficult to understand because many factors influence the behaviour and interaction of the entities in the system. Agent-based simulation is a computational approach that models such complex systems by representing individuals as software agents. This lets us explain behaviour in complex systems as well as predict how small or large changes, e.g. in regulation, affect the overall system. We investigate how large-scale, complex agent-based simulations can be developed. Within this research theme, we actively research the following topics:

  • Agent-based simulation frameworks
  • Large-scale data-driven agent-based simulations
  • Synthetic population 
  • Calibration of agent-based models
  • Simulation of complex systems: markets, diseases, immigration, mobility
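The sketch below shows the basic shape of such a simulation in miniature: a toy SIR-style epidemic in which each individual is an agent with a state, and system-level behaviour emerges from local interactions. The population size, contact pattern, and rates are illustrative assumptions, not calibrated values from our models:

```python
import random

def simulate_sir(n=200, steps=50, contacts=4, p_infect=0.1, p_recover=0.05, seed=1):
    """Minimal agent-based epidemic: each agent is Susceptible, Infected,
    or Recovered; infected agents meet a few random others per step."""
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"  # one initially infected agent
    for _ in range(steps):
        nxt = state[:]
        for i, si in enumerate(state):
            if si == "I":
                for _ in range(contacts):  # random mixing with other agents
                    j = rng.randrange(n)
                    if state[j] == "S" and rng.random() < p_infect:
                        nxt[j] = "I"
                if rng.random() < p_recover:
                    nxt[i] = "R"
        state = nxt
    return {k: state.count(k) for k in "SIR"}

counts = simulate_sir()
```

Replacing the random-mixing assumption with a realistic contact network, and replacing the made-up rates with values calibrated against data, is where the data-driven and calibration topics above come in.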

Social and Cognitive Modelling

Our research in social and cognitive modelling aims at a formal understanding and computational modelling of (human) behaviour. Theories and concepts from the humanities and social sciences seek to explain human behaviour and how it is controlled and coordinated through social and psychological concepts and mechanisms. Inspired by social and cognitive theories, we develop computational mechanisms for modelling, controlling, and coordinating the behaviour of interacting autonomous software systems. Emerging autonomous systems and multiagent systems require such effective and flexible control and coordination mechanisms to guarantee overall system properties. Within this research theme we actively contribute to the following topics:

  • Cognitive agents and BDI systems
  • Behavioural modelling
  • Norms and norm enforcement
  • Responsibility and accountability
  • Emotion modelling and affective computing