European project works towards self-explaining artificial intelligence
Utrecht part of NL4XAI project to start in September
NL4XAI is a new European project focused on Interactive Natural Language Technology for Explainable Artificial Intelligence. The main goal of this four-year initiative is to train eleven creative and innovative early-stage researchers who will face the challenge of designing and implementing a new generation of self-explaining AI systems. At Utrecht University, Prof Kees van Deemter will supervise Alexandra Mayn, who will start her PhD project ‘Explaining Logical Formulas’ on 1 September.
According to EU legislation, humans have a right to explanations of decisions affecting them, even if such decisions are made by artificial intelligence (AI) based systems. However, most AI-based systems learn automatically from data and often lack the transparency required for such explanations. This is why it is important to develop more transparent AI systems.
Explaining Logical Formulas
Alexandra Mayn will be working on a system for explaining logical formulas, supervised by Prof Kees van Deemter. Mathematical logic plays an important role in many areas of study, including artificial intelligence, philosophy and linguistics. Yet, humans sometimes struggle to grasp the exact meaning of logical formulas. This happens for example when formulas are very complex, when they have an unusual structure, or when learners are not yet fully accustomed to the conventions employed by the logic.
Mayn will investigate how Natural Language Generation (NLG) techniques can be employed to automatically and effectively explain logical formulas to non-experts, using texts formulated in ordinary languages such as English, Dutch, or Chinese. To do so, she will investigate computational techniques for simplifying logical formulas and translating them into optimally intelligible natural language text, and empirically evaluate the usefulness of the resulting text for users.
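As a toy illustration of the idea (not the project's actual method), the sketch below first simplifies a propositional formula, here only by removing double negations, and then recursively verbalises it in English. All function names and the formula representation are invented for this example.

```python
# Illustrative sketch only: a minimal formula-to-English translator.
# Formulas are nested tuples, e.g. ("and", "p", ("not", "q")); atoms are strings.

def simplify(formula):
    """Remove double negations: not(not(f)) -> f. A tiny 'simplification' step."""
    if not isinstance(formula, tuple):
        return formula  # atomic proposition
    if formula[0] == "not" and isinstance(formula[1], tuple) and formula[1][0] == "not":
        return simplify(formula[1][1])
    return (formula[0],) + tuple(simplify(arg) for arg in formula[1:])

def to_english(formula):
    """Recursively verbalise a formula as an English sentence fragment."""
    if not isinstance(formula, tuple):
        return formula  # atoms are already phrases, e.g. "it rains"
    op = formula[0]
    if op == "not":
        return f"it is not the case that {to_english(formula[1])}"
    if op == "and":
        return f"{to_english(formula[1])} and {to_english(formula[2])}"
    if op == "or":
        return f"{to_english(formula[1])} or {to_english(formula[2])}"
    if op == "implies":
        return f"if {to_english(formula[1])}, then {to_english(formula[2])}"
    raise ValueError(f"unknown operator: {op}")

formula = ("implies", ("not", ("not", "it rains")), "the street gets wet")
print(to_english(simplify(formula)))
# prints: if it rains, then the street gets wet
```

A real system faces much harder problems than this sketch suggests, such as choosing among many logically equivalent paraphrases and avoiding ambiguity in the generated text, which is exactly what makes the PhD project a research challenge.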
Besides Utrecht University and TU Delft, the NL4XAI project includes universities and research institutes from the UK, Malta, France, Spain, and Poland. The early-stage researchers will follow a broad educational programme, training in natural language generation and processing, argumentation technology and interactive interfaces for explainable AI. Additionally, they will receive training in ethical and legal questions of the field and in transdisciplinary skills.
Open source framework
As a result, they will be well-prepared to design and build explainable AI models that generate interactive explanations based on natural language and visual tools that non-expert users can understand intuitively. The results will be validated by humans in specific use cases and will be accessible to all European citizens. The main outcomes are to be publicly reported and integrated into a common open source software framework for explainable AI. In addition, results intended for commercial exploitation will be protected through licenses or patents.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 860621.