PhD defence: Interpretable predictions with Convolutional Neural Networks for complex data


Deep Learning (DL) and Artificial Intelligence (AI) are widely used tools for analyzing massive and complex data sets. Despite being very flexible and powerful, Artificial Neural Networks (ANN) are often described as "black-box" methods: the causal association between predictions and data is neither straightforward nor easy to explain. This thesis focuses on three applications of a specific type of ANN, the one-dimensional Convolutional Neural Network (1-D CNN), to complex data. Through Explainable Artificial Intelligence (XAI) algorithms, 1-D CNN-based predictions can be made interpretable.

First, we explored the possibility of improving the diagnosis of malignant tumors with 1-D CNNs, by classifying Raman spectra of genomic DNA. Particular attention is devoted to discerning different sub-cell lines of the same tumor.

Next, a 1-D CNN was implemented to predict the El Niño–Southern Oscillation (ENSO) from data simulated with the Zebiak-Cane (ZC) model. We investigated what the 1-D CNN can learn about the physical dynamics of these events by perturbing the parameters that govern the ocean-atmosphere coupling.

Last, joint work with the ICU Department at UMCU on improving the treatment of ICU patients is presented: a 1-D CNN was used to dynamically predict nosocomial ICU-Acquired Infections (ICU-AI). Specifically, the 1-D CNN was trained to score the risk of ICU-AI onset solely by analyzing the massive amount of information available from the ICU monitors. The final ICU-AI prediction was obtained through survival analysis techniques, after embedding the 1-D CNN output into a wider set of traditional, explainable variables.


Academiegebouw, Domplein 29
PhD candidate
G. Lancia
Interpretable predictions with Convolutional Neural Networks for complex data
PhD supervisor(s)
prof. dr. ir. J.E. Frank
dr. C. Spitoni
More information
Full text via Utrecht University Repository