Emily Sullivan

Associate Professor
Theoretical Philosophy
Projects
Project
Machine Learning in Science and Society: A dangerous toy? 01.01.2025 to 31.12.2030
General project description

Deep learning (DL) models are encroaching on nearly all of our knowledge institutions. Ever more scientific fields—from medical science to fundamental physics—are turning to DL to solve long-standing problems or make new discoveries. At the same time, DL is used across society to inform and provide knowledge. We urgently need to evaluate the potential and dangers of adopting DL for epistemic purposes, across science and society. This project uncovers the epistemic strengths and limits of DL models, which are becoming the single most important way we structure our knowledge, and it does so by starting with an innovative hypothesis: that DL models are toy models.

A toy model is a type of highly idealized model that greatly distorts the gritty details of the real world. Every scientific domain has its own toy models that are used to "play around" with different features, gaining insight into complex phenomena. Conceptualizing DL models as toy models exposes the epistemic benefits of DL, but also the enormous risk of overreliance. While explanations of AI success and failure have splintered in different directions, TOY provides a common cause for the surprising success and widespread failures of deep learning models across science (and society).

Treating DL models as toy models is the kind of transformative idea that can solve a number of existing problems, answer open questions, and identify new challenges: in philosophy of science, on the nature and epistemic value of toy models and idealization; in philosophy of ML, by shifting the debate away from issues of DL opacity toward more fundamental questions about how DL models structure knowledge; and in ethics of AI, by bringing siloed debates together with philosophy of science, providing much-needed guidance on the appropriate use and trustworthiness of DL in society.

Role
Project Leader
Funding
ERC Starting Grant (EU)
Completed Projects
Project
Explain Yourself?! The scope of understanding and explanation from machine learning models 01.02.2020 to 30.06.2024
General project description

Machine learning models influence impactful decisions. However, ML models are increasingly complex and opaque, challenging current philosophical theories of explanation and understanding. This VENI project develops a new framework for identifying whether understanding from machine learning models is possible, and for assessing the impact these models have on theories of explanation.

Role
Project Leader
Funding
NWO grant