My main research area is Natural Language Processing (NLP). My current research interests are in language change and information-theoretic approaches to linguistics. On the language change front, I am studying the evolution of synonyms across time in multiple languages. At the same time, I am using methods from information theory to study morphology and syntax and how they trade off across many languages from all continents. In the past, I have worked extensively on clinical applications of NLP, mostly in psychiatry, and on fairness and explainability in NLP and multimodal systems. I am also interested in experimental reproducibility and data quality.
The concept of the word is indispensable in the study of language, yet its theoretical status and even its objective reality are contested. This study aims to explore the concept of the word as a fundamental unit through the statistical trade-off between morphology and syntax. Building on existing methodologies [1], we will investigate this trade-off across different stages of a language's evolution to understand the informational optimality of words. We will replicate and extend a previous study, combining it with an approach developed by one of the applicants that explores how word-boundary manipulations affect the trade-off between word order and word structure. Finally, we will evaluate diachronic case studies. Our starting point is the Parallel Bible Corpus, but we will also explore other corpora that provide richer diachronic information. This work will teach us about the informational optimality of words and will also give us insights into historical language change, shedding light on wordhood from a quantitative perspective.
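As a rough illustration of how such a trade-off can be quantified, the sketch below estimates the information carried by word order and by word-internal structure using compression-based entropy proxies. This is only a minimal sketch under assumed choices: the specific perturbations (shuffling word order, scrambling characters within words), the use of bz2 as an entropy estimator, and the corpus path are illustrative assumptions, not necessarily the manipulations used in the study we plan to replicate.

```python
import bz2
import random


def compressed_bits(text: str) -> int:
    """Approximate the information content of a text by its bz2-compressed size in bits."""
    return 8 * len(bz2.compress(text.encode("utf-8")))


def word_order_information(text: str, seed: int = 0) -> int:
    """Estimate word-order information as the increase in compressed size
    when the words of the text are randomly shuffled."""
    words = text.split()
    shuffled = words[:]
    random.Random(seed).shuffle(shuffled)
    return compressed_bits(" ".join(shuffled)) - compressed_bits(" ".join(words))


def word_structure_information(text: str, seed: int = 0) -> int:
    """Estimate word-structure information as the increase in compressed size
    when the characters inside each word are randomly scrambled."""
    rng = random.Random(seed)

    def scramble(word: str) -> str:
        chars = list(word)
        rng.shuffle(chars)
        return "".join(chars)

    words = text.split()
    scrambled = [scramble(w) for w in words]
    return compressed_bits(" ".join(scrambled)) - compressed_bits(" ".join(words))


if __name__ == "__main__":
    # Illustrative path; in practice this would be a verse-aligned text,
    # e.g. one language version from the Parallel Bible Corpus.
    with open("corpus.txt", encoding="utf-8") as f:
        text = f.read()
    print("word-order information (bits):", word_order_information(text))
    print("word-structure information (bits):", word_structure_information(text))
```

Comparing these two quantities across languages, or across historical stages of one language, is one simple way to operationalise the morphology-syntax trade-off described above.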
With the rise of machine learning models in sensitive areas, such as sexism detection on social media platforms, the accuracy of these models is of paramount importance. There are many ongoing research and evaluation campaigns in this field, such as EXIST and EDOS. For this task, it is important not only that the model makes accurate predictions but also that it generates explanations for those predictions. Because most datasets used in these studies are annotated by humans, it is important to understand the factors that can influence the annotators. Assessing the reliability of human annotations therefore becomes crucial to ensuring the quality of the validation process. In this project, we aim to measure the influence of explanations generated by prediction systems on annotators' agreement and to compare the resulting annotations with the model's predictions. Our innovation lies in using explanation techniques to better understand both model and human reliability.
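As an illustration of the kind of measurement involved, the sketch below compares inter-annotator agreement with and without model explanations, and the agreement between annotators and the model, using pairwise Cohen's kappa. This is a minimal sketch under assumptions: the binary labels are invented for illustration, and the project may well use a different agreement coefficient (e.g., Krippendorff's alpha) and real EXIST/EDOS-style annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative binary labels (1 = sexist, 0 = not sexist); all values below are made up.
annotator_a_plain = [1, 0, 1, 1, 0, 0, 1, 0]  # annotator A, no explanation shown
annotator_b_plain = [1, 0, 0, 1, 0, 1, 1, 0]  # annotator B, no explanation shown
annotator_a_expl  = [1, 0, 1, 1, 0, 0, 1, 0]  # annotator A, with model explanation shown
annotator_b_expl  = [1, 0, 1, 1, 0, 0, 1, 0]  # annotator B, with model explanation shown
model_predictions = [1, 0, 1, 1, 0, 0, 1, 1]  # labels predicted by the detection model

# Inter-annotator agreement before and after showing explanations.
print("kappa without explanations:", cohen_kappa_score(annotator_a_plain, annotator_b_plain))
print("kappa with explanations:   ", cohen_kappa_score(annotator_a_expl, annotator_b_expl))

# Agreement between the explanation-conditioned annotations and the model's predictions,
# to check whether explanations pull annotators towards the model.
print("A (with expl.) vs model:", cohen_kappa_score(annotator_a_expl, model_predictions))
print("B (with expl.) vs model:", cohen_kappa_score(annotator_b_expl, model_predictions))
```

A rise in inter-annotator kappa together with a rise in annotator-model kappa would suggest that explanations increase agreement partly by aligning annotators with the model, which is exactly the effect this project sets out to measure.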