Lecture by visiting researcher Maria Becker: Exploring (automatic) analysis of moralizations in argumentative contexts

In this hybrid lecture, Maria Becker, a visiting researcher at the Centre for Digital Humanities, will discuss her research, in which she combines qualitative and quantitative methods to detect and analyze moralization practices in texts. Everyone is welcome to join her lecture either in person or online.

In short

Moral values have a strong effect when used in argumentative contexts, as they typically embody a consistent societal understanding of what is deemed right/good and wrong/bad. Take, for instance, terms such as ‘freedom’ and ‘inequality’. Consequently, moralizing practices emerge as powerful discourse tools and are widely used by speakers and writers in political speeches, online discussions and newspaper commentaries.

Becker's methods include the creation of a multilingual dictionary of words that hint at moralizing practices, the automated retrieval of moralizations from texts, and the annotation and analysis of (linguistic) features of moralizing practices.

Abstract

In recent years, researchers from different fields (such as social and political science, psychology, or computational social science) have investigated the moral attitudes of individuals and societies as well as the morality of AI systems such as ChatGPT. However, so far there have been only a few studies on the phenomenon of utilizing moral values as a discursive strategy, which we refer to as moralizations or moralizing practices. When people moralize, they refer to moral values in order to support their arguments on controversial topics and to underline their claims.

Since there are usually very consistent views within a society about what is right/good and what is wrong/bad, invoking moral values such as “freedom” and “credibility” (good) or “cheating” and “inequality” (bad) has a strong effect in argumentative contexts, as in the following example (taken from a speech in the German parliament): “We should introduce an upper limit for refugees to ensure the security of German citizens.” Here, the word “security” is used to support the demand for a cap on refugees, which thereby appears justified, since everyone agrees that security is something desirable.

Moralizing practices are widely used by many speakers and writers (and not only in populist contexts such as the example above), e.g. in political speeches, online discussions or newspaper commentaries, and are an important discourse practice. We propose an approach for detecting and analyzing moralization practices that is applicable to texts from different genres and domains as well as from different languages.

Our approach combines qualitative and quantitative methods such as the creation of a multi-lingual dictionary with words that hint at moralizing practices, the automated retrieval of moralizations from texts, and the annotation and analysis of (linguistic) features of moralizing practices. We show that moralizations are characterized by specific linguistic and pragmatic features, which in turn can be used for building computational models that automatically detect and analyze moralizations in texts.
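To illustrate the retrieval step described above, the sketch below shows how a multilingual dictionary of moralization cues might be used to pull candidate sentences from a text. The lexicon entries and function name are hypothetical placeholders, not Becker's actual dictionary; a real pipeline would add the annotation and feature-analysis steps on top of this first filter.

```python
import re

# Hypothetical seed entries for illustration only;
# the actual multilingual dictionary is not reproduced here.
MORAL_LEXICON = {
    "en": {"freedom", "security", "inequality", "cheating", "credibility"},
    "de": {"freiheit", "sicherheit", "ungleichheit"},
}

def find_moralization_candidates(text, lang="en"):
    """Return (sentence, matched_terms) pairs for sentences that
    contain at least one dictionary term.

    This is only the retrieval stage: real moralization detection
    would additionally check the argumentative context and annotate
    linguistic/pragmatic features of each candidate.
    """
    lexicon = MORAL_LEXICON.get(lang, set())
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = []
    for sent in sentences:
        tokens = {t.lower() for t in re.findall(r"\w+", sent)}
        matched = tokens & lexicon
        if matched:
            hits.append((sent, sorted(matched)))
    return hits
```

Applied to the parliament example from the abstract, the first sentence is flagged via the cue word “security”, while a sentence without any lexicon term is skipped.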

About

Maria Becker is a research associate at the Leibniz Institute for the German Language (IDS) and at Heidelberg University (Germany), where she leads the research group ‘Computational Modeling of Complex Research Topics from the Humanities’. She received her master’s degree in Linguistics, Philosophy and Psychology, and her PhD in Computational Linguistics. Maria is interested in the application of machine learning methods in the digital humanities. Her research interests further include corpus linguistics, discourse analysis, media linguistics, annotation studies, science communication and sociolinguistics.