Unlocking the synergy between MARGOT and ASReview


Systematic reviews play a key role in synthesizing large volumes of scientific literature. However, the manual screening process is time-consuming and prone to inefficiency, especially when abstracts are long, contain a lot of extraneous information, or are inconsistently structured.

MARGOT, an argument mining tool developed at the University of Bologna, automatically detects argumentative components, such as premises and conclusions, within scientific texts. This project explored how integrating MARGOT's argument-mined abstracts (AM abstracts) into ASReview's screening workflow could refine the systematic review process.

Progress

Elisa Ancarani’s MSc thesis focused on analyzing how MARGOT's argument-mined texts affect the dynamics of ASReview's active learning process. Specifically, she investigated whether AM abstracts could effectively replace traditional abstracts across different domains and datasets. Her study compared screening performance using three types of text representations: titles only, titles with traditional abstracts, and titles with AM abstracts.
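The comparison described above can be illustrated with a minimal, self-contained sketch of such a screening simulation. This is not ASReview's actual implementation; it is a toy active learner (TF-IDF features, logistic regression, certainty-based sampling) that could be run once per text representation (titles only, titles plus full abstracts, titles plus AM abstracts) to compare how quickly each surfaces the relevant records. The function names and the seeding strategy are illustrative assumptions.

```python
# Toy screening simulation (a sketch, NOT ASReview's implementation):
# rank the remaining pool by predicted relevance and "screen" the top
# record next, recording the order in which records are found.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def simulate_screening(texts, labels, seed=0):
    """Return the order in which records are screened by a TF-IDF +
    logistic-regression active learner with certainty-based sampling."""
    rng = np.random.default_rng(seed)
    X = TfidfVectorizer().fit_transform(texts)
    y = np.asarray(labels)
    # Prior knowledge: start from one relevant and one irrelevant record.
    screened = [int(rng.choice(np.flatnonzero(y == 1))),
                int(rng.choice(np.flatnonzero(y == 0)))]
    pool = [i for i in range(len(y)) if i not in screened]
    while pool:
        clf = LogisticRegression().fit(X[screened], y[screened])
        probs = clf.predict_proba(X[pool])[:, 1]
        # Screen the record the model considers most likely relevant.
        screened.append(pool.pop(int(np.argmax(probs))))
    return screened


def recall_at(order, labels, n):
    """Fraction of all relevant records found within the first n screened."""
    y = np.asarray(labels)
    return y[order[:n]].sum() / y.sum()
```

Running `simulate_screening` on the same dataset with each text representation and comparing `recall_at` curves mirrors, in miniature, the kind of comparison the thesis carried out at scale.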

The findings demonstrated that AM abstracts can positively influence ASReview's performance. While full abstracts still had a slight overall edge, AM abstracts outperformed them in three of the seven tested datasets. This indicates that argument mining helps distill pertinent information from abstracts, keeping them valuable even though they are roughly half the length of the originals.

An important observation was that the success of AM abstracts is context-dependent. For instance, datasets where abstracts carry crucial information, such as PTSD-related studies, suffered when abstracts were missing, whereas domains like Opioids were less affected and even performed well with titles alone. The project also highlighted how data quality, such as mislabelled records and missing abstracts, strongly influences screening outcomes. Moreover, AM abstracts offered practical advantages in terms of lower memory requirements, allowing larger BERT-based models to be used within resource constraints where full-abstract vectors failed.

Rather than searching for a definitive “best” text representation, the study emphasized the data- and model-dependent nature of systematic review performance. AM abstracts, while not a universal replacement for full abstracts, offer scalable and efficient alternatives that can simplify information management and reduce reliance on full-text sources in certain scenarios.

People involved