‘Only by juggling different perspectives can we offer a powerful response to deepfakes’
International research project SOLARIS comes to an end
The spread of deepfakes is a threat to democracies worldwide, and there is no easy answer to the phenomenon. That is the conclusion of a multi-year European research project led by Utrecht professor Federica Russo. ‘Deepfakes are not purely a technological problem. They are a complex mix of technology, context, legislation and regulations, and human behaviour.’
Deepfakes are images, audio or other material that has been manipulated or entirely generated using AI models. The technology allows anyone to create highly realistic footage that is sometimes almost indistinguishable from the real thing. Deepfakes play an important role in the spread of disinformation, because video footage is seen as powerful evidence. This technology turns that on its head: is what we see really real?
Federica Russo, professor of Philosophy and Ethics of Techno-Science, has spent the past few years studying deepfakes. The European research project SOLARIS focused on how deepfakes influence democratic processes and how we as a society should deal with them.
The project brought together disciplines ranging from computer science and ethics to psychology and law, to examine the subject from different perspectives. ‘This collaboration is crucial,’ says Russo, ‘because deepfakes cannot be viewed in isolation from the broader system in which they operate. Deepfakes are not purely a technological problem; they also involve legislation, regulation and human behaviour.’
Debunked deepfakes
A striking finding of the research is that the impact of deepfakes does not depend solely on their technical quality: even an implausible or debunked deepfake still has influence. Russo: ‘Deepfakes influence public opinion not so much by convincing people of false information as by “tweaking” the debate. Even when a video is clearly fake, the images have already become part of the broader narrative.’
The consequences for democratic processes are significant. During election campaigns, deepfakes can damage candidates or political parties. Conversely, genuinely compromising images can be dismissed as “deepfakes”, the so-called “liar's dividend”. When everything can be manipulated, even real information becomes suspect. That uncertainty erodes trust in the media, for example, and increases polarisation.
The international team of researchers also considered how deepfakes can be tackled. Russo: ‘There is a great temptation to seek the solution in technological innovations, such as detection models. But that results in a kind of arms race between technologies, in which detection models will always be one step behind.’
Deepfakes are “tweaking” the debate. Even when a video is clearly fake, the images have already become part of the broader narrative
Moreover, technological fixes do nothing about the mistrust that the spread of disinformation sows in society. Russo: ‘That is why it is important to impose strict legal and regulatory requirements on news platforms and social media, for example regarding transparency, liability and legal protection. The European AI Act is a good start, but it does not yet sufficiently recognise the risks of generative AI.’
In addition, digital and media literacy are important, says Russo. ‘People need to be aware that what they see may not be real. They need to be critical of the source, recognise the context of a video, and understand how content is distributed on media platforms. But people should also be more careful with their privacy: not everything is suitable for public sharing.’
Russo is keen to emphasise that she does not want to demonise the technology. ‘Deepfakes can also be a powerful tool for good causes. By bringing historical figures to life, for example, and getting people excited about science or climate change, we can convey positive messages to the public.’
No safe space
But ultimately, concerns about abuse prevail. ‘Creating deepfakes has become so easy and accessible that you don’t have to be an expert to manipulate images. Think of X’s AI tool Grok, which could be used to “undress” people in images and was available to everyone. With apps like these, there is no safe space left, and that damages trust between people. We need to have a conversation about whether such a powerful tool should be available for public use.’ The EU has made a good start by recently deciding to ban such apps, but that does not mean the work is done, Russo says.
Russo is therefore pleased that the knowledge and expertise gained in SOLARIS are finding a home in Utrecht: for example, in the Special Interest Group Tackling disinformation and misinformation, led by Maarten Hillebrandt (Utrecht University School of Governance) and Robert Weijers (Psychology), and among colleagues in the Faculty of Science. The Freudenthal Institute, where Russo works, will also continue to focus on digital literacy. ‘This multidisciplinary approach is essential because the subject has so many different aspects. Only by juggling different perspectives can we offer a powerful response to deepfakes.’