Disinformation, whether it’s about climate change, COVID-19 or the war in Ukraine, is plaguing the online spaces we rely on to understand and engage with the world around us. Social media platforms and governments are taking steps to curb the phenomenon, but the supply of deceit shows no signs of abating – if anything, it’s growing more sophisticated by the day. So what are proven, effective tactics to fight disinformation on social media? Researchers at Utrecht University are still grappling with this question, but they warn that many existing measures fail to address the number one motive that makes disinformation thrive in the first place: too often, lies are more profitable than the truth. And that’s a perverse incentive for content producers and influencers who, in the age of social media, can be any of us.
The disinformation business
You’ve probably seen them all: a manipulated video of Joe Biden making transphobic comments, memes spreading false claims that 5G networks cause COVID-19, or images of alleged atrocities during the war in Ukraine, taken out of context or reused from other conflicts. Whatever form disinformation takes, these examples all have one thing in common: they are made with the deliberate intent to deceive you.
In the art of manipulation, these examples are just the tip of the iceberg. Increasingly sophisticated technologies, such as deepfake video and audio, will make it ever harder to distinguish what is real from what is not. Just imagine what will be possible in fully immersive virtual environments a few years from now.
Already today, the rise of false or misleading news poses one of the biggest threats to open, democratic societies, which depend on the free flow of accurate information for citizens to participate in public life. “Disinformation is splitting society into two camps: there’s an ‘us’ who is right versus a ‘them’ who is wrong. That is leading to increased division, polarisation and mistrust”, says Bruce Mutsvairo, a professor in the department of Media and Culture Studies at Utrecht University. “If people can’t agree on even basic facts about vaccines or climate change, we can’t work on solutions to our shared problems. We live in two different realities.”
Producing and spreading disinformation can be done for multiple reasons, says Mutsvairo. “Governments or political candidates are known to peddle it to mobilise support for their cause or discredit the opposition”, he says, pointing to Cambridge Analytica’s infamous, illegitimate use of data from millions of Facebook users to tilt the outcome of the 2016 US presidential election and the Brexit referendum.
Regular citizens, too, are responsible for spreading falsehoods or conspiracy theories (‘9/11 was an inside job’), Mutsvairo stresses. “I investigate how politically motivated groups use it to exacerbate conflict in Mali and Ethiopia. A lot of young people feel they have been rejected by the State. They don’t trust the government or the media. They don’t see a future. It’s easy for these people to fall into disinformation's grip.”
But the most common motive to spread disinformation online is money – even if it serves other (political) interests too, says José van Dijck, Professor of Media and Digital Society at Utrecht University. “If we’re serious about fighting disinformation, we need to understand the perverse financial incentives and the wider infrastructure that enables it”, Van Dijck argues. According to her, the problem is ultimately not so much fake news itself (after all, it has always existed) as today’s information ecosystem and how vulnerable, or amenable, it has become to disinformation.
Social media platforms have a financial incentive to amplify content that is sensationalist, polarising and fake.
Free pass to disinformation
“Since the advent of digital and social media, anyone can post content on the Internet and reach millions within minutes. As a result, there are fewer gatekeepers to filter the veracity of the information that reaches us”, says Van Dijck. “The same tools that are helping spawn pro-democracy movements around the world are also enabling disinformation to spread faster and further than ever before.” In fact, as an MIT study found, false news stories are 70% more likely to be retweeted than true stories.
The virality of false information online doesn’t happen by chance. As Van Dijck explains, “Facebook, Twitter, YouTube and other social media platforms use algorithms to amplify content that is more likely to get your attention. That can be photos of your friends’ birthday parties or a nuanced report about mitigating COVID-19, but too often it means content that is sensationalist, polarising and fake.” The logic, Van Dijck argues, is simple: these are the types of posts and stories that appeal to us emotionally, and thus, keep us engaged.
Engagement, after all, is the platforms’ core business model and money-making metric. “The more time you spend on social media, the more data you’re giving away about yourself, and the easier it is for these platforms to target and sell ads. That’s how platforms make money out of their free services: by selling your attention to advertisers”, says Van Dijck. In 2021 alone, ads earned social media platforms $153 billion, a figure projected to grow to $252 billion by 2026.
Platforms make money by selling your attention to advertisers
“That is a perverse financial incentive to amplify content that is engaging, whether it’s accurate or not. That’s also the reason why recent social media initiatives – flagging misleading information, suspending fake accounts, hiring fact-checkers – cannot fully or permanently tackle the problem”, says Van Dijck, who is sceptical that platforms can ever do enough to curb disinformation while still profiting from it.
Disrupting the economic incentives is surely an effective solution, but not an easy one, Van Dijck says, in part because we’re heavily dependent on these centralised platforms for our global information and communication needs. “A handful of companies with commercial interests now control and procure the information diet of several billion people: the Big Five (Google, Amazon, Apple, Meta, and Microsoft) in the West and another three (Baidu, Alibaba and Tencent) in the East”, says Van Dijck, who analyses the workings of these platforms in the book The Platform Society.
“This monopoly position has given big tech platforms exceptional power to decide what their responsibilities are, or are not, with regard to the content that users post on them. That Elon Musk, for example, can decide whether to allow Donald Trump back onto Twitter, or what should count as hate speech, is a crazy situation: you have the owner of a particular platform deciding the rules. These should be negotiated by social contract, conditioned by legal frameworks.”
Influencers: a new vehicle for disinformation
Regulators attempting to demonetise the spread of disinformation, however, are facing new challenges. “In the past decade, social media platforms have been developing new monetisation strategies that go well beyond selling advertising”, says Catalina Goanta, Associate Professor in Private Law and Technology at Utrecht University, and Principal Investigator of the ERC Starting Grant HUMANads. “Think of live shopping on TikTok, crowdfunding, or subscriptions on platforms such as Patreon, Twitch, and YouTube. Chief amongst these is influencer marketing, which allows Internet users to not only engage with advertising, but also to become advertising.”
Influencer marketing, which Goanta describes as a form of human advertising, is now a booming industry, expected to reach $15 billion by 2022. “At first, influencers earned revenue by promoting goods or services from sponsoring brands, often inconspicuously, to their large follower base. Nowadays, any Internet user can rise to fame and be compensated for sharing multimedia content about virtually any topic, including news, conspiracy theories, or elections”, says Goanta. “That makes influencers a powerful vehicle for disinformation, especially since the relationship with their followers is largely perceived as one based on trust and authenticity.”
Influencers are cashing in by sharing ads masked as facts.
The need to keep a steady audience may encourage influencers to engage in unethical practices, and, says Goanta, examples are mounting: from a coordinated campaign paying TikTok influencers to spread Kremlin propaganda about the war in Ukraine to a smear campaign against the Pfizer vaccine, where some influencers immediately turned to their channels to disclose the attempt to recruit them, while others appeared to take up the offer.
“Influencers are cashing in by sharing ads masked as facts. The industry behind influencer marketing is really opaque, so we only see an influencer holding a product or spreading a political idea; we don’t see the contracting parties”, says Goanta. “And at the moment, we don’t yet have clear legal definitions to differentiate between content and advertising on social media.”
The European Union’s Digital Services Act (DSA), says Goanta, is a groundbreaking piece of legislation that attempts to address some of these online harms. “The DSA addresses the liability of social media platforms for the content and economic transactions hosted on them. It sets higher standards of consumer and citizen protection than platforms themselves are offering, demanding greater accountability for content moderation and greater transparency about the workings of their algorithms and targeted advertising practices”, explains Goanta.
“However, the European regulator has specifically left influencer marketing outside the DSA, and in my opinion, wrongfully so. Influencer marketing is a buzzword, but the underlying phenomenon of native advertising is a long-standing problem in the world of advertising: the cat-and-mouse game of hiding advertising while influencing audiences. We’ve seen this with product placement, with native ads dressed up as news editorials, and now with social media content”, says Goanta. “At Utrecht University’s Governing the social media and data economy group, we want to chart more clearly the harms that emerge in this evolving social media landscape and how they can best be regulated.”
Public alternatives
José van Dijck agrees that the European Union’s Digital Services Act is a step in the right direction to strengthen platform governance. But we also need to invest in alternatives to corporate platforms, she argues. Under the project Public Spaces – Internet for the common good, Van Dijck is cooperating with public organisations in public media, cultural heritage, festivals, museums and education to define what a fully socially responsible online space would look like.
The main goal is “to design a stack of platforms where users are not viewed as exploitable assets or data sources, but as equal partners that share a common public interest. Existing alternative platforms based on public values need to be made accessible and interoperable, while new ones may have to be designed. This is not simply a technical process but also requires serious reflection on the (local) governance and moderation of platforms.”
Education is key
Until online public spaces become a solid alternative, there’s one more solution in our toolbox against disinformation, because not everyone contributing to its spread is motivated by profit. Often, it’s regular citizens like you and me who share false information unknowingly (this is referred to as ‘misinformation’). It’s therefore important that everyone realises their own responsibility, but also their own power to act against attempts to confuse and deceive, says Eugène Loos, Associate Professor at the Utrecht University School of Governance. Loos believes that educating citizens to spot disinformation is key to stopping it from spreading.
“Providing media literacy for both young and older people is a durable solution now and in the future. If you learn how to assess the trustworthiness of a message or the credibility of its source, you will be better equipped to think critically before sharing and amplifying dubious content”, explains Loos, whose research on access to reliable digital information and the role of media literacy programmes suggests it’s possible to start building immunity against disinformation.
Media literacy empowers people to distinguish what is reliable media content and what is not.
“Just as with vaccines, exposing people to commonly used manipulation tactics can help inoculate them against future attempts”, says Loos. And that works better when done pre-emptively, he says, after reviewing dozens of interventions from across European countries. “Exposing students to fake news websites or having them play fake news games could be far more effective than traditional approaches that focus on debunking misconceptions with facts.”
The reason, Loos suspects, lies in the flawed assumption that we can convince people or change their minds simply by giving them the facts. “Years of behavioural research show that it’s also about reaching them on an emotional level”, he says. His own research among Dutch primary school children showed that we’re more likely to fall for fake news when we’re emotionally invested. “That’s also the way out. Educational programmes will need to counter the emotional trigger of those falsehoods.”
More research into what makes media literacy programmes successful is certainly needed – many of the interventions Loos came across were not evidence-based, for example, and they were mainly confined to school settings, excluding older generations as well as the real-life scenarios in which people are targeted with disinformation. Still, Loos is confident about the value of investing in research on this educational approach: “Instead of appointing platforms or governments as the arbiters of truth, media literacy empowers people so that they themselves are able to distinguish what is reliable media content and what is not.”
Disinformation is ubiquitous across digital and social media platforms. Governments, social media platforms and regular citizens all have a role to play in its spread. By shining a light on its business logic, our researchers identify the shortcomings of current policies and legislation that fail to tackle the economic incentive to spread disinformation. And by studying media literacy approaches, they can understand which ones work best to blunt its appeal. In the end, disinformation may never fully disappear, but these joint efforts can already make it harder for its beneficiaries to spread falsehoods online, and provide citizens with the skills and knowledge to access reliable, accurate information. Because that is a fundamental right.