July 29, 2021

Want Less Awful Content? Stop Focusing on Content Moderation.

By Clare Melford and Danny Rogers, Co-founders of GDI

When we started GDI in 2018, one of the great gaps in the increasingly loud discourse about disinformation was what we call the “follow the money” angle. It seems hard to believe now, when policymakers on both sides of the Atlantic are clamouring for companies to “demonetise disinformation,” but back in 2018 the conversation was mired in definitional arguments about what was, or was not, disinformation; GDI was singular in focusing on the financial business model of disinformation.

Three years on, the discourse is again mired in confusing and complicated arguments about content moderation and free speech, algorithmic transparency and accountability. And again we at GDI are urging researchers and policymakers to “follow the money.” Targeting the financial incentives that underpin disinformation is the most effective, easiest and most democratic way of tackling the problem. To explain:

There are many levers tech companies can pull in dealing with disinformation. For a given piece of content or a known disinformation outlet, a platform or ad tech company can, for example:

a) remove the content

b) reduce its visibility in search results or news feeds (also known as downranking)

c) demonetise it, i.e., significantly reduce its capability to be monetised via advertising, e-commerce, direct donation or other channels

Most of the conversation about how to tackle disinformation revolves around option a, content moderation and removal. This quickly descends into one of the following responses:

“It’s a game of whack-a-mole; there is too much; the technology can’t catch it all; and we don’t have enough resources”

or

“But what about free speech?”

These two arguments can keep platforms, policymakers, researchers and academics spinning their wheels for ages. In fact, they have done exactly that: in the years since we started GDI, some version of this argument has been raging across the globe and across social media and tech companies.

Platforms and ad tech companies benefit from this state of affairs. Tying up so much brainpower on this one very imperfect lever inhibits people from focusing on the two levers that both make a real difference AND protect freedom of speech: algorithmic deamplification (downranking) and demonetisation.

Far less research is being done, and far less media coverage given, to these two highly powerful, democracy-protecting levers. This is partly because the plumbing of the internet used to share and monetise content is less easily understood by most people than the content itself. People are attempting to treat the symptoms — polarising, divisive content and the harms it causes — without treating the cause: its promotion by the recommender systems of search and social media to reward engagement and ultimately drive revenue for the platforms. It’s a bit like one of those ransom notes in old movies, where the kidnapper cuts letters out of magazines so that the police won’t recognise their handwriting. In this analogy, the kidnappers would happily have us argue about the magazines’ right to publish the letters rather than focus on who “wrote” the note in the first place.

The first of these two levers, downranking, targets amplification: the process by which content ends up at the top of your search results or news feed. Amplification is the fundamental driver of the harms caused by disinformation. If I posted something deeply offensive that incited hatred of a minority group on my own social feed, and it was then only seen and shared within my immediate network, very few people would see it and little harm would result. It is only when algorithms pick up and automatically amplify content that people engage with that it surfaces in news feeds outside my immediate social graph, and that is where content can generate sufficient momentum to become truly harmful. Without amplification, individual posts are the online equivalent of someone shouting from a soapbox at Speakers’ Corner in Hyde Park, garnering an audience of three people and a dog.
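To make the difference concrete, here is a minimal, purely illustrative sketch in Python. The functions and every number in it are our own assumptions for the sake of argument, not measurements from any platform: they simply show how quickly reach diverges once a recommender system multiplies each hop of sharing instead of letting a post decay within one social graph.

# Toy model of a post's reach with and without algorithmic amplification.
# Every number below is an illustrative assumption, not platform data.

def organic_reach(followers: int, reshare_rate: float, hops: int) -> float:
    """Reach when a post spreads only through direct re-shares."""
    total = 0.0
    audience = float(followers)
    for _ in range(hops):
        total += audience
        audience *= reshare_rate  # each hop, only a fraction re-share
    return total

def amplified_reach(followers: int, reshare_rate: float, hops: int,
                    boost: float) -> float:
    """Reach when a recommender system also pushes the post to
    non-followers in proportion to the engagement it generates."""
    total = 0.0
    audience = float(followers)
    for _ in range(hops):
        total += audience
        audience *= reshare_rate * boost  # algorithmic multiplier per hop
    return total

# A small account: 200 followers, 5% of viewers re-share, six hops.
print(round(organic_reach(200, 0.05, 6)))              # about 211 people
# Same post, but recommendation multiplies each hop of spread 30-fold.
print(round(amplified_reach(200, 0.05, 6, boost=30)))  # about 4,156 people

The point of this toy model is only that the growth factor per hop, not the original post, determines whether content fizzles out or snowballs; and that factor is set by the recommender system.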

This has led to much talk about algorithmic accountability, transparency, explainable AI, auditable algorithms, opening the black boxes, and so on. The reality is that very few people understand these algorithms, and this makes much of the discourse woolly, unspecific and intangible. A lot of brainpower and attention is again going into something that is hard for outsiders to wrap their heads around, and where the knowledge imbalance is in the technology companies’ favour. Again, this distracts people from actionable solutions that could actually reduce harm.

It does not need to be so. The only purpose of these algorithms is to drive engagement, sell more ads, and make more money. They do this by prioritising the content most likely to keep us on the platform longest, so that we can be shown more advertising. There is no need to audit the details of recommender system algorithms, because we already know what they are designed to do: maximise the chances of selling the largest number of ad spots by putting the most engaging content in front of us at all times. And much research has shown that negative, hate-filled, fear-inducing content is far better at keeping us hooked than straight news or even kitten pics. So as long as the business model of technology companies remains primarily ad-funded, the algorithms they design will “solve for engagement,” and the content of choice will be toxic, divisive disinformation.
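What “solving for engagement” looks like in practice needs no leaked source code. The sketch below is a deliberately simplified, hypothetical ranker in Python: the weights, field names and example posts are invented, but the logic (score predicted attention, then sort by it) is the whole game.

# Hypothetical engagement-optimising ranker; weights and fields are invented.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float   # model's estimated click probability
    predicted_dwell: float    # expected seconds of attention
    predicted_shares: float   # expected re-shares

def engagement_score(post: Post) -> float:
    # Every term rewards keeping the user on the platform;
    # nothing rewards accuracy, context or civility.
    return (5.0 * post.predicted_clicks
            + 0.1 * post.predicted_dwell
            + 3.0 * post.predicted_shares)

def rank_feed(posts: list) -> list:
    """Order a feed purely by predicted engagement."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local council publishes annual budget", 0.02, 12, 0.01),
    Post("THEY are coming for YOUR children", 0.30, 95, 0.40),
    Post("Kitten learns to climb the stairs", 0.15, 30, 0.12),
])
print([p.title for p in feed])
# The outrage post ranks first, because the objective never asks whether it is true.

Nothing in an objective like this distinguishes disinformation from journalism; any correction has to be imposed from outside, which is exactly what downranking and demonetisation do.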

The final lever, monetisation, is a powerful one. Without the reward of advertising or other forms of monetisation as the end result, algorithms may well be trained in ways less damaging to the human brain. Advertisers have a right to choose which content their adverts fund, yet currently, on both the open web and closed social platforms, they have limited control over where their ads end up. Efforts underway across the online advertising system to improve both transparency and choice for the advertiser, while also protecting privacy for the citizen, are welcome, if ponderous.

We at GDI are excited to see that the community is finally transcending the free-speech-versus-content-moderation debate and noticing the harmful effects of recommender systems and the inadequate transparency and control in the advertising system.

We fervently hope this will shift the conversation away from the distracting and ultimately unproductive debate over free speech toward a pointed one over the role that these platforms play in content amplification and the liability they should hold as a result.

