October 9, 2020

How can advertisers disrupt disinformation? Don't fund it.

Combating disinformation can seem too big an issue for advertisers to confront: just an endless game of “whack-a-mole”.

But much of the worst disinformation in the English language appears on the same few sites. Just as an estimated 0.1% of Twitter users were responsible for 80% of the disinformation on that platform during the 2016 election, the Global Disinformation Index (GDI) sees a similar concentration at the site level.

GDI’s technology has identified 519 English-language sites that publish the highest volumes of divisive, polarising content.

Over one-third of those 519 sites carry ads. This group of ad-funded sites - 189 domains in total - peddles a range of overlapping topics:

  • COVID-19: 100 percent of the sites.
  • White supremacy: 67 percent of the sites.
  • Anti-Semitism: 58 percent of the sites.

These sites do not just publish a single story of highly divisive disinformation; they return to the same topics again and again.

Among this group, there are a few sites that stand out for their “narrative density”. These sites carry the highest proportion of disinforming content relative to their total output.

These sites are flagged as repeat offenders for content relating to specific adversarial narrative topics. The GDI uses machine-learning-based topic modeling to identify adversarial narrative content across tens of thousands of English language sites.

These topic models can distinguish between credible news coverage of a topic and highly divisive disinformation on the same topic with over 90% accuracy. We also use our technology to determine whether these websites carry advertising and, given their traffic volumes, what their estimated advertising revenue might be. This gives us an indication of how much revenue the sites peddling high-risk disinformation content are earning.
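
For readers curious what topic-level classification of this kind can look like, here is a minimal sketch using off-the-shelf TF-IDF features and logistic regression from scikit-learn. It is a hypothetical illustration under simplifying assumptions - GDI's actual models, features and training data are proprietary - and the 90%+ accuracy figure above refers to GDI's pipeline, not to this toy example.

```python
# A minimal, hypothetical sketch of classifying articles on one topic as
# credible coverage vs. divisive disinformation. This is NOT GDI's model:
# the features, training data and labels below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples on the same topic (COVID-19):
# 0 = credible news coverage, 1 = adversarial/disinforming framing.
articles = [
    "Health officials report updated COVID-19 case figures for the region.",
    "Researchers publish peer-reviewed findings on vaccine efficacy.",
    "SECRET documents PROVE the pandemic is a hoax staged by global elites.",
    "They are HIDING the truth: the virus was engineered to control you.",
]
labels = [0, 0, 1, 1]

# TF-IDF turns each article into a weighted word/bigram vector;
# logistic regression then learns which terms signal adversarial framing.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(articles, labels)

# Score a new article: predict_proba returns [P(credible), P(disinforming)].
new_article = "Wake up: the case numbers are faked to keep you afraid."
print(model.predict_proba([new_article])[0][1])
```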

On just three topics - COVID-19 conspiracies, white supremacy and anti-Semitism - the GDI estimates that these 189 sites collectively earn nearly US$350,000 each month from ads served on them.
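
As a rough illustration of how a traffic-based estimate of this kind can be assembled, the sketch below multiplies pageviews by assumed ad parameters. Every input - ads per page, fill rate, CPM - is a hypothetical assumption for illustration, not a GDI figure; the only number taken from this post is the aggregate of nearly US$350,000 per month across 189 sites, roughly US$1,850 per site on average.

```python
# A back-of-the-envelope sketch of estimating a site's monthly ad revenue
# from its traffic. All parameter defaults are illustrative assumptions,
# not GDI figures.

def estimate_monthly_ad_revenue(monthly_pageviews: int,
                                ads_per_page: float = 3.0,
                                fill_rate: float = 0.8,
                                cpm_usd: float = 1.50) -> float:
    """Revenue = impressions served x CPM (price per 1,000 impressions)."""
    impressions = monthly_pageviews * ads_per_page * fill_rate
    return impressions / 1000 * cpm_usd

# Example: under these assumed parameters, a site with 500,000 monthly
# pageviews would earn about US$1,800 - close to the ~US$1,850 per-site
# average implied by US$350K spread across 189 sites.
print(f"US${estimate_monthly_ad_revenue(500_000):,.0f}")
```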

Figure 1: Example of anti-Semitic disinformation content

Figure 2: Example of white supremacy disinformation content

Figure 3: Example of COVID-19 disinformation content

These examples of ad-funded content, which incite hatred and make demonstrably false claims, carried ads served by Google, in violation of Google’s own policies:

  • Anti-Semitic content (Figure 1): "incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin."
  • White-supremacy content (Figure 2): "encouraging others to believe that a person or group is inhuman, inferior, or worthy of being hated."
  • COVID-19 conspiracy content (Figure 3): "relates to a current, major health crisis and contradicts authoritative, scientific consensus."

Google, as the market leader, is the ad exchange most commonly found on these worst-offender sites, providing ad services to almost 60 percent of the worst offenders across these three topics.

The fact that we can identify some of the top ad-funded sites pushing disinformation topics is good news.

It means that advertisers have access to the information they need to choose whether to stop advertising on them.

The decision to pull ads from a site is not an issue of free speech. While the right to say what we want is protected in many countries, often in law, there is no corresponding right to get paid for what you say. The right to free speech is protected; the right to free reach, and to profit from that speech, is not.

The GDI aims to support brands, advertisers and ad tech companies with the information they need to make such decisions and align them with their corporate responsibility agendas. The Stop Hate For Profit boycott of Facebook this past summer was evidence of growing advertiser activism on this front.

GDI proposes the following policy responses by brands and advertisers:

  • recognise their role and power to defund disinformation and stop offline harms.
  • use impartial and trusted disinformation risk ratings for news sites as part of brand suitability decisions.
  • align their corporate responsibility agendas with what content they indirectly fund via marketing activities.
  • demand that ad tech partners adopt state-of-the-art detection of content at high risk of disinforming readers.

The GDI looks forward to working with brands, advertisers and ad tech companies on this journey to ensure ads do not create a financial incentive for disinformation.