Weaponising Biases

People have always sought to disinform others – all the way back to the days of ancient Rome. In the 21st century the key difference is technology, which brings unprecedented speed and scale as well as new tools for creating increasingly convincing false content.

Yet humans are subject to the same biases we’ve always had, leaving us vulnerable to the spread of disinformation online. Savvy disinformation purveyors have learned to exploit these psychological biases through the new media available to them. Here is a sample of these biases:

Bias blind spot

This refers to the common tendency for people to notice all the flaws in an opponent’s argument while failing to recognise any of their own – which explains why nobody thinks they’re biased. When we’re faced with the task of deciding whether a piece of information is true, the bias blind spot kicks in. We may ask “Can I believe this?” when the information is belief-consistent, and “Must I believe this?” when the information challenges our core beliefs.

Third-person effect

In the third-person effect, people believe that mass media messages have a greater effect on others than on themselves. That’s one of the reasons why propaganda is so effective. People think they personally are immune to it while only others are affected. This also goes some way to explaining the mentality driving the belief in conspiracy theories, where people think they’ve been enlightened while everyone else is still deceived by the mainstream media.

Declinism

Declinism is the belief that a society or institution is tending towards decline. In particular, it manifests as a predisposition to view the past favourably and the future negatively. Combined with a strong sense of national pride and exceptionalism, this is why slogans like ‘Make America Great Again’ and ‘Take Back Control’ were such effective messages for the Trump and Brexit campaigns respectively.

Confirmation bias

Common across all forms of social media, this refers to people’s tendency to search for or interpret information in a way that confirms their preexisting views, while ignoring or dismissing information that challenges those views. This is one major reason why people are more likely to click on disinformation headlines that reinforce their views. Social media design leverages this human vulnerability to great effect with algorithms and filter bubbles.

Bandwagon effect 

Also known as the ‘herd mentality,’ the bandwagon effect is the tendency to believe something is true or good simply because many other people believe it to be so. On social media, the bandwagon effect helps disinformation purveyors spread their messages by providing social proof through posts that attract numerous likes or shares.

False consensus effect

People have a tendency to overestimate the extent to which their own values and ideas are ‘normal’, assuming that the majority of others share them. In group settings, such as on social media, the false consensus effect can lead us to believe that our group’s views reflect those of the population as a whole. Social media heightens the false consensus effect because of algorithms that keep serving us content which matches our existing views.

Availability cascades

Availability cascades explain why certain false beliefs come to be accepted as fact in the minds of many. An availability cascade is a self-reinforcing process in which a collective belief gains ever more plausibility through constant repetition. Beliefs that seem to explain a complex social or political topic in a simple way are particularly prone to becoming part of availability cascades.

Hostile media effect

This refers to the tendency for an individual to perceive news coverage as biased against their personal position on a certain issue. It helps to explain why conspiracy theories tend to thrive and why ‘alternative’ media sources like InfoWars can gain such a large following.

Backfire effect 

Here, attempts to correct someone’s misperceptions (e.g. in response to disinformation) can instead end up strengthening their views. When confronted with attitude-inconsistent information, our instinct is to defend our deeply held beliefs, causing us to cling to them more than ever – the backfire effect. This is one reason why attempting to ‘debunk’ people’s incorrect beliefs using fact-checked sources may not always work as well as expected.

Social identity theory (‘Us vs. them’)

People boost their own self-esteem by identifying as members of an ingroup, then reinforce that self-esteem by favouring their ingroup while acting negatively towards a perceived outgroup. This behaviour encourages tribalism and can deepen divisions between groups. Football teams and political parties are common examples, but people also form such groups on social media.

In a world that revolves around digital technology, it’s easy to forget that all humans are subject to similar behavioural quirks and biases. Having a working knowledge of these biases is important for gaining a more nuanced understanding of the disinformation problem – one that goes beyond oversimplified explanations. It lets us see what lies beneath the surface of sophisticated disinformation campaigns, many of which tap into basic human psychology.


Follow the Money – How Disinformation Has Become a Big Business

Much has been written about disinformation for political ends, whether by foreign states interfering in elections or domestic actors trying to sway social or political discourse in their own country. Far less has been written about disinformation created purely to make money – financially motivated disinformation. At the Global Disinformation Index we believe that focusing on the financial incentives can significantly disrupt the creation and distribution of all types of disinformation. This blog post lays out why we have chosen to “follow the money” that is funding disinformation.

The financial incentive for disinformation is most often (but by no means only) online advertising. The automated placement of adverts on websites means many brands simply don’t know that they are buying ad space on domains that disinform. A 2017 study found adverts for over 600 major brands on questionable domains linked to disinformation. Another study found that major ad or content recommendation networks such as Revcontent, Google Display Network and Content.ad had placed ad content – including from brands such as The Gap – on numerous websites peddling disinformation.

To substantially reduce disinformation, we have to reduce the funding stream that the advertising industry has inadvertently provided to domains that disinform. And to do that we have to understand the ad tech ecosystem that allows it to thrive.

A series of shifts has created an ad tech ecosystem that malicious actors can abuse. GDI sees four major trends over the last few years that have fed the problem:

Trend 1: Online media sources explode.

Media content creation is no longer centralised. Hundreds of hours of new content are uploaded to YouTube every minute, and an average of 6,000 tweets go out every second. With all this content, many more people now turn to platforms, rather than news publishers, to get their news. Yet more content does not mean better content or better-informed readers. Fiction is drowning out facts online: one study of 126,000 rumours spread on Twitter between 2006 and 2017 found that false stories reached more people than the related true stories.

Trend 2: Advertising money floods the online space.

The graph below (Figure 1) shows a huge increase in online advertising since the early 2000s. Projected figures show that two-thirds of the expected growth in global advertising between now and 2020 will come from online spend (paid search and social media ads).

Figure 1: Evolution of Ad Spend, 2000-2020

Source: https://www.recode.net/2018/3/26/17163852/online-internet-advertisers-outspend-tv-ads-advertisers-social-video-mobile-40-billion-2018

Trend 3: An opaque ad tech ecosystem capitalises on the gold rush.

A whole universe of ad tech companies has arisen to service this demand from advertisers: from buying and selling banner space on pages, to running the exchanges that connect these buyers and sellers – while collecting, aggregating, packaging and selling all of the relevant data in between. In the last seven years, the number of ad tech companies has grown five-fold. The graphic below (Figure 2), known in the industry as the LumaScape, shows the ecosystem for display advertising.

Figure 2: The LumaScape of Ad Tech Players

Source: https://lumapartners.com/content/lumascapes/display-ad-tech-lumascape/

As with all gold rushes, fraud has run rampant in the ad tech ecosystem. Fraud can take the form of ads served on websites visited only by bots, ads that are never displayed at all, or ads shown only below the “fold” of the page. By 2025, the World Federation of Advertisers expects ad fraud to cause US$50 billion in losses annually.
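As one crude illustration of that last form, a verification script on the page can at least check whether an ad slot sits within the user’s initial viewport. The browser-side sketch below is minimal and the slot id is invented for illustration; real viewability measurement (the commonly cited industry standard counts a display ad as viewable only when at least half its pixels are in view for a second) is far more involved.

```typescript
// Minimal sketch of a "below the fold" check: is the ad slot inside the
// user's initial viewport at all? The slot id is invented for illustration.

function isAboveTheFold(el: HTMLElement): boolean {
  const rect = el.getBoundingClientRect();
  // Visible if any part of the element overlaps the current viewport.
  return rect.top < window.innerHeight && rect.bottom > 0;
}

const slot = document.getElementById("ad-slot-1");
if (slot && !isAboveTheFold(slot)) {
  // In a real verification pipeline this impression would be flagged
  // rather than billed as a fully viewable ad.
  console.log("ad rendered below the fold");
}
```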

Trend 4: Microtargeting and programmatic ads have become more precise and scalable while taking humans out of the decision process.

With online ad spend booming, there has been a rise in the automated placement of ads through “programmatic” ad networks, where ads are placed in real time based on an auction. One such technique, “header bidding”, runs an auction every time you load a website, involving numerous exchanges and hundreds of “demand-side” platforms.
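To make the mechanics concrete, here is a toy model of the auction logic at the heart of programmatic placement. The bidder names and prices are invented, and real header-bidding wrappers such as Prebid.js add timeouts, ad server integration and many more moving parts; this shows only the core idea of competing bids resolved in milliseconds on every page load.

```typescript
// Toy model of the real-time auction behind programmatic ad placement.
// Bidder names and CPMs are invented for illustration.

interface Bid {
  bidder: string; // demand partner submitting the bid
  cpmUsd: number; // offered price per thousand impressions
}

// The page fires simultaneous bid requests to many demand partners,
// then resolves the responses: the highest bid wins the impression.
function runAuction(bids: Bid[]): Bid | undefined {
  return bids.length > 0
    ? bids.reduce((best, b) => (b.cpmUsd > best.cpmUsd ? b : best))
    : undefined;
}

// Hypothetical responses collected while one page loads:
const responses: Bid[] = [
  { bidder: "dsp-alpha", cpmUsd: 2.4 },
  { bidder: "exchange-beta", cpmUsd: 3.1 },
  { bidder: "dsp-gamma", cpmUsd: 1.9 },
];

console.log(runAuction(responses)); // { bidder: "exchange-beta", cpmUsd: 3.1 }
```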

More content but much less money for the good stuff

All of this has meant that, over the last few years, the proportion of ad spend reaching the end publisher has shrunk as a growing number of intermediaries in the ad tech system have taken a cut: Demand-Side Platforms (DSPs), Data Management Platforms (DMPs), Supply-Side Platforms (SSPs), exchanges and others. And with the advent of disinformation, this small trickle at the end of the pipe is now split further between quality and junk news sites – meaning even more challenging times for news outlets (see Figure 3, and the sketch that follows it).

Figure 3
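To see how those intermediary cuts compound into a trickle, consider a toy calculation. The fee percentages below are assumptions chosen purely for illustration, not measured industry figures.

```typescript
// Toy calculation of how intermediary fees compound. The percentages are
// assumptions chosen for illustration, not measured industry figures.

const intermediaries = [
  { name: "agency", fee: 0.10 },
  { name: "DSP", fee: 0.15 },
  { name: "data/verification", fee: 0.10 },
  { name: "exchange", fee: 0.10 },
  { name: "SSP", fee: 0.15 },
];

let remaining = 1.0; // one advertiser dollar entering the pipe
for (const { name, fee } of intermediaries) {
  remaining *= 1 - fee; // each hop keeps its percentage cut
  console.log(`after ${name}: $${remaining.toFixed(3)}`);
}
// Under these assumed fees, roughly $0.53 of each dollar reaches the
// publisher side, before being split across quality and junk sites alike.
```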

This crowded and opaque ad tech ecosystem has created opportunities for fraud and provided funding to disinformation merchants. The result has been – and will continue to be – a sharp drop in revenue for quality news publishers. So not only has the rise in disinformation overwhelmed the spread of high-quality information, it has also made quality journalism less financially viable.

Turning off the money tap to disinformation is critical not only to reduce the volume of disinformation but also to redirect money back to higher-quality news domains. It will further help the ad tech ecosystem protect itself against the sort of abuses that have allowed disinformation to thrive.

Towards a solution – The Global Disinformation Index

GDI wants to restore trust in what we read online by driving out abuses of the ad tech industry. Our index will do this by assessing and labelling domains on their risk of disinforming, and by providing a real-time feed of this data to the ad exchanges, the places where ad placement decisions are made. This will allow exchanges to divert advertising money away from disinforming domains and back to quality ones, and it will give advertisers the option to select low-risk domains when they set their ad spend criteria. This direct intervention in the ad tech ecosystem should reduce the financial incentive to create disinformation in the first place and redirect ad dollars back to high-quality domains.
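As a thought experiment, the sketch below shows how an exchange might consume such a risk feed when deciding whether a domain is eligible for a bid. Every detail here (the feed shape, field names, risk labels and threshold logic) is a hypothetical illustration, not a description of GDI’s actual data product.

```typescript
// Hypothetical sketch of an exchange consuming a domain-risk feed. The
// feed shape, field names, labels and threshold logic are all invented
// for illustration and do not describe GDI's actual data product.

type RiskLabel = "low" | "medium" | "high";

// An in-memory lookup kept current from a real-time risk feed.
const riskIndex = new Map<string, RiskLabel>([
  ["quality-news.example", "low"],
  ["junk-news.example", "high"],
]);

const order: RiskLabel[] = ["low", "medium", "high"];

// Advertiser-configured policy: only bid on domains at or below maxRisk.
function eligibleForBid(domain: string, maxRisk: RiskLabel): boolean {
  const risk = riskIndex.get(domain) ?? "high"; // unknown domains treated as high risk
  return order.indexOf(risk) <= order.indexOf(maxRisk);
}

console.log(eligibleForBid("quality-news.example", "low")); // true
console.log(eligibleForBid("junk-news.example", "low"));    // false
```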

Look for our forthcoming paper, where we will lay out our approach for achieving this change.
