August 19, 2020

Why Did the ANTIFA Disinformation Narrative Succeed in Going Viral?

As GDI and others have argued, content that evokes a strong emotional response gets more attention, clicks, and online ad revenue. That formula continues to fuel content that taps into people's fears and biases. People are worried about COVID-19, about increased financial, political and social uncertainty, and about the resulting protests across the United States. Against this backdrop, disinformation actors have exploited these fears. Take the success of recent ANTIFA disinformation narratives that have gone viral, many of which are making money from display ads. False rumors that ANTIFA is planning rioting and violence have exploded across the internet, despite the fact that ANTIFA has no leader, no membership rolls, and no centralised or defined structure.

Unconstrained by reality, disinformation sites are being rewarded for producing sensationalist clickbait and are siphoning money away from quality journalistic sources.

To combat online disinformation, this revenue stream must be disrupted. Doing so removes the incentive for financially motivated actors and impedes the ability of politically motivated actors to amplify their message within the disinformation ecosystem. The idea that a centralised ANTIFA organisation exists has been created by stoking online fear, and monetised by disinformation sites through online ads.

Example 1: Official Paris Tourism ad delivered by Google

From California to South Dakota, false rumors concerning ANTIFA have caused anxiety, unease, and even armed response. The alleged events have ranged from the absurd—“Children ANTIFA face-painting” as part of a purported flag burning at Gettysburg on the Fourth of July—to more serious threats of violence and looting. Local authorities and organisations have worked hard to stop this disinformation. Some social media platforms have also shut down the groups and networks where it originated.

Yet disinformation about a violent ANTIFA organisation allegedly taking over America's cities and politics has persisted, remained influential and continued to spread.

For example, a false rumor in Yucaipa, California started with a viral YouTube video featuring masked, armed men preparing for alleged ANTIFA looting. The local police lieutenant stated that the ANTIFA rumors were false and that the department would support peaceful protests. The ANTIFA disinformation video amassed 23,000 views, while the police department's video was viewed only around 200 times.

In Gettysburg, Pennsylvania, a post claiming that an ANTIFA flag-burning protest would take place over the Fourth of July weekend was shared more than 3,000 times on Facebook. Despite being denounced as disinformation, hundreds of people still showed up ready to stop the event. Many were armed, some with assault rifles and others with baseball bats, yet no ANTIFA protestors gathered in Gettysburg on the day of the event.

Examples 2 and 3: LendingTree and Subaru ads delivered by Google

These examples share the same pattern: each was intended to spark fear (i.e. that ANTIFA is coming to your town), social media served as the vehicle for dissemination, and websites made ad money from the clicks generated by publishing the disinformation.

Research has found that people generally share content when they are afraid in order to alert others to a threat. This makes fear a powerful tool for actors intent on creating a viral message: they can enlist the already anxious masses to spread it.

Financially motivated actors further spread ANTIFA disinformation by peddling inflammatory content to drive clicks and cash in on fear. Advertisements from well-known companies such as Subaru and LendingTree have appeared beside ANTIFA disinformation (see Examples 2 and 3). Since the pandemic began, GDI has been documenting these ads and the ad networks that serve them: Amazon, Google, OpenX, Revcontent and Taboola, among many others.

Until political leaders and platforms act, we can expect to see largely the same cycle of disinformation: content that makes money by spreading fear and political divisiveness.

The solution is to reform the ad tech ecosystem, raise awareness of the threat that disinformation poses, and redirect ad funding to independent and trusted journalism.

The GDI is trying to be part of this solution by rating news domains for their disinformation risks—both in real time and through more in-depth reviews.

All actors along the ad tech value chain, from brands to supply-side service providers, can use these ratings to determine their own risk thresholds.
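
To make the idea concrete, here is a minimal sketch of how a buyer or supply-side platform could apply such ratings as a pre-bid filter. Everything in it (the DOMAIN_RISK table, the 0-100 risk scale and the accept_bid_request function) is a hypothetical illustration, not GDI's actual ratings, data or API.

```python
# Hypothetical sketch: screening ad bid requests against domain risk ratings.
# The ratings table, the 0-100 risk scale and the function below are
# illustrative assumptions, not GDI's actual data or API.

# Example risk ratings for publisher domains (higher = riskier).
DOMAIN_RISK = {
    "trusted-news.example": 12,
    "clickbait-rumors.example": 87,
}

DEFAULT_RISK = 50  # Assumed score for domains that have not yet been rated.


def accept_bid_request(domain: str, risk_threshold: int) -> bool:
    """Return True if the buyer's risk threshold permits bidding on this domain."""
    return DOMAIN_RISK.get(domain, DEFAULT_RISK) <= risk_threshold


# A cautious brand might refuse anything above a low threshold...
print(accept_bid_request("clickbait-rumors.example", risk_threshold=30))  # False
# ...while still buying inventory on low-risk domains.
print(accept_bid_request("trusted-news.example", risk_threshold=30))  # True
```

The point of the design is that the rating informs the decision while the risk tolerance stays with each actor: a brand, agency or exchange would plug in its own threshold rather than rely on a single global blocklist.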

Monetising content is not the same as free speech. The right to say what you want does not include the right to make money from it.

Through such changes, we can begin to create an online world where disinformation narratives are disrupted and defunded before they have the chance to go viral.