November 1, 2019

The New Trends of Influence Operations

Over 70 countries have now been the target of some form of a disinformation campaign in the last year, according to new research from the Oxford Internet Institute.

At the GDI, we see that these disinformation operations - unlike older passive forms of media propaganda - draw on social media users’ active participation (such as likes, comments and shares) to gain traction. These new campaigns are also pushing adversarial narratives. The actors behind them - a disparate group - still share common characteristics and tactics (see figure below) aimed at creating conflict, as the GDI has argued before.

In this blog post, we present four disinformation campaigns from around the world that represent this new model and its specific trends of:

  • Cross-platform activities: State-sponsored disinformation operatives now leverage multiple platforms to maximise the reach of their messages. They respond rapidly to shifts in current events to maintain desired state narratives. Instagram is especially popular for this purpose.
  • Co-opting legitimate influencers: Operatives connect with legitimate influencers in order to place disinformation into the mainstream.
  • Increased secrecy: Narratives are often seeded quietly across different networks, and operatives are taking new measures to obscure campaign origins, sometimes even at the expense of reach.
  • Emerging players: Heavyweight state actors such as China are expanding their focus from domestic to global audiences. Tactics and levels of sophistication vary; there’s no ‘one size fits all’ approach as many new actors are getting in the game.

Case 1: Indonesia and its Twitter propaganda botnet

The upheaval and growing conflict in the Indonesian province of West Papua has spilled over into disinformation campaigns targeting local social media networks and the Western media.

Bellingcat has exposed a network of Twitter bots promoting pro-Indonesian government narratives. The campaign was launched on Twitter and spread via the botnet, but has also leveraged Facebook, YouTube and Instagram via promotional pages. One example that Bellingcat has documented is a fake online media site (with Facebook, Twitter and YouTube profiles) called Papua West:

This combination of multiple platforms gives the messages substantial reach and influence. A comparison of posting times reveals the automated nature of these Twitter accounts: seen side by side, it is obvious that the accounts are tweeting in tandem.

When campaigns such as those in West Papua come from the government itself, social media platforms have the power to take the most effective action and remove the accounts. They have done this before with various influence campaigns, but so far there is no news about action on the West Papua case.

Case 2: Hong Kong's protests and China's response

Hong Kong has been in the news in recent months due to ongoing protests. Protesters started by demonstrating peacefully against Carrie Lam’s proposed bill to allow extradition to mainland China, but violent responses from the Hong Kong authorities exacerbated the situation, pushing the city into further unrest.

Recent research by the Australian Strategic Policy Institute (ASPI) has shown that the Chinese government’s campaign is focused on portraying the protesters as violent. Much of the content is written in Chinese characters, suggesting that the main target audience is overseas Chinese communities, as well as Hong Kongers themselves. Tweeting activity across the network surged in June, as documented by ASPI:

While the Chinese government has been conducting domestic influence campaigns via its 50-Cent Army for a number of years, it has used the current situation in Hong Kong to actively influence global audiences in both English and Chinese.

Case 3: Iran and the ‘Endless Mayfly’

This campaign first came to light shortly after the 2018 murder of The Washington Post journalist Jamal Khashoggi, when a Twitter persona “Mona A Rahman” contacted a US-based terrorism analyst to share an inflammatory article about Israel, supposedly from the Harvard Kennedy School’s Belfer Center. But the website turned out to be fake, impersonating the original on a similar URL (see below from Citizen Lab, which tracked the campaign).

A central tactic of ‘Endless Mayfly’ was publishing adversarial narratives about Saudi Arabia, Israel and the US on news-like domains that impersonated respected media sites, including Bloomberg News and The Atlantic. These narratives were then spread on social media by a number of inauthentic personas, such as “Mona”.

The operation also reached out to legitimate journalists, activists and dissidents in an attempt to engage them with its false narratives. Some notable politicians, such as the French far-right politician Marion Maréchal-Le Pen, even took the bait and amplified Iran’s seeded messaging:

Case 4: US 2020 & IRA CopyPasta

Facebook recently suspended a series of Instagram accounts allegedly linked to Russia that were flagged by researchers at Graphika for trying to seed political and social conflict ahead of the US 2020 elections. According to Facebook, the suspended accounts are part of the same operation, which originated from Russia and has ‘some links’ to the Internet Research Agency (the original St Petersburg troll farm).

During the course of the campaign, the accounts posed as various US political communities, such as black activist groups, Muslims, and gun rights activists. Some of their posts focused directly on the election, praising either Bernie Sanders or Donald Trump, while others attacked key political figures such as Joe Biden, Kamala Harris or Elizabeth Warren (screenshots from Graphika):

Graphika highlights that many of the posts in this new campaign reused original IRA memes, hence the name ‘IRA CopyPasta’. The campaign took great care to hide its origins, prioritising secrecy over audience growth. Although it mainly revolved around Instagram, its operators also leveraged Twitter by posting screenshots of various tweets onto the campaign’s Instagram pages.

These new tactics and campaigns remind us at the GDI that staying ahead of the curve of disinformation is critical. The GDI is working with partners to root out disinformation campaigns before they spread by identifying them early. Please join us in this work.