How a Fake Article about the New Defense Secretary “Defunding the Military” Foreshadows the Disinformation Playbook in the Biden Era

  • March 1, 2021

By Jacob Silver

In recent years, bad actors have conducted countless disinformation campaigns with varying degrees of sophistication. In 2020, for instance, researchers uncovered a multi-year effort to push narratives aligned with Russian security interests using an array of compromised websites and spoofed email addresses, the source of which remains unclear to this day.

But when it comes to disinformation, sophistication and success are not always correlated; bad actors do not need heavy financial backing or technical prowess to warp online discourse. In one recent case, a fabricated article screenshot about the new US Defense Secretary seeking to “disband” the military reached hundreds of thousands of people, providing useful insight into what we can expect the disinformation landscape to look like over the next several years.

Background 

On December 7th, Joe Biden announced he would nominate retired Army General Lloyd Austin to be the country’s Secretary of Defense, making him the first Black nominee for the job. Among the many journalists to cover the anticipated appointment was Joe Gould, a military reporter for Defense News. A January 19th article of his focused on a procedural aspect of Austin’s nomination process and mentioned the historic nature of his appointment.

The story said nothing about any plan from Secretary Austin to “defund” or “disband” the military; there was no such plan, and Austin had mentioned nothing of the sort in any hearing or interview. Gould, as he wrote three days later, was therefore deeply confused to receive dozens of emails asking whether Austin indeed intended to “dismantle” the military, as claimed in an article he had apparently written. He soon discovered that the emails were the result of a disinformation campaign: a screenshot of a fake headline and lede, paired with a real photo of Austin and Gould’s own byline.

This image achieved a high level of reach despite relatively low sophistication; there was no associated URL, misleading video, or image manipulation beyond the addition of text. Below is an examination of why this meme was particularly well-suited to fool people and spread quickly online, and what it can tell us about disinformation in the coming years.

Fake Article Screenshot

Screenshots are a favored technique among bad actors; they can offer a high level of credibility for relatively low effort. They also create barriers to end-user verification, as there is no link to click through, nor text to copy-and-paste into a search bar. Incidentally, these same advantages inhibit researchers’ ability to comprehensively identify instances of the disinformation. Not only is search functionality for image text limited on most platforms, but trolls regularly use screenshots as a way to manipulate or circumvent technological measures for reducing the spread of false and manipulated images.
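The circumvention described above can be illustrated with a minimal sketch. Platforms often match incoming images against a database of known manipulated images using perceptual hashes; the toy average-hash below (a simplified stand-in for production systems — the 8x8 grid, sample values, and text overlay are all illustrative assumptions) shows why overlaying fake headline text on an image shifts its fingerprint away from the original, letting it slip past hash-based matching.

```python
# Toy average-hash: a simplified stand-in for the perceptual
# hashing platforms use to match known manipulated images.
# Input: an 8x8 grid of grayscale brightness values (0-255).

def average_hash(pixels):
    """Return a 64-bit hash: 1 where a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A "known-bad" image fingerprint...
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]

# ...and the same image with a bright text overlay on the top rows,
# as a troll might add a fabricated headline before re-sharing.
overlaid = [row[:] for row in original]
for r in range(2):
    for c in range(8):
        overlaid[r][c] = 255

h_orig = average_hash(original)
h_over = average_hash(overlaid)
print(hamming(h_orig, h_over))  # nonzero: the overlay shifts the hash
```

Real systems use more robust hashes over full-resolution images, but the weakness is the same in kind: an edit large enough to change the fingerprint defeats a lookup against known-bad hashes.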

Screenshots are also a popular way to spread true information online, so the very presence of a screenshot is, for many, no reason to be suspicious of its content. But the ease of image manipulation should cause everyone to question claims in images reflexively — and platforms to more proactively monitor for and strictly enforce rules against manipulated text screenshots.

Snippets of Real Information

In a world where disinformation routinely gets past platforms’ filters and countermeasures, the last line of defense is the end user’s willingness and capacity to verify what they are seeing through trustworthy sources. However, by including pieces of genuine information, the creator(s) of this screenshot ensured that a couple of quick Google searches might not only fail to dispel the notion that the story was true, but might even reinforce it.

Google shows, for instance, that Joe Gould really is a military reporter, and that Lloyd Austin really did vow to “eradicate extremism in the ranks” (via the Washington Post, no less). Further, the screenshot employs a font similar to the one used on Defense News, and the same picture of Austin the publication used for Gould’s genuine article on the 19th.

While these bits of true information do not support the falsehoods in the manipulated screenshot, they may satisfy a casual observer enough to believe its contents. That dozens of people were able to track down Joe Gould’s contact information to ask him whether the article was true shows that many do have the instinct to conduct research, but that such research will not always turn up satisfactory results. This is also why the rapid deployment of fact-checks is a vital tool in the fight against disinformation; research must be rewarded with true and useful information.

Cross-Platform Amplification

One key feature of disinformation is that individual instances may originate on fringe platforms, and then spread to wider audiences via mainstream channels, often through deliberate seeding. In fact, cross-platform spread is one of the best early indicators we have that a bubbling disinformation campaign may soon achieve viral spread.

While we cannot say with absolute certainty where this screenshot originated, the earliest known instance occurred on 4chan, with another post appearing on Facebook not long after. Over the ensuing days, it was shared to several Telegram channels with a combined audience of over 160,000 users. By January 25th, there were at least 129 publicly identifiable shares on Facebook, earning thousands of likes, comments, and additional shares, with untold numbers among private groups and accounts.

In some cases, commenters stepped in to challenge or debunk the posts. However, debate around a fabricated screenshot often still benefits bad actors by fueling divisiveness around the topic. Indeed, the ensuing chaos is usually the ultimate intent of a disinformation campaign, even taking precedence over fostering belief in the original lie.

Platforms have wildly different policies, protocols, and enforcement mechanisms when it comes to disinformation. We cannot control what memes, images, videos, or narratives appear on the internet. But we do often have the ability to catch disinformation campaigns early and do everything in our power to intervene before they go viral. In the case of the “defunding the military” meme, however troubling the spread on Facebook had become by January 24th, collaboration between us and the platforms led to the content’s removal before it could grow any further.

Suspicious Seeding on Facebook 

While disinformation is often spread unwittingly by those who believe it, its reach is typically amplified deliberately through inorganic channels. One Facebook Page participating in the amplification of the screenshot, Republicans Worldwide, was created in July 2017 and has several notably suspicious characteristics and behaviors. The page has eight administrators from six different countries: the United States, Argentina, Germany, Luxembourg, Norway, and Sweden. There is minimal engagement on posts, despite relatively high posting frequency, and the account, classified by Facebook as a “Political Organization,” has fewer than 2,000 followers. The page generally posts low-quality pro-Trump memes, while consistently sharing pro-Trump content from pages linked to Steve Bannon, the NRA, and PJMedia, among others.

On January 10, 2021, the page shared a Jesus vs. Satan meme template that was previously used by Russian state-sponsored troll farms in Facebook advertisements ahead of the 2016 US Presidential election. 

On January 13, 2021, the page posted a meme flyer for a January 17, 2021 armed march on “Capitol Hill and all State Capitol buildings” called “Refuse to Be Silenced.” The group also recently began promoting its alternate page on MeWe, an emerging fringe alternative to Facebook.

This group and others show signs of suspicious behavior in amplifying disinformation; while we do not have access to all of the data necessary to confirm it, we know that inauthentic accounts and pages are one popular way to get a piece of deceptive content to catch on among genuine users. 

Conclusion

Despite an incredibly challenging start to the year, 2021 has kicked off with some encouraging signals on the disinformation front. A number of high-profile accounts known for pushing disinformation have been shuttered. QAnon remains banned on mainstream platforms (albeit through inconsistently enforced policies). Platforms have taken limited but targeted actions against false COVID-19 related content, and legislators have recently started paying more attention to the impact of online disinformation campaigns. But there is more we can all do to limit the power of disinformation in 2021 and beyond:

  • Platforms should monitor screenshots of article text more proactively, and provide researchers with tools to do the same. CrowdTangle’s recent launch of image text querying and identification is a promising start in this area, and others would do well to follow suit.
  • Journalists should treat cross-platform spread as one of the most important features of disinformation, as it not only can help predict what narratives and memes are liable to catch fire, but also which are the likeliest to be deliberately seeded by nefarious actors.
  • Researchers should leverage domain-level intelligence to identify likely points of origin for disinformation that can later spread on social media.

There is no comprehensive way to stop disinformation: as long as there are social media platforms, bad actors will find ways to seed them with false and misleading content. But their doing so is only worthwhile if there is a strong chance it will gain a sizable audience and make a meaningful impact. The better we understand the ever-evolving tactics of these bad actors, the more we can do to prevent that from happening.
