March 11, 2021

Anti-Vaccine Networks Thrive on Instagram Despite Recent Policy Shifts

On February 8th, Facebook made headlines for taking a more aggressive posture against anti-vaccine content across its platforms, including Instagram. Since April of last year, the company has released and iterated on a set of criteria for identifying, flagging, and sometimes removing false content related to the coronavirus. But Facebook has long faced criticism from researchers for a lack of clarity about how it enforces these policies, which remain limited and slow to act relative to the breadth of health disinformation on its platforms, particularly when it comes to vaccines.

Facebook’s recent push comes in apparent recognition of the unprecedented real-world consequences of false public health narratives. One recent poll suggests that nearly half of U.S. adults say they either will not or might not take a COVID vaccine, and studies have shown that online misinformation is a major contributing factor to vaccine hesitancy. Military leaders, citing online rumors, have reported that up to one third of service members in some units have declined the vaccine. Health experts have shown that a slow uptake of the vaccine, fostered by that hesitancy, can result in thousands of additional deaths.

But what exactly constitutes “anti-vaxx” content, as it is popularly known? In its recent announcement, Facebook singled out a number of examples, such as the debunked claim that vaccines cause autism. But many posts defy easy classification, and not all anti-vaccination content takes the form of such straightforward assertions. Using a combination of network mapping and qualitative content analysis, we uncovered an array of content on Instagram that violates Facebook’s stated commitment, raises questions about how far that commitment goes, and speaks more broadly to the evolving nature of online anti-vaccine disinformation.

Why Network Mapping?

We focused on Instagram because it has been a known haven for anti-vaxxers in the past and because tagging behavior on the platform lends itself to network-based analysis. We started with a broad scope, pulling the 10,000 most-engaged Instagram mentions of “vaccine” or “vaccines” in the two-week period following Facebook’s announcement. The vast majority of these posts were completely innocuous or even celebrated vaccine progress.
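The initial pull can be approximated in a few lines of pandas. The sketch below is illustrative only: it assumes the posts were exported from CrowdTangle as a CSV, and the file name and column names are hypothetical placeholders rather than the actual export schema.

```python
import pandas as pd

# Illustrative only: file name and column names ("Description",
# "Total Interactions", "Account") are hypothetical placeholders.
posts = pd.read_csv("instagram_vaccine_mentions.csv")

# Keep posts whose caption mentions "vaccine" or "vaccines".
mentions_vaccine = posts["Description"].str.contains(
    r"\bvaccines?\b", case=False, regex=True, na=False
)
vaccine_posts = posts[mentions_vaccine]

# Rank by engagement and keep the 10,000 most-engaged posts.
top_posts = vaccine_posts.sort_values("Total Interactions", ascending=False).head(10_000)
print(f"Retained {len(top_posts)} posts from {top_posts['Account'].nunique()} accounts")
```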

Normally, to narrow the search, one would filter further using terms more closely associated with anti-vaxx content. But linguistic filtering, while powerful, is limited by one’s foreknowledge of the content one is looking for. Anti-vaccine narratives evolve both naturally, as current events unfold, and deliberately, as malicious actors seek to evade platform content filters. Network mapping circumvents these challenges by surfacing accounts based solely on their associations: who they tag, and who tags them. Because popularity on Instagram depends heavily on re-posts and shoutouts from like-minded accounts, this user-oriented method provides a bird’s-eye view of where anti-vaccine user groups digitally congregate, uncovering new narratives rather than presupposing them.

Mapping Process

To build a network map of anti-vaxx content, we pulled the post data and fed it through a Python script that extracts the “relationships” inherent in the posts: each instance of one account tagging another. These “edges” provide the basis of the following network map and help determine which accounts are closely enough associated to form visible clusters.
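Before turning to the map itself, here is a rough sketch of that extraction step. The post format and field names are assumptions for illustration, not the actual script:

```python
import re
from collections import Counter

# Instagram-style handles: letters, digits, underscores, and periods.
# Handles cannot end in a period, so trailing periods are stripped.
MENTION_RE = re.compile(r"@([A-Za-z0-9_.]+)")

def extract_edges(posts):
    """Yield (tagging_account, tagged_account) pairs from post captions."""
    for post in posts:
        author = post["account"].lower()
        for handle in MENTION_RE.findall(post.get("caption", "")):
            tagged = handle.lower().rstrip(".")
            if tagged and tagged != author:  # ignore self-tags and empty matches
                yield author, tagged

# Toy input in the assumed format: one dict per post.
posts = [
    {"account": "wellness_example", "caption": "Repost from @freedom.example, say no to mandates!"},
    {"account": "freedom.example", "caption": "Shoutout to @wellness_example and @rally_example"},
]

edge_weights = Counter(extract_edges(posts))  # repeated tags increase edge weight
print(edge_weights)
```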

This map gives us some visual sense of how different accounts cluster together, but we can also employ mathematical tools for uncovering and labeling groupings. Using a modularity algorithm, we were able to split up the map into clusters, denoted by a variety of colors. The algorithm decides which groups of accounts are closely enough associated with one another — and sufficiently distinct from other groupings — to constitute their own cluster, in a technique broadly known as “community detection.”
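For readers who want to reproduce the clustering step, the sketch below uses networkx’s built-in greedy modularity routine on a toy graph. It is a stand-in for whichever modularity algorithm was actually applied, and the accounts shown are illustrative only.

```python
import networkx as nx
from networkx.algorithms import community

# Toy weighted edges (tagger, tagged, number of tags); not real accounts.
weighted_edges = [
    ("news_outlet_a", "reporter_b", 3),
    ("news_outlet_a", "reporter_c", 2),
    ("wellness_x", "freedom_y", 5),
    ("freedom_y", "rally_z", 4),
]

G = nx.Graph()
G.add_weighted_edges_from(weighted_edges)

# Greedy modularity maximization: one flavor of "community detection".
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(clusters):
    print(f"Cluster {i}: {sorted(members)}")
```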

Having these labeled groupings meant we could comb through the data cluster by cluster, quickly identifying which clusters and accounts were spreading anti-vaccination disinformation and which weren’t. For example, one cluster consisted mostly of local news outlets and reporters in New York, and another was mainly large companies and organizations like the NBA and UNICEF promoting vaccination efforts. But inevitably, we came across several clusters in which most or all accounts were sharing anti-vaccine content. Below is the same map with those anti-vaccine accounts highlighted in red.

Network mapping not only allowed us to efficiently uncover serial anti-vaccine offenders, but also to understand the ways in which they interact and the particular narratives and techniques they use to have the biggest impact.
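As a purely hypothetical sketch of that cluster-by-cluster review, the snippet below groups posts under their cluster labels and samples each group for manual coding; the account names, captions, and data structures are toy placeholders.

```python
from collections import defaultdict

# Assumed inputs: an account-to-cluster mapping from the community-detection
# step, and post dicts like those in the edge-extraction sketch. All toy data.
account_to_cluster = {"news_outlet_a": 0, "wellness_x": 1, "freedom_y": 1}
posts = [
    {"account": "news_outlet_a", "caption": "Vaccine clinic opens downtown this weekend"},
    {"account": "wellness_x", "caption": "They will never tell you what is really in it"},
]

posts_by_cluster = defaultdict(list)
for post in posts:
    cluster_id = account_to_cluster.get(post["account"].lower())
    if cluster_id is not None:
        posts_by_cluster[cluster_id].append(post)

# Print a small sample from each cluster for manual qualitative coding.
for cluster_id, sample in sorted(posts_by_cluster.items()):
    print(f"--- Cluster {cluster_id} ---")
    for post in sample[:5]:
        print(f"@{post['account']}: {post['caption']}")
```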

Accounts and Content

Using these methods, we identified 61 accounts posting anti-vaccination content. These accounts boast a cumulative 2.4 million followers and drew 490,000 likes on vaccine-related posts in just a two-week period.

On the mildest end of the spectrum, a number of accounts in the US and Australia advocate strong opposition to mandatory vaccination and organize physical protests against it. Besides flouting local guidance on mask-wearing, these protests routinely feature blatantly anti-vaccine and conspiratorial propaganda, which the accounts then share in photo and video form. These accounts often argue that they are not anti-vaccine and are simply making a political statement in support of health freedom, but their content consistently runs afoul of Facebook’s stated content policies on vaccine disinformation.

The most frequent way these accounts spread anti-vaccine narratives is by isolating, exaggerating, and even fabricating instances of adverse reactions to vaccines. One highly popular tactic is to track down a reported instance in which someone died shortly after receiving the vaccine and present it as incontrovertible proof that the vaccine is highly lethal. In reality, the CDC’s records show a death rate of 0.0015% following vaccination, and none of those deaths has been definitively linked to the vaccine itself. These figures dispel the pernicious lie, also popular on Instagram, that the vaccine is riskier than contracting COVID-19, a disease that has killed over 1 out of every 100 people who have contracted it in the United States.

Further, these tactics involve incessant discourse over recently deceased individuals who never asked to be anti-vaccine martyrs and whose grieving families may want nothing to do with these online movements. Oftentimes those families become victims of targeted online abuse and harassment, creating separate and more clearly defined terms-of-service violations.

Accounts in this anti-vaccination network frequently provide a platform to individuals and organizations whose own accounts have been removed from Instagram under Facebook’s health misinformation policies. For example, two days after the platform removed the account of celebrity chef Pete Evans for misinforming the public about COVID-19, a post promoting his speaking slot at a rally against mandatory vaccination earned 3,000 likes. Accounts continue to repost and feature prominent anti-vaxxer Robert F. Kennedy Jr., who boasted the largest anti-vaccine account on Instagram before being banned, and whose organization still enjoys a popular following on the platform.

Conclusion

Facebook has said that its actions against health disinformation will take time and care to implement, and its more aggressive steps in recent weeks are commendable. However, many of the accounts surfaced through our analysis have long flouted Instagram’s content policies and have signaled no intention of stopping. Further, accounts that spread disinformation are often adept at tweaking their content to fall just within the bounds of platform policies — or, more accurately, just within the bounds of what they expect will not be enforced.

The goal of this exercise is not to police precisely what does or doesn’t qualify as anti-vaccine disinformation, either objectively or in terms of Facebook’s policies. Rather, it is to clarify the ways disinformation evolves and understand how to effectively monitor and combat it. Researchers should be using their entire toolkit of techniques to uncover and analyze health disinformation across the social web. Health officials and policymakers should work directly with those researchers to understand which anti-vaccine narratives are resonating the most and formulate their responses accordingly.

Data from CrowdTangle, a public insights tool owned and operated by Facebook.