October 11, 2019

Learning from Cambridge Analytica for the 2020 Elections and Beyond

Watching The Great Hack, a documentary about the now-defunct marketing consultancy Cambridge Analytica, one can’t help but fear that today’s elections are at the mercy of whichever corporation can master the wizardry of psychographic targeting and disinformation operations. Yet the film never establishes whether Cambridge Analytica’s magic was real or merely an elaborate sleight of hand. As a result, it presents a flawed model of the threat disinformation poses to the 2020 U.S. elections and beyond.

The film’s title doesn’t refer to a hack in the traditional sense. Rather, it references the film’s core model of political disinformation, in which Cambridge Analytica used targeted advertisements to psychologically hack the minds of enough swing voters to alter outcomes in elections around the world, including the 2016 U.S. election and the Brexit vote of the same year. This is a compelling narrative that produces a clear villain, but it’s also an overly simplistic model of how behavior change works. Committing to it risks learning the wrong lessons about how to safeguard future elections.

For example, research finds that micro-targeted political advertising is largely ineffective at changing votes. It’s also not clear that Cambridge Analytica’s psychological models were any more predictive of behavior than traditional voter files, according to political communications scholar Dave Karpf, who has called the company “the Theranos of political data”.

The psychological hack model can’t fully explain Cambridge Analytica’s role in recent elections because it’s a modern incarnation of the “magic bullet” theory of propaganda effects. This static, one-way model casts the audience of disinformation as passive receivers, easily manipulated once exposed to the right packets of malicious content. It also views disinformation as a finished product that reaches its audience exactly as crafted, producing exactly the effects its creator intended.

This is not to say that disinformation operations can’t influence politics, but the scope of the psychological hack framing is too narrow to capture any meaningful effect. Attempts to identify a noticeable behavioral change from a single ad, or even a campaign of ads, on the scale necessary to swing an election will come up empty. That search fixates on vote counts and misses the social and cultural environment in which information is collectively produced, disseminated, interpreted, and remixed.

The hack model is so embedded in popular accounts of what happened in 2016 that it can be difficult to consider alternative framings, but researchers have proposed better approaches.

Kate Starbird and her co-authors at the University of Washington recently released a paper proposing a participatory model of disinformation. In their model, disinformation from centralised actors like Cambridge Analytica intersects with a broader ecosystem of regular users who are both an audience for disinformation and contributors to its development and spread, making it a “largely emergent and self-sustaining activity.”

Similarly, communications scholar Alice Marwick has argued that a holistic approach to studying the effects of disinformation must ask what the audience does with it: how they interpret it, what they contribute to it, and why they share it. The Great Hack misses this idea of an active audience that does something with disinformation rather than simply being affected by it.

Adopting a participatory model of disinformation is crucial for developing effective responses to the threat it poses. That’s why the GDI approaches disinformation as a networked phenomenon that rests on seeding conflict. We have coined the term “adversarial narratives” to describe these networked blends of truths and falsehoods, crafted over time by a mix of coordinated and uncoordinated actors. As the diagram below shows, the effect of these layered narratives depends on the cultural context of the audience.

[Diagram: model developed by GDI]

Diminishing the harmful effects of adversarial narratives requires systemic solutions that address their root sociotechnical causes. That’s why the GDI is focused on cutting off the funding streams that (often inadvertently) sustain the large ecosystem of disinforming websites.

This is not to say that the producers of The Great Hack were wrong to focus on Cambridge Analytica, but the documentary should have analysed the company’s position within the broader media ecosystem, something it only gestured toward. That systemic understanding will be crucial for combating disinformation in the 2020 U.S. election and in elections around the world.

At the Global Disinformation Index, we believe that disinformation is a systemic, networked process, not just a series of individual ‘hacks’. The hack model has been effective at raising awareness of the threat of political disinformation, but to identify and tackle its deeper causes, we need to move beyond it.