September 7, 2022
The Internet and the digital world are rapidly evolving, and the time for policymakers, companies, and citizens to demand change is long overdue. In response to ad tech’s failed attempts at self-regulation, governments and regulatory bodies around the world are developing frameworks to tackle the monetisation of disinformation.
To contextualise these efforts and propose concrete recommendations, the Global Disinformation Index (GDI) analysed current ad tech policies and their enforcement across the policy landscape.
Understanding GDI’s adversarial narrative conflict framework is critical to interpreting the findings in this report, and more importantly to tackling today’s constantly evolving, complex online threat landscape. This landscape features tools and actors that enable abusive and harmful behaviours, which often slip through the gaps of current monetisation and content moderation policies.
Overly simplistic definitions of disinformation rooted in fact-checking and “verifiably false information” are insufficient to enable the demonetisation of harmful content. These definitions also leave gaps that intentionally misleading narratives can exploit, especially when those narratives are crafted from cherry-picked elements of the truth.
Utilising the lens of adversarial narrative conflict — which goes beyond fact-checking or overly simplistic true vs false dichotomies — provides a more comprehensive basis for understanding disinformation tactics and risks.
Based on this framework, GDI tracks more than 20 adversarial narrative topics (such as climate change denial, voter fraud, and antisemitism) and continuously monitors the supply policies of 44 ad tech companies (the companies that provide the software and tools used for the placement, targeting, and delivery of digital advertising).
For this study, GDI’s analysis of the 44 ad tech companies in its database focused on 15 adversarial narrative topics. Our findings include:
Figure 1. Sample of publisher policy coverage on six adversarial narrative topics
GDI's research shows that the supply quality policies ad tech companies have in place are often incomplete and not comprehensive enough to address all types of disinformation. Additionally, these policies are rarely updated to capture new or evolving adversarial narratives, as seen in the case studies within this report.
Figure 2. Google continues monetisation of anti-Ukrainian content on OpIndia.com
Governments, private companies, citizens, and civil society organisations are in the process of creating international norms and best practices for our online space.
The potential to reform the disinformation ecosystem is close at hand, but only if regulations and policies are enforced.
How can these groups achieve this important aim? To combat disinformation and protect our online and offline world, we must create a stronger regulatory regime that includes, but is not limited to, the following:
Enforcement remains the key challenge going forward. The regulatory shift towards new transparency obligations will bring accountability to the ad tech industry, address the opacity associated with online advertising, and bring independent expertise into the assessment of online content. All stakeholders must work together to develop a long-term, industry-wide solution to end the monetisation of harmful disinformation.
For more information on GDI’s recommendations, please download the full report.
GDI has examined the current legislative approaches of a dozen countries to the problem of disinformation. Our study provides an overview and identifies the gaps in these governments’ approaches that need to be addressed.
The Global Disinformation Index (GDI) and the Institute for Strategic Dialogue (ISD) have analysed the digital footprints of 73 US-based hate groups, assessing the extent to which they used 54 online funding mechanisms. The research aims to map the online infrastructure behind hate groups’ financing and fundraising in order to support efforts to defund, and thereby disempower, hate movements in the US. This research was graciously funded by the Knight Foundation.