Malicious actors peddle disinformation for myriad reasons. They may be highly organised nation states motivated by geopolitical aims, private marketing companies acting on behalf of political or commercial organisations, or ad hoc communities of like-minded individuals united by a shared ideology. But GDI's founding thesis is that most disinformation on the web is financially motivated, a product of the attention-driven business models that dominate today's internet.
This is where GDI focuses its efforts. To reduce disinformation, we need to remove the financial incentive to create it. Brands unwittingly provide an estimated quarter of a billion dollars annually to disinformation websites through online advertisements placed on those sites. GDI uses both expert human review and artificial intelligence to assess disinformation risk across the open web. We then provide these risk ratings to brands and advertising technology partners, giving them a trusted and neutral source of data with which to direct their advertising spend.
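To make the idea of combining automated and human assessment concrete, here is a minimal Python sketch of how such a pipeline could work in principle. It is not GDI's actual methodology: the class, function, weighting and risk bands (SiteAssessment, risk_rating, human_weight, the 0.33/0.66 thresholds) are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration only: GDI's actual features, weights and
# thresholds are not described in this document; everything here is invented.

@dataclass
class SiteAssessment:
    domain: str
    model_risk: float                      # automated classifier output in [0, 1]
    reviewer_risk: Optional[float] = None  # expert human score in [0, 1], if reviewed

def risk_rating(a: SiteAssessment, human_weight: float = 0.7) -> str:
    """Blend automated and human scores into a coarse risk band.

    When a human review exists, weight it more heavily than the model;
    otherwise fall back to the model score alone.
    """
    if a.reviewer_risk is not None:
        score = human_weight * a.reviewer_risk + (1 - human_weight) * a.model_risk
    else:
        score = a.model_risk
    if score >= 0.66:
        return "high risk"
    if score >= 0.33:
        return "medium risk"
    return "low risk"

# Example: a site the classifier flags strongly but a reviewer rates moderately.
print(risk_rating(SiteAssessment("example.com", model_risk=0.9, reviewer_risk=0.4)))
# -> "medium risk" (0.7 * 0.4 + 0.3 * 0.9 = 0.55)
```

Whatever the real mechanics, the point of such a design is that ratings, rather than raw content, are what get handed to brands and ad-tech partners to act on.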
Identifying disinformation is a complex and nuanced process that goes beyond fact-checking. Disinformation, as we use the term, does not denote information about which reasonable parties may disagree, such as differing political views. Instead, we use it to mean deliberately misleading information that is knowingly spread, or the omission of key facts in service of a particular narrative.
GDI views disinformation through the lens of adversarial narrative conflict. Adversarial narratives share common characteristics:
They have the intent to mislead;
They are financially or geopolitically motivated;
They aim to foster long-term social, political or economic conflict;
They create a risk of harm to at-risk individuals, groups or institutions.
“At-risk groups” include immigrants, protected classes such as women, persecuted minorities, people of colour, the LGBTQ+ community, and children. “Institutions” extends beyond formal bodies to include the current scientific or medical consensus on topics such as climate change or vaccines, as well as democratic processes such as voting laws and the judicial system. The harm caused by disinformation is wide-ranging, from financial damage to violence, illness or even death.
Content that promotes these disinformation narratives also poses a potential risk to brands. Advertisers have a right to choose where their adverts appear and what sort of content their ad dollars support. GDI's assessments of news sources enable advertisers and ad technology companies to minimise this risk.
The harms from adversarial narratives are increasingly evident across the world. From burgeoning hate speech and harassment to conspiracy theories and extremism, individuals are harmed emotionally, financially and physically as a result of toxic online content. At a societal level, growing division and distrust, of one another and of the institutions that underpin our societies, are eroding democratic progress, giving populists and authoritarians ever greater visibility and power at the expense of competent and independent voices. Nothing less than civilisation's progress since the Enlightenment is at stake.
The current state of our online discourse is creating huge brand risk for the advertisers whose ad spend pays for much of the internet. With little visibility into where adverts bought on the open programmatic web end up, advertisers are limited in their ability to stop their brands from appearing next to toxic content. Current brand-safety practice means that much content that could endanger a brand is still monetised, while quality journalism goes unfunded because of blunt, automated blocking tools. Advertisers have a right to transparency over where their advertising dollars end up, to ensure they are not inadvertently funding the divisive content causing such harm around the world.