February 25, 2021

Lawful but Awful: How Disinformation Highlights the Limitations of the Brand Safety Ecosystem

Recent events such as the U.S. Capitol riot show the real-world harms of online disinformation. Brands are working hard to ensure they are not inadvertently supporting harmful content. But are current “brand safety” measures enough?

Brand safety companies offer “brand safe” environments by using a combination of tools to ensure that ads do not end up beside content that conflicts with a brand’s image or goals. This includes protecting brands from appearing beside illegal content such as child abuse, drugs, weapons, terrorism and copyright infringement. But brand safety also extends to protecting brands from legal but harmful content, such as disinformation, sensitive social debates and hate speech. Some have come to call it the “lawful but awful” category.

Despite increasingly sophisticated tools, the GDI has seen how brand safety efforts still cannot always stop ads from appearing next to harmful content (a view also supported by recent research). When it comes to COVID-19 conspiracy content, the GDI estimates that nearly 500 English-language disinformation sites have generated at least US$25 million in ad revenues over the past year of the pandemic. As the GDI has documented, most recently in EU countries, brands from American Express to Vimeo are unwittingly funding this “lawful but awful” content on COVID-19 conspiracies. Often these brands have no idea their company and products are being featured. The complexity and opaqueness of the programmatic ad supply chain creates blind spots for brands. The result: a brand appears next to harmful content that is out of sync with its values and corporate agenda.

Brand safety companies serve as a critical frontline of defence against such exposure. Yet in the last few years, the range of harmful online content has grown too broad and nuanced for them to address alone. Accurately classifying and risk-rating all of the different types of potentially harmful content requires a whole-of-industry approach that brings together actors with different areas of niche expertise to support brand safety solutions. Fortunately, industry-led approaches are growing, such as the framework recently developed by the Global Alliance for Responsible Media (GARM) (for brands, advertisers and ad tech) and the newly launched Digital Trust & Safety Partnership (for platforms).

Still, the GDI believes that the varied nature of “lawful but awful” content means it is slipping through the cracks of current solutions and requires more attention from content platforms, ad tech and advertisers. We need to look at how content is framed and how often it is repeated across a single domain or stream. This approach is based on the GDI’s work to track emerging adversarial narratives, which has shown that almost any content (regardless of topic) can be weaponised through framing and repetition.

For this reason, the GDI calls on the brand safety community to expand its framework for what is flagged to brands as harmful content to include a broader set of considerations:

  1. Content: The material itself on a website (across multiple stories).
  2. Framing: How such content is presented (often intentionally) to give it a specific meaning.
  3. Frequency: The number of times a narrative is repeated.
  4. Risk of harm: Whether the individuals or groups targeted by the content are marginalised in a way that makes real-world harm more likely.

The GDI believes that brand safety efforts can be strengthened by considering all four of these elements.

For instance, take this example from ZeroHedge, a financial news site that others have noted as carrying COVID-19 disinformation. The GDI flagged this ZeroHedge article as part of its work to track ad-funded COVID-19 vaccination disinformation. On its face, the story offers investment advice and market analysis (content). This alone likely would not trigger a brand safety concern, and it did not: a Financial Times ad was served on the page.

Yet read further and the article clearly contains false information about vaccines (framing). Looking across the site, this anti-vaccination framing recurs as a pattern and theme (frequency). The repetition feeds the creation of an adversarial, conspiratorial narrative that seeks to spark resistance to the official public health response. Ultimately this narrative carries a high risk of generating offline harm (risk of harm) by undermining public trust in government efforts to roll out vaccines, respond to the pandemic and protect lives.

This range of “lawful but awful” content suggests there is much to gain from brand safety companies working with expert third parties to assess and flag risky sites across a range of niche issues: terrorism, hate speech, online piracy, incitement of violence, explicit content and disinformation. These topical experts can serve as trusted and neutral arbiters on “grey area” content. Moreover, they can assume some of the risks associated with labelling a site’s content and source as unsuitable for brands.

Such an approach aims to strengthen the brand safety industry and, ultimately, to ensure that brands like the Financial Times are not unwittingly exposed to brand-unsuitable content.

No one company can develop and maintain the expertise needed to accurately assess the ever-expanding range of lawful but awful content online. Brand safety companies need an ecosystem approach, working with experts in the field who can provide the nuanced judgements required to keep brands safe online.