This week at Advertising Week Europe, some of the world’s largest advertisers gathered to discuss what’s new in the world of advertising. It was one of the first major industry gatherings at this scale since the riots at the U.S. Capitol in early January 2021.
That event changed things for many companies, as the real-world consequences of letting online conspiracies and disinformation rage unchecked filled global headlines for days on end. Harmful and toxic content that sought to undermine the credibility of US elections had inspired thousands to launch a violent insurrection that left several people dead.
Companies, understandably, immediately reacted to this turmoil.
In the following weeks, the Global Disinformation Index (GDI) saw an increase in companies wanting to make sure their brands were nowhere near the sort of adversarial content that continued to spread and create division.
The challenge companies face is that there are still too few protections for their brands online. While brand safety technology does provide some solutions, it still struggles to keep up with the ever-shifting landscape of harmful content online.
The solutions to this problem exist, but they require leadership to ensure a whole-of-industry response. Otherwise, while one company may be able to ensure its customers never end up next to the “bad stuff,” another company may inadvertently step in to monetise the content instead.
Our special publication this week shows how widespread the issue is, and what can be done to change it. Harmful content is still being inadvertently monetised by some of the biggest brands in the world, including Amazon, Coca-Cola, Microsoft and Spotify, all of which are supporting Advertising Week Europe now. We know these brands do not want to fund this content. In fact, many of them have been very public about their concerns regarding disinformation and its impacts on our societies. We all deserve an ad tech system that works better. We must defund disinformation now.
The GDI aims to support brands, advertisers and ad tech companies with the information they need to make such decisions. This process should be seen as part of their corporate responsibility agendas. In doing so, we can clean up our information ecosystems and make the internet and our societies a safer place for all of us.
Industry initiatives like GARM are a good first step toward setting common definitions for harmful content categories such as hate speech and disinformation.
At the same time, governments are taking steps to address ad-funded disinformation. For example, the Digital Services Act (DSA) in the EU is looking at ad transparency commitments. Article 36 states that the Commission will facilitate and encourage the creation of codes of conduct for online advertising and will ensure that those codes meet the online ad transparency requirements specified in Articles 24 and 30. GDI welcomes this critical step.
Companies are also looking for solutions. Many brands have introduced advertising and publishing policies to restrict what ads can run on their networks. But these policies are often inconsistent, neither standardised nor aligned, and poorly enforced. For example, GDI found that many ad tech companies simply had no policies covering COVID-19 disinformation.
No one company can develop and maintain the expertise, standards and implementation protocols to tackle all of the harmful content online. What can make a monumental difference, however, is strong company leadership and the commitment to work with a whole industry approach to defund disinformation.
We know that advertisers and ad tech companies are among the victims, not ultimately the culprits, of disinformation. GDI looks forward to building healthier advertising and information ecosystems with them.