While digital platforms including YouTube and Facebook have made “significant” progress in tackling harmful content and ensuring safety for both users and brands, Diageo’s global media director Isabel Massey has warned that the work is “only just beginning”.

The Global Alliance for Responsible Media (GARM) this week launched its first report tracking the brand safety performance of digital platforms and setting a benchmark for progress.

GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA). It was launched in June 2019 with the goal of effectively eliminating harmful content in ad-supported digital media and has since grown to more than 100 member companies, including agencies, platforms and major brands. Diageo, Mars, Mondelez, PepsiCo, P&G and Unilever are counted among its members.

For Massey, the report is a “notable milestone” for the industry, as it enables advertisers to both understand the scale of the issue and then to “get granular” about the problems that need to be solved. However, it does not mean marketers can afford to take their foot off the pedal.

“GARM has been effective to date due to its laser focus on reducing harmful online content and preventing its monetisation,” Massey said, speaking during the WFA’s Global Marketer Week 2021.

“So as an industry we have to stay focused on this. I ask everyone to stay focused on this as the challenge is not going away.”

GARM’s report provides a single point of access for marketers looking to analyse brand safety across platforms and track their progress on tackling harmful content.

The initial document aggregates self-reported data from Facebook, Instagram, Pinterest, Snap, TikTok, Twitter and YouTube – data which hasn’t previously been accessible to marketers. Streaming platform Twitch, which joined the alliance in March, will be included in the next report later this year.

Brand safety, digital safety, is more than just what we’re doing today. It’s vital to the sustainability of our marketing industry of tomorrow.

Isabel Massey, Diageo

Of the 3.3 billion pieces of content removed across the platforms over the first report’s nine-month period, over 80% were either spam, adult and explicit content, or hate speech. According to GARM, “significant progress” has already been made by YouTube, Twitter and Facebook in tackling the problem.

There is still progress to be made on the report itself, Massey said. The data on harmful content, and on the actions taken to prevent it, is currently self-reported by the platforms; it will need to be independently audited before advertisers can talk with "greater confidence" about the progress they see.

Massey also hopes that advertisers will together dive into the data and identify collective actions that would accelerate progress across multiple platforms.

Over the course of the pandemic, with nations around the world put into lockdowns to restrict the spread of Covid-19, many marketers have had to pivot their advertising strategies and incorporate greater use of digital channels, both as a cost saving measure and to reach consumers at home.

Massey noted that such a rapid pivot and reliance on digital can only be achieved when advertisers have confidence that the online media environment is both effective and safe.

“So please, everyone remain focused on digital safety as it’s going to be core to us delivering effective advertising going forward,” she continued.  “And just remember that brand safety, digital safety, is more than just what we’re doing today. It’s vital to the sustainability of our marketing industry of tomorrow.”

My ask would be that brand marketeers put responsible media right at the heart of their investment decisions. It’s not a nice to have. It’s a must have.

Jane Wakely, Mars

Also speaking at the conference, Mars CMO Jane Wakely added that the imperative extends beyond brand safety and into “social safety”.

“What’s at stake is that we need to play our part in holding no traction for hateful, harmful content [and] misinformation. Because not only does brand safety depend on it, society’s safety does as well. So, my ask would be that brand marketeers put responsible media right at the heart of your investment decisions. It’s not a nice to have. It’s a must have,” she said.

The inception of GARM came in response to multiple brand safety scandals across the digital platforms. The issue first came to mainstream attention in 2017, when a Sunday Times investigation found that household brands were appearing next to unsavoury or illegal content posted by groups including terrorists, white supremacists and pornographers.

Since then, YouTube and Facebook in particular have been regularly embroiled in similar scandals, with YouTube called out later that year for serving ads against videos featuring child abuse and Instagram lambasted in 2019 for hosting self-harm content.

In July 2020, more than 90 major advertisers, including Coca-Cola, Starbucks and Unilever, joined a Facebook ad boycott as part of the ‘Stop Hate for Profit’ campaign, which called on the social network to implement stricter measures around hateful and racist content on its platforms.
