Platforms make progress against harmful content, but there’s still work to be done

The Global Alliance for Responsible Media put out its first brand safety measurement report.

Platforms are taking more action against harmful content, according to a report by the Global Alliance for Responsible Media released on Tuesday.

The report, the first of its kind since GARM put out definitions for harmful content last fall, tracks progress in rooting out toxic content based on aggregated data from seven platforms: Facebook, Instagram, Twitter, YouTube, TikTok, Snap and Pinterest. 

Twitch, which joined GARM in March, was not included in the report.

The report was based on questions developed by GARM to evaluate four key components of brand safety: How safe is the platform for consumers? How safe is the platform for advertisers? How effective is the platform enforcing its safety policy? And how responsive is the platform at correcting mistakes? 

GARM found that more than 80% of the 3.3 billion pieces of content removed from the platforms from Q3 to Q4 fell into three categories: spam; adult and explicit content; and hate speech and acts of aggression.

The platforms most heavily policed hate speech and acts of aggression, a category that includes bullying and harassment as well as the promotion of restricted goods such as firearms.

Facebook, for example, reduced the prevalence of hate speech on its platforms by 20% from Q3 to Q4, while the rate at which it proactively removed hateful content increased to 49% from 26%, due to improvements to its AI detection technology. YouTube increased the amount of harmful content it removed by 40% over the same period, while TikTok removed 36% more content that violated its minor-safety policies.

Overall, the platforms removed 14.9 million harmful accounts last year, 30% more than the year prior.

“Platforms have been investing more time, people and resources into content moderation, as well as content removal over time,” said Rob Rakowitz, initiative lead for the Global Alliance for Responsible Media and global chief media officer at Mars. “We're seeing a much more structured and rigorous approach to it.” 

Platforms also made progress in reducing the prevalence of harmful content, measured by the number of impressions a given post receives. Interestingly, GARM found that Facebook “significantly” reduced the prevalence of harmful content through changes to its News Feed algorithm, but Instagram, which Facebook owns, declined to report on many prevalence metrics, as did Twitter. Pinterest and Snapchat provided prevalence data only for removed content, while the prevalence of minor-safety violations on TikTok increased after the platform rolled out policy updates that improved its identification of violations.

But prevalence is a metric brands will be focused on in 2021, Rakowitz said. 

“[Prevalence] fundamentally goes back to the question of how safe the platform is for consumers,” he explained. “Digital communications in general should be positive for society, brands and consumers.”

Despite the progress, there’s still work to be done. In addition to quicker responses from the platforms, brands want to know how each platform is prepared for issues such as data breaches.

GARM will also be looking for continual improvement in these numbers, “something that we're going to keep an eye on over the next iteration of the report,” Rakowitz said.

“There's a definite willingness to share more data, more regularly,” he said. “[But] marketers are going [to] want to make sure they actually have control over where their ads show up.”

This story first appeared on campaignlive.com.
