Facebook tests topic exclusions for brands

The brand safety update will allow advertisers to block content associated with broad topics such as “crime and tragedy” or “news and politics.”

Photo credit: Getty Images

Facebook is testing an update to its brand safety controls that would let advertisers block their ads from running against News Feed content on topics they don’t want to be associated with.

The platform is testing these new topic exclusion controls with a small group of advertisers, which it declined to name.

In a blog post, Facebook gave examples of how advertisers could use the new tool to block broad topics such as “crime and tragedy,” “news and politics” and “social issues.” The company said it will take most of the year to test, learn and develop the tool.

The solution builds on Facebook’s work with the Global Alliance for Responsible Media. Facebook’s other brand safety commitments include removing harmful content, maintaining a high-quality ecosystem of publishers and advertisers and collaborating with the industry on creating additional controls.

“Providing advertisers topic exclusion tools to control the content their ads appear next to is incredibly important work for us, and to our commitment to the industry via GARM,” said Carolyn Everson, VP of global marketing solutions at Facebook, in a statement. “With privacy at the center of the work, we’re starting to develop and test for a control that will apply to News Feed. It will take time, but it’s the right work to do.”

Brand safety is a huge issue for advertisers, which for years have railed against, and even boycotted, social media platforms including Facebook, YouTube and Twitter for their inability to control hate speech and misinformation.

"This brings Facebook one step closer to full alignment with the GARM/4A's Brand Safety Suitability Framework," said Joe Barone, managing partner for brand safety in the Americas at GroupM. "We look forward to further details."

But keyword blocking has been detrimental to publishers, who often find perfectly suitable content caught in blunt filters that can’t distinguish article context or nuances between words.

Incorrect keyword blocking cost publishers $2.8 billion in the U.S. in 2019, and that was before advertisers started filling their blocklists with words associated with COVID-19 or the Black Lives Matter movement.

It’s not clear how Facebook will determine which content falls under these broad topic areas, but publishers are likely to be impacted.

Facebook, for its part, has been financially unaffected by the pandemic, last summer’s ad boycott and ongoing issues with hate speech and misinformation on the platform. On Wednesday, the company reported 33% revenue growth and 44% profit growth for Q3 as its user base grew to 3.3 billion people.

This story first appeared on campaignlive.com. 
