Publishers are getting ensnared in the filters used by many advertisers to combat the YouTube brand safety crisis.
Earlier this year, many brands found their ads running next to offensive content on the video platform, prompting marketers and their agencies to enlist third-party monitoring. Now publishers as a whole are feeling the effects.
The filters sometimes flag content correctly, but other times, they create false positives or flag content that falls into a gray area of brand safety. And the methodologies used by third-party measurement providers vary, leading some brand safety filters to generate more false positives than others.
AdExchanger spoke to publishers that have seen their content run afoul of filters designed to catch fake news and racy, violent, gambling or political content – and what, if anything, they’ve done about those tags.
For these publishers, tracking the impact of these filters on a page-by-page basis is time-consuming and often opaque, but the bottom-line impact is plain.
Brand safety filters, for example, can cause private marketplace campaigns to underdeliver and open marketplace CPMs to fall.
Penalizing High-Risk Content
Underdelivering private marketplaces first alerted Ranker to its brand safety issues. The entertainment site lets users vote on topics and lists, including some that are sexual or violent in nature, such as the hottest celebrities or the deadliest crimes.
In early Q4, the publisher discovered that Integral Ad Science (IAS) was blocking all content on its site, even though only a minority of it fell into the adult or violence categories. When Ranker dug deeper, it discovered that many brands had pulled spend months earlier in Q2.
“These brand safety vendors are doing the best they can, but in using broad strokes to solve the problem, they are creating collateral damage,” said Ranker CEO Clark Benson.
Ranker’s problem stems from how IAS scores content. If articles about the cutest kittens and puppies are placed on the same domain as articles about hot women or the worst crimes, the kitten and puppy content will see its “adult” score inch up.
“We don’t block entire domains, but because we penalize high-risk things more than low-risk things that benefit you, a site that posts one pornographic article a month will get penalized a lot,” explained Dave Marquard, VP of product at Integral Ad Science.
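The domain-level penalty Marquard describes can be illustrated with a simplified scoring sketch. The weights and category names below are illustrative assumptions, not IAS's actual model; the point is only that weighting high-risk pages more heavily makes a handful of flagged articles drag down an entire domain's score.

```python
# Hypothetical sketch of domain-level brand safety scoring, in which a
# small minority of high-risk pages outweighs many safe ones.
# The weights below are illustrative assumptions, not a vendor's model.

HIGH_RISK_WEIGHT = 5.0  # flagged pages are penalized more heavily
LOW_RISK_WEIGHT = 1.0   # safe pages count for less in the other direction

def domain_risk_score(page_flags):
    """page_flags: list of booleans, True if a page was flagged high-risk.

    Returns the share of the domain's weighted score that comes from
    flagged pages (0.0 = fully safe, 1.0 = fully flagged).
    """
    flagged = sum(1 for f in page_flags if f)
    safe = len(page_flags) - flagged
    total = flagged * HIGH_RISK_WEIGHT + safe * LOW_RISK_WEIGHT
    if total == 0:
        return 0.0
    return (flagged * HIGH_RISK_WEIGHT) / total

# One flagged page out of 30 contributes about 15% of the weighted
# score, far above its 3% share of the domain's pages.
score = domain_risk_score([True] + [False] * 29)
```

Under this kind of weighting, the "one pornographic article a month" site Marquard mentions looks far riskier than its raw page counts suggest, which is exactly the collateral damage Ranker describes.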
Ranker also learned from vendor reports that some of its content was miscategorized. An article about who might die next on “Game of Thrones,” for example, could be incorrectly flagged as violent content, and an actress named Fanny could likewise be flagged for an offensive keyword, noted Evan Krauss, Ranker’s SVP of global sales.
To resolve the issue and boost its brand safety score, Ranker took extreme measures: It demonetized many articles with the worst brand safety scores. That approach was more tenable than another remedy that IAS suggests: isolating all articles that will fail brand safety screens under a sub-URL, such as ranker.com/violentcontent. That could jeopardize the site’s SEO rankings and hurt revenue even more.
A Gambling-Lite Gray Area
The brand safety thorn for PCH/Media relates to how the filters define gambling. PCH visitors play games for a chance to enter its sweepstakes – the digital application of its famous Prize Patrol. Because PCH allows users to play blackjack and slots, brands that block “gambling” in their brand safety filters can’t buy ad inventory on those pages.
“In our view, it’s completely brand-safe,” said Chris Moore, director of business development at PCH/Media. “There is no money exchanged on our site. Where we run into problems is around the contextual providers and brand safety companies of the world … who want to put a blanket over an entire category.”
One large demand partner added a brand safety block in Q3 that cost PCH/Media $20,000 a day. PCH/Media made its case to the partner and finally saw some spend start to return this week – but that doesn’t offset months of lost revenue.
Because PCH’s content falls in a gambling-lite gray area, it’s also encountered scenarios where brands that it’s dealt with for years suddenly can’t buy on some parts of its site. The brands say they are OK with PCH’s content, Moore said, so they place direct and private marketplace buys. But they can’t or won’t override their brand safety filters.
Political Content Flags
Intermarkets, which represents right-leaning politics and news sites such as The Drudge Report, also sees its content falling into gray areas around brand safety. The publisher has had to petition for its removal from fake news lists, and it maintains close relationships with ad tech partners so it can troubleshoot and resolve issues quickly.
Given its ability to successfully overturn site blocks after being flagged for fake news, the publisher is more troubled by advertisers that block all news or political content.
“Some advertisers are going from saying they don’t want to be around political content to [avoiding] anything they don’t feel is shiny, happy and perfect,” said Erik Requidan, VP of sales and programmatic strategy at Intermarkets. “That’s a big division. And it’s hard to create a brand or have an environment that’s sterile, clean and happy when you’re breaking news.”
Plus, filters don’t take into account how a controversial topic like white nationalism or Black Lives Matter is covered; they ding publishers simply for talking about it.
“The stuff that makes [an article] newsworthy is what trips the filters,” Requidan said.
A Tech Learning Curve
The looming question is whether this problem of overly stringent brand safety filters will get worse or taper off.
Will advertisers loosen or remove their filters in pursuit of scale and lower CPMs? Or will they keep the filters long after memories fade of ads running next to terrorism videos?
“Between fraud, viewability and brand safety, it’s contextual accuracy that keeps the CEOs of agencies and brands up at night the most,” said Rob Rasko, CEO of consultancy 614 Group. “That’s the one where the phone call comes through, and that makes it the scariest.”
It’s possible that publishers’ woes will decline as filtering technology improves. “There are not a lot of companies right now with the ability to get that taxonomy correct, but you should expect there to be more,” Rasko predicted.
A provider like Grapeshot scans each page individually for keywords that might violate standard or custom brand safety settings. “We look at a word like ‘shot’ in context,” said Daniel Oakins, Grapeshot’s global director of publishing. That approach lowers false positive rates. Grapeshot wouldn’t use the presence of non-brand-safe content on the same domain to downgrade a page except in rare instances, such as a porn URL or a publisher that doesn’t allow its web crawlers, Oakins said.
In contrast, Integral Ad Science’s algorithm accounts for adjacent URLs in addition to keywords when scoring sites, which can make bland content appear unsafe.
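Page-level, context-aware keyword scanning of the kind Oakins describes can be sketched roughly as follows. The keyword lists and context window here are illustrative assumptions, not Grapeshot's actual taxonomy; the idea is that a risky term is flagged only when risky neighbors surround it.

```python
import re

# Illustrative page-level scan: a term like "shot" is flagged only when
# it appears near other risky terms, reducing false positives for
# harmless uses ("shot on location", "screenshot").
# Both term lists below are hypothetical examples.
RISKY_TERMS = {"shot", "gun", "attack"}
VIOLENT_CONTEXT = {"police", "killed", "weapon", "victim"}

def flag_page(text, window=5):
    """Return True if a risky term appears within `window` words of a
    violent-context term; False for risky terms in benign contexts."""
    words = re.findall(r"[a-z]+", text.lower())
    for i, word in enumerate(words):
        if word in RISKY_TERMS:
            context = set(words[max(0, i - window):i + window + 1])
            if context & VIOLENT_CONTEXT:
                return True  # risky term in a risky context
    return False

flag_page("The film was shot on location in Rome")         # → False
flag_page("Police say the victim was shot near the park")  # → True
```

A purely keyword-based filter would block both sentences above; the context check is what separates news about a shooting from a film review, which is the distinction publishers like Intermarkets say the blunter filters miss.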
Prices for brand-safe ad inventory are rising, Oakins said: “I think CPMs have gone up to make up for the lack of available inventory to buy. People still want to buy, so quality, safe content will rise in value.”
And that flight to brand safety may benefit direct-response advertisers, Ranker’s Benson predicted.
“If direct-response advertisers don’t care about being on a page about Trump or death because they are looking for conversion metrics,” he said, “this should give them more competitive CPMs.”