Bad habits are hard to reverse. Just ask anyone who’s tried to make a New Year’s resolution.
But advertisers are increasingly rethinking their programmatic media buying practices in a more sustainable, diversified and purposeful light.
“Aligning your brand values to media investment is the next evolution in purpose advertising,” said Rachel Lowenstein, global managing director for inclusive innovation at WPP-owned Mindshare.
The question, though, is how. Like plans to hit the gym in January, there’s a big difference between saying and doing, especially concerning intelligent brand safety protection.
The safety dance
When it comes to brand safety and brand suitability, marketers often struggle to balance their priorities.
Advertisers understandably seek to protect themselves from fraud, awkward placements and inadvertently funding misinformation, but in doing so rely on overzealous or blunt brand-safety filters that steer them away from news and diverse content.
Well-intentioned industry efforts like the Global Alliance for Responsible Media (GARM) have emerged in recent years to establish global brand safety and suitability standards on the road toward a more sustainable digital media environment. But putting those standards into practice at scale remains a big challenge, said Chris Vargo, CEO and founder of contextual categorization startup Socialcontext, which is backed by academics at the University of Colorado.
For instance, last year, GARM and the IAB Tech Lab worked together to develop a content taxonomy to help third-party verification vendors avoid problematic content without standing in the way of a publisher’s ability to monetize.
Brands that don’t take a more refined approach to brand safety and suitability risk needlessly rejecting content that addresses important social or environmental issues – and diminishing their campaign’s scale (and performance) to boot.
There’s no reason for objective, factual or positive reporting about potentially sensitive subjects to lose out on the opportunity to monetize due to an overly restrictive blocklist.
“Exclusion lists as a main strategy are antiquated, [and] holding companies and partners continue to be urged to refine them or reduce their use as much as possible,” said David Murnick, an advisory board member at the Brand Safety Institute and former EVP of investment media operations and partnerships/head of brand safety at Dentsu.
“If you put broad terms like news, LGBTQ and Black Lives Matter on an exclusion list, you’re going to end up eliminating a lot of good, positive or neutral content being covered in a safe and generally suitable manner,” Murnick said.
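The over-blocking Murnick describes can be seen in a toy sketch of a keyword blocklist. The terms and headlines below are invented for illustration, not drawn from any real exclusion list:

```python
# Hypothetical illustration of how broad blocklist terms knock out
# neutral or positive coverage along with unsuitable content.
BLOCKLIST = {"news", "lgbtq", "black lives matter"}

def is_blocked(headline: str) -> bool:
    """Return True if any blocklist term appears anywhere in the headline."""
    text = headline.lower()
    return any(term in text for term in BLOCKLIST)

headlines = [
    "Local LGBTQ youth center opens new community library",        # positive story
    "Good news: city hits recycling milestone ahead of schedule",  # neutral story
    "Championship parade draws record downtown crowd",             # clearly suitable
]

for h in headlines:
    print(h, "->", "BLOCKED" if is_blocked(h) else "allowed")
```

The first two headlines are safe, even positive, yet a simple substring match against broad terms blocks them anyway; only the third survives. That is the blunt-instrument behavior more nuanced contextual approaches try to replace.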
And some of the content being relegated to the sidelines could even have a positive impact on a brand’s bottom line.
Ads placed adjacent to content about sustainability, recycling and nature preservation – some of which gets bucketed as climate-change-related and is therefore blocked – are associated with a direct increase in purchase intent when compared with content about climate-change denial, according to research published in December from the Brand Safety Institute, Nielsen Innovate and AdVerif.ai, an Israeli startup working to devise tools that counter hate speech and fake news.
With that in mind, brands have to find a workable middle ground that allows them to achieve reach without monetizing junk.
“The real practical challenge is scale,” said Or Levi, CEO and founder of AdVerif.ai. “If you only target sites with positive content, you probably won’t have as much scale, which means you also want to be looking at sites with neutral news content while avoiding places that promote misinformation.”
Socialcontext has a novel approach to help ad buyers identify news stories about social and diversity issues that would usually be over-blocked based on traditional brand safety and suitability methods.
Rather than starting with a list of “problematic” keywords, Socialcontext develops definitions of concepts, such as gender and racial equality, using academic research. Professors in the field are asked to validate the definitions.
Grad students at the University of Colorado take these codified descriptions and label thousands of articles on the open web as either pro-diversity or not. These manually categorized articles then serve as training data for machine learning algorithms. Once the algorithms detect news stories that promote gender or racial equity, for example, advertisers can unblock articles that would otherwise have been excluded by their blocklists.
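The pipeline above – expert-validated definitions, human-labeled articles, then a trained model that flags pro-diversity stories for unblocking – can be sketched with a toy supervised text classifier. This is not Socialcontext’s actual model: the training examples, labels and choice of naive Bayes are all illustrative assumptions.

```python
# Toy sketch of the labeled-data -> classifier -> unblock flow described above.
# Examples, labels and the naive Bayes model are illustrative assumptions only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        total_docs = sum(self.class_counts.values())
        scores = {}
        for label in self.class_counts:
            # log prior + smoothed log likelihood of each token
            score = math.log(self.class_counts[label] / total_docs)
            total = sum(self.word_counts[label].values())
            for tok in tokenize(doc):
                count = self.word_counts[label][tok] + 1
                score += math.log(count / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Hand-labeled "articles" standing in for the grad students' annotations.
docs = [
    "report highlights gender equity gains in local athletics",
    "program expands racial equity scholarships for students",
    "celebrity gossip roundup and rumors",
    "stock market dips on earnings rumors",
]
labels = ["pro-diversity", "pro-diversity", "other", "other"]

model = NaiveBayes().fit(docs, labels)

# An article a keyword blocklist might catch, but the classifier can clear.
article = "new study shows gender equity progress in college athletics"
if model.predict(article) == "pro-diversity":
    print("unblock")
```

The design point is the one in the article: the decision starts from a positive definition of the concept (here, crude word statistics learned from labeled examples) rather than from a list of forbidden keywords, so a story containing a sensitive term can still be identified as suitable.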
“For the right advertiser, sponsoring gender equality content or female athletics doesn’t just help them with their DEI initiatives,” said Socialcontext’s Vargo. “It also helps them reach the right audiences.”
Mindshare works with Socialcontext to gather additional publisher intel for a more nuanced approach to brand safety.
The fact is, despite so much ink spilled pointing to the problem, publishers – especially minority publishers – are still forced to contend with overly aggressive blocklists that lead to inequitable monetization. Publishers in the LGBTQ community, for example, are put at a disadvantage because their content regularly uses terms such as “transgender,” “lesbian” or “bisexual.”
“As part of intentionally investing and being inclusive with your overall media mix, I posit that brand safety needs to further evolve to meet modern problems, and that’s something we’re working on,” said Mindshare’s Lowenstein. “For all the good that brand safety does, there are still challenges in how to best protect human safety.”