“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Michael Greene, vice president of product strategy at AudienceScience.
The current fervor surrounding brand safety has brought a great deal of finger-pointing as the industry tries to assign blame. Media sellers have received their fair share, as have agencies, with some pundits arguing that deeper diligence by agencies could cure the industry's brand safety ills.
This, quite frankly, is rubbish. No doubt, agencies, as stewards of brands, can set risk tolerances and even pick and choose the exact sites and apps on which ads run, but no amount of agency intervention could have prevented recently publicized brand safety misfortunes.
In most cases, brand risk happens at the page level, not the site level. In today's fast-moving, fragmented content landscape – from breaking news to user-generated content – even the safest of sites can evolve to contain landmines of risk. After all, at this point last year, how many brands would have accepted a media plan that excluded Facebook or YouTube, both sites that combine plenty of safe content with plenty of risk for brands?
This underscores the real issue, which is that human site selection – on its own – is not sufficient. Technology must play a role in brand safety, as it does in all other forms of ad verification, from viewability to fraud to audience demographic measurement. Creating a safer environment means brands must be allowed to assess and control for individual page-level context and quality – not just sites – before their ads run.
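To make the page-level idea concrete, here is a minimal sketch of what a pre-serve, page-level safety check might look like. The names (`RISKY_TERMS`, `is_page_safe`, `should_serve_ad`) and the keyword-blocklist approach are purely illustrative assumptions, not any vendor's actual method; real systems use far richer contextual classification.

```python
# Hypothetical sketch: evaluate the individual page, not just the site,
# before an ad is allowed to serve. All names and logic are illustrative.

RISKY_TERMS = {"violence", "terror", "crash"}  # assumed brand blocklist


def is_page_safe(page_text: str, risk_tolerance: int = 0) -> bool:
    """Return True if the page contains no more risky terms than the brand tolerates."""
    words = {w.strip(".,!?").lower() for w in page_text.split()}
    hits = len(words & RISKY_TERMS)
    return hits <= risk_tolerance


def should_serve_ad(page_text: str) -> bool:
    # A site-level allowlist alone would miss this: the check runs per page,
    # so a "safe" site's risky page is still excluded.
    return is_page_safe(page_text)
```

The point of the sketch is the granularity: the decision is made against the content of the specific page at serve time, which is exactly what human site selection cannot do on its own.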
Despite many press releases to the contrary, the walled gardens continue to operate via models that are less transparent than most premium publishers, even when enabling reporting via accredited third parties. No doubt, being able to access consolidated reporting from the likes of Moat, IAS, comScore, DoubleVerify and Nielsen is a major step forward, but it is not enough on its own.
Marketers need and deserve an even playing field and consistent standards across publishers. True transparency doesn’t just mean viewability or brand safety reporting from your MRC-accredited provider of choice. It also means consistent methodologies for gathering, cleansing, processing, analyzing and reporting on impression-level data. It is by this standard that walled gardens fall short.
In particular, the mechanisms verification providers use to collect data from walled gardens deserve deeper scrutiny. While most publishers allow verification providers to collect data via pixels or in-app SDKs, walled gardens have historically been reluctant to allow tags on their pages or SDKs in their apps, insisting instead on custom integrations with verification providers. And while the walled gardens have completely legitimate reasons to prohibit typical data collection methodologies, including data security and site performance, so do other publishers – who allow them anyway.
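The distinction between the two collection models can be sketched as follows. This is an illustrative simplification under assumed names (`pixel_measurement`, `custom_integration`), not a description of any actual vendor integration.

```python
# Illustrative contrast between the two data collection models.
# All function and field names here are hypothetical.

def pixel_measurement(render_event: dict) -> dict:
    """Open web: the verifier's own tag fires on the page and records
    raw impression data firsthand."""
    return {
        "source": "verifier_tag",
        "url": render_event["page_url"],        # verifier observes the actual page
        "viewable": render_event["in_viewport"],  # verifier measures directly
    }


def custom_integration(publisher_report: dict) -> dict:
    """Walled garden: the verifier receives whatever the publisher
    chooses to pass along. There is no independent observation, so
    accuracy depends entirely on the publisher's feed."""
    return {"source": "publisher_feed", **publisher_report}
```

In the first model the verifier is an independent witness; in the second it is, at best, an auditor of someone else's logs – which is why the questions below about raw data access matter.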
To date, these custom integrations – and their limitations compared to standard data collection methods – are poorly understood.
Are verification companies able to collect any raw data themselves, or do they rely entirely on walled-garden publishers to send them (accurate) raw data? How might (unintentionally) inaccurate data provided by walled gardens skew measurement results? What independent safeguards are in place to catch data inaccuracies and maintain the integrity of measurement? One need look no further than Facebook's recent metrics inaccuracies – five in the past eight months – to see that end-to-end auditing of the entire measurement process is necessary.
To its credit, Google is allowing the MRC to audit its third-party viewability partnerships with IAS, Moat, DoubleVerify and comScore. Facebook appears headed in a similar direction. Yet this should have happened long ago; timelines remain unclear, and other players are stubbornly sticking to their guns, as is the case with Snapchat's new Moat-powered viewability score.
Marketers' concerns over walled-garden brand safety shouldn't go away until they get true transparency. And true transparency requires that verification companies be integrated inside the walled gardens, accessing data and ad impressions the same way they do across the rest of the internet.
If agencies don’t have the tools, but are forced to buy media within these walled gardens to satisfy their brand clients, then issues around brand safety aren’t the agencies’ fault at all. As the old saying goes, “Garbage in, garbage out.” In this case, there is a lingering stench of trash.