
To Make TV Attribution Better, We Need To All Get On The Same Page


“On TV And Video” is a column exploring opportunities and challenges in advanced TV and video.

 Today’s column is written by Jane Clarke, managing director and CEO at the Coalition for Innovative Media Measurement (CIMM).

Attribution – the measurement of an outcome and its assignment to a preceding ad exposure – has revolutionized media planning and buying with its promise of not only assessing the impact of ad spend but optimizing it going forward.
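As a minimal sketch of the underlying mechanic, here is last-touch attribution, the simplest assignment rule, run over hypothetical household-level logs. The field names, the seven-day lookback window and the rule itself are illustrative assumptions, not any vendor's method:

```python
from datetime import datetime, timedelta

# Hypothetical household-level logs; every field name here is invented.
exposures = [
    {"hh": "A", "ts": datetime(2020, 5, 1, 20, 15)},
    {"hh": "B", "ts": datetime(2020, 5, 1, 21, 40)},
]
outcomes = [
    {"hh": "A", "ts": datetime(2020, 5, 2, 9, 5)},  # e.g., a website visit
]

WINDOW = timedelta(days=7)  # lookback window; an assumption, not a standard

def last_touch(exposures, outcomes, window=WINDOW):
    """Credit each outcome to the most recent prior exposure in the window."""
    credited = []
    for o in outcomes:
        prior = [e for e in exposures
                 if e["hh"] == o["hh"] and timedelta(0) <= o["ts"] - e["ts"] <= window]
        if prior:
            credited.append((o["hh"], max(prior, key=lambda e: e["ts"])["ts"]))
    return credited

print(last_touch(exposures, outcomes))  # -> one outcome credited to household A
```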

While there are many different approaches to attribution, with each vendor applying a unique solution, the underlying data definitions that serve as the basis for their analysis should be the same.

And currently they are not – a shortcoming that is inhibiting the overall growth and impact of attribution as a measurement discipline. It’s not that attribution analyses aren’t valuable; it’s that buyers can’t be confident that hidden error isn’t confounding their results.

This reality was made clear in an analysis of TV attribution providers just completed for CIMM and the 4A’s Media Measurement Task Force by Sequent Partners and Janus Strategy and Insights.

The research found that ad occurrence and exposure data are highly inconsistent across providers. The inconsistency stems partly from differences in the underlying data, but primarily from the methodologies used to convert that data into final ad occurrence files and exposure data, including weighting, editing and other data-processing rules.

The situation is analogous to everyone working from their own “set of facts,” defining reality in different ways. That is why, if a variety of providers examined the same campaign, they would in all probability reach very different conclusions about return on ad spend (ROAS).

Data on ad occurrence and frequency varies greatly from provider to provider because there are currently no standards defining the methodology that should govern the processing of this data. Television attribution results will become more transparent, consistent and reliable when providers adopt more stringent media measurement standards.
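A toy example makes the stakes concrete. Suppose two providers process the same viewing log but qualify exposures differently, one at three seconds and one at 10; with everything else identical, they report different ROAS for the same campaign. All numbers below are invented for illustration:

```python
# One raw viewing log, two exposure-qualification rules, two ROAS figures.
viewing_log = [
    # (household, seconds of the ad viewed, post-exposure purchases in $)
    ("A", 2, 50.0),
    ("B", 4, 80.0),
    ("C", 12, 120.0),
    ("D", 30, 0.0),
]
AD_SPEND = 100.0  # hypothetical campaign cost

def roas(log, min_seconds):
    """Credited revenue under one qualification rule, divided by spend."""
    credited = sum(spend for _, secs, spend in log if secs >= min_seconds)
    return credited / AD_SPEND

print(roas(viewing_log, min_seconds=3))   # "provider 1": (80 + 120) / 100 = 2.0
print(roas(viewing_log, min_seconds=10))  # "provider 2": 120 / 100 = 1.2
```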

The Media Rating Council has an effort underway to develop outcome-based measurement standards. Ideally, when it comes to how data processing is defined, these standards should address:

  • Weighting. Providers need to implement a robust panel weighting scheme that addresses variables known to drive TV viewing, such as DMA, household size, presence of children and income/education/occupation (see the weighting sketch after this list).
  • Unification. A standard process is needed for unifying the database for ROI measurement, providing a common base of viewers for whom there is both an opportunity for exposure and an opportunity for a response, such as website visitation, retail traffic or purchase (illustrated, together with exposure qualification, in the second sketch after this list).
  • Exposure Qualification. There needs to be agreement on what the standard exposure criterion should be. Should it be one second, three seconds, five seconds, 10 seconds or a one-minute schedule at 300 GRPs?
  • Occurrences. There has to be rigorous quality control in the re-creation of as-run schedules and in the evaluation of reach reporting from exposure data across schedules.
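To ground the weighting point above, here is a minimal post-stratification sketch in Python. It shows one common weighting technique, not the scheme any provider or the MRC has specified, and the panel, cells and universe shares are invented:

```python
from collections import Counter

# Post-stratification: weight panel households so their demo mix matches
# known universe totals. All data here is invented for illustration.
panel = [
    {"hh_id": 1, "dma": "NY", "kids": True},
    {"hh_id": 2, "dma": "NY", "kids": False},
    {"hh_id": 3, "dma": "LA", "kids": False},
]
# Hypothetical universe shares by (DMA, presence of children) cell.
universe_share = {("NY", True): 0.30, ("NY", False): 0.30, ("LA", False): 0.40}

def post_stratify(panel, universe_share):
    """Weight each household by universe share / panel share of its cell."""
    counts = Counter((hh["dma"], hh["kids"]) for hh in panel)
    n = len(panel)
    return {hh["hh_id"]:
            universe_share[(hh["dma"], hh["kids"])] / (counts[(hh["dma"], hh["kids"])] / n)
            for hh in panel}

print(post_stratify(panel, universe_share))
# {1: 0.9, 2: 0.9, 3: 1.2} -- weights average to 1.0 across the panel
```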
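And here is a minimal sketch of unification plus exposure qualification, again with invented data. The unified base keeps only households observed in both the exposure source and the outcome source, and a hypothetical five-second threshold then decides which of those households count as exposed:

```python
# Unification + exposure qualification: a sketch, not any provider's method.
exposure_hhs = {"A": 12, "B": 2, "C": 8}   # household -> seconds viewed
outcome_hhs = {"A": 1, "C": 0, "D": 3}     # household -> site visits

MIN_SECONDS = 5  # one possible qualification rule; the standard is unsettled

# Common base: households with an opportunity to be exposed AND to respond.
unified = exposure_hhs.keys() & outcome_hhs.keys()       # {"A", "C"}
exposed = {hh for hh in unified if exposure_hhs[hh] >= MIN_SECONDS}
lift_base = {hh: outcome_hhs[hh] for hh in exposed}      # outcomes among exposed

print(sorted(unified), sorted(exposed), lift_base)
```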

For attribution to fulfill its promise and for buyers and sellers alike to fully embrace it for ROAS analysis, we need a common approach to defining the starting point of occurrence and exposure data.


In an industry that prizes proprietary exclusivity, standards are sometimes bristled at.

Certainly, attribution providers can and should go to market with their own unique approaches. But we all need to start on the same page. And to accomplish that, we need standards governing how occurrence and exposure data are defined and processed, to ensure accurate and consistent assessment of ROAS.
