Signal To Noise: Finding False Positives In Media Measurement

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Tom Riordan, director of special operations at TubeMogul.

In March 2011, a major earthquake in Japan triggered a tsunami that led to a meltdown at the Fukushima Nuclear Power Plant, the world’s worst nuclear disaster since Chernobyl.

Understandably, this meltdown sparked global concerns about contaminated food and water and the dangers of radiation. These concerns later proved justified, when a number of alarming studies showed a significant increase in the rate of thyroid cancer in Japanese citizens, especially among residents of the Fukushima region.

But radiation alone wasn’t entirely responsible for the increase in cancer rates. More sensitive screening detected cases that older, less precise methods would have missed, inflating the apparent rise. In other words, the actual degree to which radiation increased cancer rates was lost in the noise created by newer, better measurement.

All too often, marketers fall into the same trap as the health researchers in Fukushima. The combination of rapidly developing technology with emerging digital formats has drastically increased the number of metrics available to advertisers. As a result, marketers run the risk of developing a “metric of the moment” myopia, obsessing over the size of each individual data point rather than the more complicated, interdependent reality.

This tunnel vision obscures some of the biggest challenges currently facing marketers.

Problem: Suspicious traffic

Metric under the microscope: Viewability rates

Video player ad-serving interface definition (VPAID) technology has enabled marketers to accurately measure whether or not their ad had an opportunity to be viewed. But quantifying viewability rates in and of itself does not tell you whether the right person saw an ad, or even if a real person saw it.

Some of the sites with the highest viewability rates are sites that are known to have high rates of suspicious traffic. In some cases, these sites are used to meet viewability guarantees imposed by buyers – but what good is a viewable ad if it’s not seen by a real person?
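The gap between “viewable” and “viewed by a human” is easy to make concrete. In this hypothetical sketch (the impression log and the suspicious-traffic flag are invented; in practice the flag would come from a fraud-detection vendor), a raw viewability rate is recomputed after excluding suspicious traffic:

```python
# Hypothetical impression log: (was_viewable, flagged_as_suspicious_traffic)
impressions = [
    (True, False), (True, True), (True, True),
    (False, False), (True, False), (True, True),
]

# Raw viewability: counts every viewable impression, human or not
viewable = sum(1 for v, s in impressions if v)

# Human-viewable: only viewable impressions NOT flagged as suspicious
human_viewable = sum(1 for v, s in impressions if v and not s)

print(f"Raw viewability rate:  {viewable / len(impressions):.0%}")
print(f"Human-viewable rate:   {human_viewable / len(impressions):.0%}")
```

On this toy data the raw rate looks strong while the human-viewable rate collapses, which is exactly the pattern on high-viewability sites dominated by suspicious traffic.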

Problem: Achieving effective reach

Metric under the microscope: Gross rating point (GRP)

While the GRP isn’t a new measure born of the digital age, the rapid fragmentation in consumption and decline of traditional TV viewership have very real implications for its continued effectiveness when used in isolation.

By buying on a number that bundles reach and frequency, marketers may overlook the increasing problem of maintaining audience reach, masked by increasing levels of frequency within a smaller viewership.
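Because a GRP is conventionally computed as reach (percent of the target audience) multiplied by average frequency, two very different campaigns can post identical GRPs. A minimal sketch with hypothetical numbers:

```python
def grp(reach_pct, avg_frequency):
    """Gross rating points: reach (% of target audience) x average frequency."""
    return reach_pct * avg_frequency

# Campaign A: broad reach, modest frequency
campaign_a = grp(reach_pct=50, avg_frequency=2.0)

# Campaign B: shrinking viewership, heavy repetition
campaign_b = grp(reach_pct=20, avg_frequency=5.0)

print(campaign_a, campaign_b)  # 100.0 100.0 -- identical GRPs
# Campaign B reaches less than half the audience Campaign A does;
# the lost reach is masked by higher frequency against fewer viewers.
```

A buyer optimizing to the bundled number alone cannot tell these two campaigns apart.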


[Chart omitted. Source: Butler/Till]

Problem: Calculating return on ad spend (ROAS)

Metric under the microscope: Cost per acquisition (CPA)

Biases inherent in CPA-based models often prevent them from being accurate measurements of advertising effectiveness. Frequently, cookie-based measurement disproportionately counts individuals who were already in-market and rewards recent exposure due to cookie deletion. It also incorrectly credits lower-cost media.

Due to these factors, CPAs are often more a function of total number of ads served than they are actual sales lift. To make matters worse, most CPA-based models only account for online sales while the majority of purchases still occur offline, which can lead to misguided optimization.
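The divergence between a CPA and actual sales lift can be illustrated with hypothetical numbers. In this sketch, a cheap channel earns attribution credit from users who were already in-market, so its CPA looks efficient even though the conversions it actually caused are few (all figures invented):

```python
def cpa(spend, attributed_conversions):
    """Conventional cost per acquisition: spend over attributed conversions."""
    return spend / attributed_conversions

def cost_per_incremental(spend, attributed, already_in_market):
    """Cost per conversion the ads actually caused (incremental lift)."""
    incremental = attributed - already_in_market
    return spend / incremental

# Cheap channel: lots of last-touch credit, little real lift
print(cpa(1000, 100))                       # $10 CPA -- looks efficient
print(cost_per_incremental(1000, 100, 90))  # $100 per incremental sale

# Pricier channel: fewer attributed conversions, but most are incremental
print(cpa(1000, 50))                        # $20 CPA -- looks worse
print(cost_per_incremental(1000, 50, 10))   # $25 per incremental sale
```

Ranked by CPA, the first channel wins; ranked by cost per incremental sale, it loses badly, and the offline purchases most CPA models ignore would widen that gap further.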

Of course, more factors are responsible for these problems than just the metrics under the microscope. But myopic measurement carries very real consequences. Optimizing solely for viewability may cost marketers an extra 17 cents per dollar just to reach actual humans. The misallocation of impressions could lead to as much as 60% of a target audience being missed. And focusing solely on a low CPA may cost publishers billions of dollars.

As attractive as these new metrics are, we must remember that even the most sophisticated measurement systems are still imperfect. Marketers still have to analyze data with a critical eye. The advertising industry is still collectively recovering from its overreliance on click-through rates, and we risk making the same mistake with viewability rates.

Just because we have these tools doesn’t mean that we should cast aside the best marketing instruments available to us: our brains and the human instinct to improve. The savviest marketers will question the sacred cows and engage in a relentless pursuit of how to best drive their businesses, constantly challenging and testing the metrics they use to make decisions.

Follow TubeMogul (@TubeMogul) and AdExchanger (@adexchanger) on Twitter.
