“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Tim Gough, vice president of insights and analytics at Verve.
Advertising is being held more accountable for the actions it drives. That’s good news, but even so, there are limits.
While cost per install, cost per conversion and cost per acquisition have become standard pricing metrics for campaigns that drive online digital actions, we now see a push for cost per visit and even cost per incremental visit as pricing metrics for campaigns that drive offline actions.
This can be a problem for advertisers. Do agencies and advertisers really know what they are getting themselves into when it comes to these metrics? There are challenges to using offline visits from mobile location data to determine media pricing – the dangers are clear and present, and they can change campaign outcomes.
Offline visits: Not deterministic when defined by location data
While deterministic metrics are ideal, determining offline visits with mobile location data is not a deterministic process.
Not every impression or exposed device can be reliably linked to an offline action. Instead, offline visits are generally calculated against a small panel from which online-to-offline data can be observed. The resulting visit and lift numbers are then scaled based on total impressions served.
The results can be dramatic. In some cases, panel outcomes are inflated by 100 times or more. The industry should be prepared to live with some uncertainty, but not at a 100x multiple, especially when pricing models shape strategy over time.
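To make the scaling mechanics concrete, here is a minimal sketch of how panel-based projection works. All numbers are invented for illustration; real panel sizes, observation methods and projection models vary by measurement vendor.

```python
# Hypothetical illustration of panel-based visit projection.
# A small observable panel is scaled up to the full campaign,
# which is where large multipliers come from.

total_impressions = 10_000_000   # impressions served across the campaign
panel_impressions = 80_000       # impressions whose devices sit in the location panel
panel_visits = 240               # store visits actually observed within the panel

# Visits are projected to the full campaign by scaling panel results
# against total delivery.
scaling_factor = total_impressions / panel_impressions  # a 125x multiplier here
projected_visits = panel_visits * scaling_factor

print(f"Scaling factor: {scaling_factor:.0f}x")
print(f"Projected visits: {projected_visits:,.0f}")
```

The point of the sketch: any noise or bias in the 240 observed visits is multiplied 125 times before it reaches the reported number, which is why small-panel error can dominate the final metric.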
Meanwhile, some companies may offer metrics showing raw offline visits, promising to eliminate projection and inflation from the results. This is not a helpful solution because it yields underreported visits and undermines secondary metrics such as cost per visit.
Even for a panel of devices enjoying abundant location signals, the process of turning those signals into visits is not deterministic either. There is inherent noise in the system, including point-of-interest inconsistencies, environmental factors, location signal error and indoor jitter, which is signal interference from the structure. While visit detection models continue to improve, there will always be inaccuracy, especially when there are no adopted standards for location data or visit verification.
As examples of how things can start to break down, let’s consider two media-pricing models that are gaining traction in the marketplace and that both rely on offline visits determined by mobile location data: cost per visit (CPV) and cost per incremental visit (CPIV).
CPV: Great for targeting efficiency, but not a performance metric
The premise here is simple: Count the offline visits that occur in a given span of time after ad exposure and divide the advertiser’s media spend by that total.
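The arithmetic can be sketched in a few lines. The spend and visit counts below are invented for illustration:

```python
# Hypothetical CPV calculation: media spend divided by attributed visits.
media_spend = 50_000.0       # advertiser's media spend, in dollars
attributed_visits = 25_000   # offline visits detected within the attribution window

cost_per_visit = media_spend / attributed_visits
print(f"CPV: ${cost_per_visit:.2f}")
```

Note that the denominator here is the projected, non-deterministic visit count discussed above, so CPV inherits all of that number's uncertainty.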
This is well and good, but CPV should not be positioned as a performance metric. For one thing, there is no causal path from exposure to offline visit. Contrast this with an app install: A consumer interacts with an ad and eventually clicks through to install the app. That action can be tied directly to the ad, so cost per install becomes a prime metric for pricing.
The CPV metric is a way to measure and optimize targeting efficiency. It can work well for, say, CPG campaigns in which brands need to drive reach and frequency decisions for regular grocery shoppers. The lower the CPV, the more successfully CPGs are reaching their target audience.
CPIV: Great for comparing outcomes, but unstable for performance
Cost per incremental visit helps advertisers buy media based on store visits that are directly attributable to ad exposure, but it’s not a stable enough metric for performance pricing in and of itself.
This metric depends on the careful use of unexposed control groups to identify truly incremental visits that would not have occurred naturally. It’s this incremental component that creates difficulty. Incremental offline visits cannot be directly observed and counted; they are always calculated from the output of a model – there can never be a “correct” answer to the question of how many visits were directly driven by media.
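The control-group arithmetic behind CPIV can be sketched as follows. Every rate and count here is invented, and real methodologies involve matched controls and modeled baselines rather than a simple subtraction:

```python
# Hypothetical incremental-visit calculation using an unexposed control group.
# Incremental visits = visits beyond the baseline rate the control group implies.

exposed_devices = 500_000
exposed_visit_rate = 0.021   # observed visit rate among exposed devices
control_visit_rate = 0.018   # baseline visit rate among a matched control group

# Lift over baseline: visits that (per the model) would not have occurred
# without ad exposure.
incremental_rate = exposed_visit_rate - control_visit_rate
incremental_visits = incremental_rate * exposed_devices  # roughly 1,500 visits here

media_spend = 60_000.0
cpiv = media_spend / incremental_visits
print(f"Incremental visits: {incremental_visits:,.0f}")
print(f"CPIV: ${cpiv:.2f}")
```

The fragility is visible in the subtraction: the metric rests on a small difference between two modeled rates, so a tiny shift in either rate moves the incremental count, and therefore CPIV, dramatically.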
Furthermore, because model output is never perfectly stable, CPIV can oscillate wildly throughout a campaign’s lifetime and beyond it, thanks to the post-campaign attribution window.
So, while CPIV is a great way to compare multiple media partners against each other using a fair and consistent baseline, it is a poor metric for pricing media.
The truth about metrics and how to approach performance pricing
I am not suggesting that visits detected using offline-visit attribution are useless.
Offline visits are still valuable signals that can unlock consumer insights and drive media targeting and measurement. But since they are not deterministic, offline visits should only be used directionally for measurement – that is, to compare and optimize campaign performance over time and to evaluate performance across multiple media partners.
Metrics can’t do all the hard work for advertisers and marketers. Rather than rely on performance claims that offline-visit data metrics such as CPV and CPIV can’t sustain, advertisers must instead use a combination of data and common sense to set up campaign targeting and execution that’s geared for success.
Yes, that sounds difficult, and it is. Good digital advertising starts with a clear objective, then defines the target audience to be reached, the context in which to reach it, the creative experience to be offered and the action the consumer is expected to take as a result. Only then can the most appropriate success metrics be determined.