Andy Atherton is COO of Brand.net, an online advertising network.
A senior agency executive who manages the digital account for an Ad Age 50 CPG manufacturer recently delivered the best line I have heard in a long time. We were talking about Behavioral Targeting (BT) and he said, “In my experience BT is a much better sales tool than a success tool.” He then enumerated, with specific, practical examples, the many weaknesses of BT. Since I have been discussing this in private for years, this pithy formulation, delivered unprompted by an industry leader, convinced me that the time was right to do a little public debunking of BT.
I’ll frame the discussion using an example from another meeting I had the same week, with a digital buyer for a major national peanut butter brand. A bewitching salesperson had convinced this fellow that, through the miracle of BT based on Nielsen data, he would be able to run a mass-reach campaign against customers of a named competitive peanut butter brand who were going to use the product for baking rather than spreading. In short, he could target, at scale, moms who are baking peanut butter treats for the holidays but whose kids won’t eat PB&J sandwiches. I have to say that sounds amazing from a sales perspective. I mean, how could you not want to buy that? Unfortunately, the expectations the seller had set in the buyer’s mind were far from the reality of what’s possible.
With that said, let’s begin to dissect our example.
The origin of the underlying purchase data for the peanut butter targeting was Nielsen’s Homescan panel. The Homescan panel consists of approximately 95,000 households that have agreed to give Nielsen rich demographic data and to scan every barcoded item they bring into their homes. Of those, there are about 75,000 web-enabled households that have agreed to allow Nielsen to monitor some or all of their web behavior. Within this panel, Nielsen absolutely can identify all households that have purchased a named product and absolutely can enable media partners to indirectly target like households with online advertising. Unfortunately, even for a high-penetration product like a major national peanut butter brand, a reasonable expectation would be that fewer than 25,000 of the 75,000 households made purchases of the named product. Furthermore, the user overlap between the Nielsen Homescan panel and even a large media partner is unlikely to be above 50%. Let’s be generous and, for the sake of argument, say that this online media provider can identify 15,000 households that purchased the competitive peanut butter brand based on the Nielsen data. Let’s call these Competitive Peanut Butter buyers “CPB”s.
It is a fact that, based on Homescan data alone, Nielsen has absolutely no idea what these 15,000 CPBs intend to do with the peanut butter after they have scanned it (spread it, bake with it, feed it to pets); they simply don’t know. It is flat-out misleading for a salesperson to suggest otherwise. Nielsen themselves certainly would never say that, nor would they ever allow direct targeting of Homescan panelists. But again, for the sake of argument, let’s just assume that magic occurs, and the 15,000-household sample is reduced to the 3,000 users who actually meet the profile the buyer thinks he’s getting: CPB bakers only. Let’s call these users “CPB Bakers,” and let’s say the buyer was willing to pay a $50 CPM to reach that incredibly tight target. Do the straightforward math: at a frequency cap of 10 (already more than 50% above the frequency our CPG customers typically target), 3,000 users yield 30,000 impressions, so he could spend only about $1,500 reaching those CPB Bakers. If that campaign drove $3 in sales for every $1 in media (a fantastic ROI), it would generate roughly $4,500 in sales: only several dozen more cases of peanut butter. Nationwide.
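The whole funnel can be tallied in a few lines. Every input below is one of the generous assumptions stated above, except the per-case price, which is an invented illustrative number:

```python
# Back-of-the-envelope math for the peanut butter example.
# All figures are the article's own generous assumptions,
# except PRICE_PER_CASE, which is hypothetical.

CPB_BAKERS = 3_000       # users matching the full profile, after the "magic"
CPM = 50.0               # dollars per 1,000 impressions
MAX_FREQUENCY = 10       # impressions per user over the campaign

impressions = CPB_BAKERS * MAX_FREQUENCY   # 30,000 impressions
spend = impressions / 1_000 * CPM          # $1,500 maximum spend

ROI = 3.0                # $3 in sales per $1 of media (a fantastic ROI)
sales = spend * ROI      # $4,500 in incremental sales, nationwide

PRICE_PER_CASE = 50.0    # hypothetical wholesale price per case
cases = sales / PRICE_PER_CASE

print(f"Max spend: ${spend:,.0f}; incremental sales: ${sales:,.0f}; "
      f"~{cases:.0f} cases at ${PRICE_PER_CASE:.0f}/case")
```

At these inputs the campaign tops out at $1,500 of media and roughly $4,500 in sales; no plausible case price turns that into more than a rounding error for a national brand.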
You get the picture. To run a campaign reaching millions of users, as CPG advertisers need to do, the actual purchase data for thousands of users must be grossed up by a factor of 1,000 or more and anonymized (thousands of specific users become millions of anonymous users) using Look-Alike Modeling.
Let’s keep digging.
Look-Alike Modeling is an attempt to identify similarities in browsing or clickstream patterns (“Look-Alike Patterns”) among the small group of users who have actually exhibited the target behavior (our CPB Bakers in this case). It then looks for similar Look-Alike Patterns among other users for whom there is no data about the target behavior. This involves a huge assumption: that similarity in Look-Alike Patterns is a reliable predictor of purchase behavior. Ironically, Nielsen’s own data (as well as work by others) clearly demonstrates that clickstream data, at least, is not a predictor of offline purchases. But, again for the sake of argument, let’s assume the exact opposite: that Look-Alike Patterns are indeed predictors of offline purchases.
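To make the mechanism concrete, here is a minimal sketch of the look-alike idea: score unknown users by how closely their clickstream feature vectors resemble the centroid of the small seed group. This is a toy illustration, not Nielsen’s or any vendor’s actual model; the site categories, user names, and visit counts are all invented:

```python
import math

def centroid(vectors):
    """Average the seed users' feature vectors component-wise."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two feature vectors (0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Per-user visit counts across hypothetical site categories:
# [recipe sites, parenting sites, sports sites]
seed_users = [[9, 4, 0], [7, 5, 1], [8, 3, 0]]   # the known CPB Bakers
candidates = {"user_a": [8, 4, 1], "user_b": [0, 1, 9]}

seed = centroid(seed_users)
scores = {u: cosine(v, seed) for u, v in candidates.items()}
# user_a's browsing "looks like" the seed group; user_b's does not --
# but nothing here proves either of them buys peanut butter offline.
```

The scoring step is trivial; the leap of faith is everything around it, as the next paragraph argues.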
Even if we assume that Look-Alike Patterns are predictors of offline sales, with 3,000 users’ data driving a 3,000,000 user campaign, why would we assume that Look-Alike Patterns would be better predictors than the demographic, contextual and geographic variables that marketers have used for decades to segment their audiences? Continuing our specific example, why would we assume that a campaign intended to drive offline sales of peanut butter would perform better if it was based on the Look-Alike Patterns of 3,000 CPB Bakers than if it targeted, say, moms aged 25-54 in high-quality food, health and parenting content?
I am aware of no evidence that BT performs better than other approaches in driving offline sales. Indeed, what data I have seen suggests that it does not. It’s one thing if BT can be tested and tuned with CPA data in a “closed-loop” lead-gen or ecommerce setting, but it’s an entirely different thing to assume that BT will be more effective than other methods in driving offline sales without the measurements to prove it.
What do you think?