In the past couple of years, digital advertisers and ad platforms have increasingly returned to marketing mix modeling (MMM). In this generations-old approach, sales contributions are attributed to media channels (like TV, radio and print – now joined by the likes of Google, Meta and Amazon) over the course of a months-long campaign.
But the past couple of years have also seen a flurry of academic marketing papers identifying major flaws in MMM, as noted by Julian Runge, an assistant professor of marketing at Northwestern University, and William Grosso, the CEO of Game Data Pros, in a column last week for Mobile Dev Memo.
Consider one such flaw. Say a brand decides to try a new marketing tactic right now, in late October, or simply increases spend in a channel like CTV or programmatic display. We’re about to hit the year’s main shopping period, so that new tactic will look pretty good, having accompanied a big lift in sales. But it didn’t necessarily contribute much to the total; there’s just a lot going on in the world.
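To make the confound concrete, here’s a minimal sketch (our illustration, with simulated weekly data, not from the papers Runge and Grosso cite): a regression that omits a seasonality control hands the holiday lift to the new channel, while adding a simple holiday indicator recovers something closer to the channel’s true effect.

```python
# Hypothetical simulation: a new channel launches in late October,
# just ahead of a holiday sales spike it did not cause.
import numpy as np

rng = np.random.default_rng(42)
weeks = np.arange(52)
holiday = (weeks >= 46).astype(float)                          # holiday shopping weeks
spend = np.where(weeks >= 43, rng.uniform(50, 150, 52), 0.0)   # new tactic begins week 43
true_effect = 0.5                                              # each $1 of spend adds $0.50 in sales
sales = 1_000 + 400 * holiday + true_effect * spend + rng.normal(0, 20, 52)

# Naive mix model: sales regressed on spend alone. Spend soaks up the holiday lift.
X_naive = np.column_stack([np.ones(52), spend])
b_naive = np.linalg.lstsq(X_naive, sales, rcond=None)[0]

# Same regression with a holiday control: the coefficient falls back toward 0.5.
X_ctrl = np.column_stack([np.ones(52), spend, holiday])
b_ctrl = np.linalg.lstsq(X_ctrl, sales, rcond=None)[0]

print(f"spend coefficient, no seasonality control: {b_naive[1]:.2f}")  # inflated well above 0.5
print(f"spend coefficient, with holiday dummy:     {b_ctrl[1]:.2f}")   # close to the true 0.5
```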
There is also the inherent bias of the marketer, who’s looking for data that proves advertising generated value.
Runge and Grosso point out that many marketing attribution practices used by businesses would never pass muster in the health care and pharmaceutical industries, where tests are carefully designed and measured against truly randomized control groups.
As Runge and Grosso put it: “So why should businesses trust OCI techniques (‘observational causal inference,’ meaning insights pulled from observed data without randomization or control groups for comparison) when millions of dollars are at stake in digital marketing or product design?”
Platform MMM
There are other reasons not to trust the MMM trend as a ‘truthier’ attribution fallback now that multitouch attribution (MTA) and user-level tracking are infeasible.
And that’s because MMM might just become another walled garden platform plaything.
Earlier this year, Google open-sourced its own MMM product, which it calls Meridian. Meta has an open-source MMM solution it calls Robyn, while Amazon’s is still a proprietary product, not open-source.
But platform MMM is the same as platform anything: It exists to prove the platform succeeded as much as to prove your marketing worked. Google’s Meridian, for example, is really good at tying together search, YouTube, TV and Google Ads campaigns.
If a Hollywood studio is promoting a new movie in Los Angeles, say, the campaign might be attributed based on a rise in search traffic for the movie or showtimes, or views of the trailer on YouTube after people saw the ad.
That’s useful for the marketer. But it’s also a good way for Google to prove the Googleverse generates value, if brands are going to be using this measurement methodology – which it now puts out for free.
Accuracy vs. Velocity
If MTA is impossible and MMM is questionable, then does any solution work?
There are things marketers can do to keep their attribution models in line. Runge and Grosso note that A/B tests, geo-based experiments and incrementality testing can steer MMM.
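In practice, “steering” often means calibrating: using the lift measured in an experiment to correct how much credit the model hands a channel. Here’s a minimal sketch of that idea, with hypothetical numbers and no particular vendor’s method implied.

```python
# Minimal sketch (our illustration): blend an MMM channel's attributed sales
# with the incremental lift measured in a holdout experiment.
def calibrate_channel(mmm_contribution: float,
                      experiment_lift: float,
                      weight: float = 0.5) -> float:
    """Pull the model's estimate toward the experiment's estimate.
    `weight` is how much trust to place in the test (1.0 = use it outright)."""
    return (1 - weight) * mmm_contribution + weight * experiment_lift

# Hypothetical numbers: the MMM credits CTV with $2.0M in sales,
# but a geo test measured only $1.2M of truly incremental sales.
print(calibrate_channel(2_000_000, 1_200_000, weight=0.7))  # -> 1440000.0
```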
But there are always trade-offs. The problem isn’t creating a mix model that hits the truth on the head; it’s understanding that perfect measurement doesn’t exist and priorities can compete.
Take the pharma trial example brought up by Runge and Grosso: Why should marketers accept more bias and less confidence in their experiments when millions of dollars are on the line, given that pharma companies would never accept such loose testing standards?
As academics have pointed out in recent papers on MMM, even with months of data collection to account for the impact of an ad campaign and incorporate factors like seasonal sales, weather and price changes, there is still no rock-solid way to attribute sales to an ad channel. The trade-off comes down to confidence on the one hand and velocity on the other.
MTA is a terrible basis for attribution. But boy does it move fast.
Experimentation is also expensive. The shoe brand ASICS is all in on experimentation, for instance. But that means adding a specialist vendor for the purpose. The brand works with Habu to create geo-holdout test groups.
To test a particular tactic or piece of creative, the brand might run the campaign in Nashville, say, while withholding all advertising in New Orleans, creating a control group against which to compare results across the two markets.
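With hypothetical numbers, the readout from such a geo holdout might look like the difference-in-differences sketch below, which nets out any gap that already existed between the two markets before the campaign.

```python
# Hypothetical geo-holdout readout: compare each market's change in sales,
# then difference the changes so pre-existing market gaps cancel out.
pre_test, post_test = 500_000, 560_000   # test market sales, before/during campaign
pre_ctrl, post_ctrl = 480_000, 490_000   # holdout market sales, ads withheld

test_change = post_test - pre_test       # +60,000 in the advertised market
ctrl_change = post_ctrl - pre_ctrl       # +10,000 of market-wide drift
incremental_sales = test_change - ctrl_change
print(f"estimated incremental sales: {incremental_sales:,}")  # 50,000
```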
Aside from adding Habu and potentially other testing vendors, hiring internal data scientists to oversee the process and paying the cloud compute costs, there is also the opportunity cost: To create a holdout, a marketer must stop advertising in an entire market.
Talk to enough marketers about their incrementality and geo-testing, and you’ll hear a common aside about New York City and Los Angeles being out of bounds for experiments. For many brands, those markets are too important to daily sales to either sacrifice as a control group or mess with simply for attribution purposes.
To any marketer reading this who comes away with the frustrating conclusion, “So, nothing works and it’s all pointless,” here are three important takeaways.
- Find an attribution vendor you trust.
- Walled garden attribution is a useful source of truth for brands that exist only on Google, Meta and Amazon.
- Don’t sweat the nitty-gritty details of MTA, MMM or terms like “observational causal inference.” Everyone’s just making this up, after all.