
Attribution’s Fatal Flaw: What Really Caused That Conversion?


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Dan Hill, senior data scientist at Integral Ad Science.

How effective was the last ad campaign you worked on? What was the return on investment?

Chances are you don’t know. It’s all too common to leave ROI performance unmeasured. Advertisers often have no idea whether their budget was spent wisely or if it was even profitable to advertise in the first place.

Attribution providers can help answer these questions. They’re paid to estimate the effectiveness of ad campaigns. Each attribution provider has its own proprietary model for dividing the credit for each conversion among the ad impressions that touched it. The most famous of these models is last-touch attribution, which gives all of the credit to the last impression the customer saw before converting. More advanced models use sophisticated sets of equations to assign credit along the entire path the customer takes through the campaign, from touchpoint to touchpoint.
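To make the idea concrete, here is a minimal sketch of last-touch attribution. The publisher names and data shapes are illustrative assumptions, not any vendor's actual model: each converting customer's path of impressions is reduced to its final touch, which receives all the credit.

```python
# Minimal sketch of last-touch attribution (illustrative names only):
# every conversion's full credit goes to the final impression on the
# customer's path, regardless of what came before it.
from collections import Counter

def last_touch_credit(paths):
    """paths: one list of impressions per converting customer, ordered
    from first touch to last. Returns total credit per publisher."""
    credit = Counter()
    for path in paths:
        if path:                     # skip customers with no touches
            credit[path[-1]] += 1.0  # last impression gets all credit
    return credit

paths = [
    ["pub_A", "pub_B", "pub_C"],  # this customer saw pub_C last
    ["pub_B", "pub_A"],
    ["pub_C"],
]
print(last_touch_credit(paths))  # pub_C earns 2.0, pub_A earns 1.0
```

Note that nothing in this calculation asks whether any impression actually influenced the purchase, which is exactly the flaw discussed next.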

Simple or complex, the problem with these models is that they only measure how many conversions were touched by the campaign rather than how many were caused by the campaign. Unless you can tell the difference, it’s impossible to evaluate how successful the campaign was.

Selling Blue Jeans With Pizza Ads

Imagine there was a mix-up at the office and someone accidentally linked pizza ads to conversions for a blue jeans campaign. The attribution provider is then asked to report on which ads in this campaign were the most effective. We know pizza ads can’t sell blue jeans, but how would attribution models handle the situation?

If it’s a large campaign, we would expect some overlap between people who were shown pizza ads and people who bought jeans. The attribution provider would apply its analysis and report which publishers served ads that, coincidentally, touched the most customers who bought blue jeans. Some publishers would be declared winners and others losers. No alarm would go off screaming, “Hey, these ads are doing nothing! Something is wrong!” The problem is that these reports don’t show how many conversions were actually caused by the ads.

Bias And Baseline

The way out of this scenario is for marketers to establish a baseline. How many conversions would have occurred if the ad campaign had not happened at all? Let’s call these natural conversions. Those natural converters didn’t need any ads to make their decision, so money spent on advertising to them was wasted. However, if we find that customers are converting more often than their natural rate, then the ads are working.


To establish this baseline scientifically, we could run an A/B test in which we randomly serve 10% of our audience a placebo, such as a public service announcement (PSA). Any difference in conversion rate between the ad-exposed and PSA groups can then be attributed to the campaign. However, in this scenario, 10% of the ad spend is thrown away on PSAs. That’s a rather expensive option.
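The arithmetic behind this test is simple. The sketch below assumes made-up numbers for a hypothetical campaign: the PSA group's conversion rate is the "natural" baseline, and the lift is whatever the ad-exposed group converts above it.

```python
# Hedged sketch of incremental lift from a PSA holdout test.
# All counts are invented for illustration.
def incremental_lift(exposed_conv, exposed_n, psa_conv, psa_n):
    """Difference in conversion rate between the ad-exposed group and
    the PSA (placebo) control: the conversions caused by the ads."""
    exposed_rate = exposed_conv / exposed_n
    baseline_rate = psa_conv / psa_n  # the "natural" conversion rate
    return exposed_rate - baseline_rate

# 90% of the audience saw ads, 10% saw the PSA placebo
lift = incremental_lift(exposed_conv=2700, exposed_n=90_000,
                        psa_conv=200, psa_n=10_000)
print(f"{lift:.2%}")  # prints "1.00%"
```

In this invented example, the ads touched 3% of exposed users who converted, but only 1 percentage point of that was actually caused by the campaign; the other 2 points were natural converters.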

As a cheaper alternative, one could compare conversions by users who received ads against those who did not. But this approach exposes one to a whole array of selection biases: users who receive ads are simply different from those who don’t. Targeted users were specially selected to receive ads, usually by some type of purchase-intent modeling, and so cannot be compared to the general population. Research has established that correcting for this bias is possible, but extreme care must be taken.
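A small simulation makes the bias vivid. Everything here is assumed: we invent a latent "purchase intent" per user, let targeting select only high-intent users, and deliberately give the ads zero effect. The naive exposed-vs-unexposed comparison still shows a large gap.

```python
# Illustrative simulation (all parameters invented) of selection bias:
# targeting picks users who were already likely to buy, so a naive
# comparison credits the ads with conversions they never caused.
import random

random.seed(0)
population = []
for _ in range(100_000):
    intent = random.random()               # latent purchase intent
    targeted = intent > 0.7                # targeting selects high intent
    converted = random.random() < intent * 0.1  # ads have NO effect here
    population.append((targeted, converted))

def rate(group):
    """Fraction of a group that converted."""
    return sum(conv for _, conv in group) / len(group)

exposed = [p for p in population if p[0]]
unexposed = [p for p in population if not p[0]]
print(f"exposed rate:   {rate(exposed):.2%}")
print(f"unexposed rate: {rate(unexposed):.2%}")
# The exposed group converts at more than double the unexposed rate
# even though the ads did nothing -- the gap is pure selection bias.
```

Any correction method has to strip out exactly this gap before crediting the campaign, which is why the research cited above urges extreme care.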

Moving Forward

Measuring true campaign performance is clearly difficult, but also too important to leave undone. It is widely known that today’s attribution systems are imperfect. An attribution model that can’t figure out whether pizza ads can sell blue jeans is hardly useful at all.

But, if more ad professionals apply a critical eye, then we can push the industry towards better and more reliable measurements of performance.

Follow Integral Ad Science (@Integralads) and AdExchanger (@adexchanger) on Twitter.
