
The Attribution Error


“Data-Driven Thinking” is a column written by members of the media community and containing fresh ideas on the digital revolution in media.

Jeremy Stanley is SVP Product and Data Sciences for Collective.

As an industry, we have largely concluded that existing measurement solutions (CTR, view-through and click-through conversions) have glaring flaws. And so we have turned to independent vendors (see Forrester on Interactive Attribution) that employ sophisticated algorithmic attribution solutions to value digital advertising impressions. These solutions cater to our desire to glean definitive, actionable answers about what works from the oceans of data exhaust our digital campaigns generate.

Yet algorithmic attribution is founded on a fatally flawed assumption: that causation (a desired outcome happened because of an advertisement) can be determined without experimentation, the classic scientific model of test and control.

No medicine is FDA-approved, and no theory is accepted by the scientific community, absent rigorous experimental validation. Why should advertising be any different?

Consider that there are two driving forces behind a consumer conversion. The first is the consumer’s inherent propensity to convert. Product fit, availability and pricing all predispose some consumers to be far more likely to purchase a given product than others.

The second is the incremental lift in conversion propensity driven by exposure to an advertisement. This is a function of the quality of the creative, the relevance of the placement and the timing of the delivery.

To determine how much value an advertising impression created, an attribution solution must tease out the consumer’s inherent propensity to convert from the incremental lift driven by the ad impression. Algorithmic attribution solutions tackle this by identifying which impressions are correlated to future conversion events. But the operative word here is correlated – which should not be confused with caused.

By and large, algorithmic attribution solutions credit campaigns for delivering ads to individuals who were likely to convert anyway, rather than creating value by driving incremental conversions higher!

To highlight this problem, let’s consider retargeting. Suppose that an advertiser delivered at least one advertisement to every user in their retargeting list (users who previously visited their home page). Then, suppose that 10% of these users went on to purchase the advertised product.

In this simple example, it is impossible to tell what impact the advertising had. Perhaps it caused all of the conversions (after all, every user who converted saw an ad). Or perhaps it caused none of them (those users did visit the home page; maybe they would have converted anyway). Either conclusion could be correct.
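The ambiguity is easy to make concrete with a toy simulation. In the sketch below (the user count and conversion rates are illustrative assumptions, not campaign data), two opposite worlds — one where the ad causes every conversion and one where it causes none — produce statistically identical observed data, because every retargeted user was exposed and there is no control group to compare against.

```python
import random

random.seed(0)
N = 100_000  # users on the retargeting list; all saw at least one ad


def simulate(world):
    """Return the observed conversion rate among exposed users.

    World "ad_causes_all":  baseline propensity is 0; the ad drives
                            every conversion (10% respond to it).
    World "ad_causes_none": 10% of users would have converted anyway;
                            the ad contributes nothing.
    """
    conversions = 0
    for _ in range(N):
        # In both worlds the user converts with probability 0.10 --
        # the data alone cannot say *why*.
        if random.random() < 0.10:
            conversions += 1
    return conversions / N


for world in ("ad_causes_all", "ad_causes_none"):
    print(f"{world}: observed conversion rate = {simulate(world):.3f}")
```

Both worlds report roughly a 10% conversion rate, so no algorithm operating on this log data, however sophisticated, can recover the ad's true causal contribution.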

Real-world scenarios only get murkier. Biases arise from cookie deletion, variation in internet usage and complex audience targeting executed across competing channels and devices. Sweeping these concerns aside and hoping that an algorithm can just ‘figure it out’ is a recipe for disaster.

Instead, the answer is to conduct rigorous A/B experiments. For a given campaign, a set of random users is held out as a control group, and their behavior is used to validate that advertising in the test group is truly generating incremental conversion or brand lift.
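A minimal sketch of such a holdout experiment follows. The holdout fraction, baseline conversion rate and ad lift are hypothetical assumptions chosen for illustration; the point is the mechanics — randomize at ad-serving time, never show the control group an ad, and measure lift as the difference in conversion rates.

```python
import random

random.seed(42)

BASELINE = 0.08  # inherent propensity to convert (assumed)
AD_LIFT = 0.02   # true incremental lift from ad exposure (assumed)
HOLDOUT = 0.10   # fraction of users randomly held out as control


def run_campaign(n_users=200_000):
    """Simulate an A/B campaign and estimate incremental lift."""
    test_conv = test_n = ctrl_conv = ctrl_n = 0
    for _ in range(n_users):
        in_control = random.random() < HOLDOUT  # checked at serve time
        # Control users convert at baseline; test users get the ad lift.
        p = BASELINE if in_control else BASELINE + AD_LIFT
        converted = random.random() < p
        if in_control:
            ctrl_n += 1
            ctrl_conv += converted
        else:
            test_n += 1
            test_conv += converted
    test_rate = test_conv / test_n
    ctrl_rate = ctrl_conv / ctrl_n
    return test_rate, ctrl_rate, test_rate - ctrl_rate


test_rate, ctrl_rate, lift = run_campaign()
print(f"test {test_rate:.3f}, control {ctrl_rate:.3f}, lift {lift:.3f}")
```

With a large enough holdout, the estimated lift converges on the true incremental effect — the quantity correlation-based attribution can only guess at.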

Further, through careful analysis of audience data, one can identify the ‘influenceables’ – pockets of audiences who are highly receptive to an advertising message, and will generate outsized ROI for a digital advertising campaign.

My own observation, across numerous campaigns, is that consumers with a high inherent propensity to convert tend to be the least influenceable! Many of these consumers have already made up their minds to purchase the product. Showing them yet another digital advertisement is a waste of money.

Yet that is precisely what many advertisers reward today: serving ads to audiences who are likely to convert anyway in order to garner credit under last-view attribution schemes. Algorithmic attribution might make this marginally better (at least credit is distributed over multiple views), but at significant expense.

Advertisers would be far better served if attribution providers invested in experimentation instead. However, I anticipate that many attribution vendors will fight this trend. The only rigorous way to experiment is to embed a control group in the ad serving decision process that is checked in real time, to ensure specific users are never shown an advertisement. This approach is radically different from the prevailing attribution strategy of “collect a lot of data and throw algorithms at it.”

By leveraging experimentation coupled with audience insights, savvy marketers can extract far more value from their digital advertising dollars. Those who do so now will gain significant competitive advantages.

