
The Attribution Error


“Data-Driven Thinking” is a column written by members of the media community, containing fresh ideas on the digital revolution in media.

Jeremy Stanley is SVP Product and Data Sciences for Collective.

As an industry, we have largely concluded that existing measurement solutions (CTR, view-through and click-through conversions) have glaring flaws. So we have turned to independent vendors (see Forrester on Interactive Attribution) offering sophisticated algorithmic attribution solutions to value digital advertising impressions. These solutions cater to our desire to glean definitive, actionable answers about what works from the oceans of data exhaust our digital campaigns generate.

Yet algorithmic attribution is founded on a fatally flawed assumption – that causation (a desired outcome happened because of an advertisement) can be determined without experimentation – the classic scientific model of test and control.

No medicine is FDA approved and no theory is accepted by the scientific community absent rigorous experimental validation. Why should advertising be any different?

Consider that there are two driving forces behind a consumer conversion. The first is the consumer’s inherent propensity to convert. Product fit, availability and pricing all predispose some consumers to be far more likely to purchase a given product than others.

The second is the incremental lift in conversion propensity driven by exposure to an advertisement. This is a function of the quality of the creative, the relevance of the placement and the timing of the delivery.

To determine how much value an advertising impression created, an attribution solution must tease out the consumer’s inherent propensity to convert from the incremental lift driven by the ad impression. Algorithmic attribution solutions tackle this by identifying which impressions are correlated to future conversion events. But the operative word here is correlated – which should not be confused with caused.

By and large, algorithmic attribution solutions credit campaigns for delivering ads to individuals who were likely to convert anyway, rather than creating value by driving incremental conversions higher!

To highlight this problem, let’s consider retargeting. Suppose that an advertiser delivered at least one advertisement to every user in their retargeting list (users who previously visited their home page). Then, suppose that 10% of these users went on to purchase the advertised product.

In this simple example, it is impossible to tell what impact the advertising had. Perhaps it caused all of the conversions (after all, every user who converted saw an ad). Or perhaps it caused none of them (those users did visit the home page; maybe they would have converted anyway). Either conclusion could be correct.
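The ambiguity can be made concrete with a small simulation (a hypothetical sketch; the 10% figure comes from the example above, and the two "worlds" are assumptions for illustration). Two very different causal stories produce exactly the same observed data:

```python
import random

random.seed(0)
N = 100_000  # retargeted users, all shown at least one ad

# World A: the ads cause every conversion.
# Baseline propensity is 0; exposure lifts it to 10%.
world_a = sum(random.random() < 0.10 for _ in range(N))

# World B: the ads cause nothing.
# Users already had a 10% propensity from visiting the home page.
world_b = sum(random.random() < 0.10 for _ in range(N))

# The observed data are indistinguishable: ~10% conversion either way.
print(world_a / N, world_b / N)
```

No amount of algorithmic sophistication applied to the exposed users alone can separate these two worlds; only a held-out control group can.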


Real-world scenarios only get more complicated. Biases arise from cookie deletion, variation in internet usage and complex audience targeting executed across competing channels and devices. Sweeping these concerns aside and hoping that an algorithm can just ‘figure it out’ is a recipe for disaster.

Instead, the answer is to conduct rigorous A/B experiments. For a given campaign, a set of random users is held out as a control group, and their behavior is used to validate that advertising in the test group is truly generating incremental conversion or brand lift.
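A minimal sketch of such a holdout experiment (all rates and the holdout fraction here are hypothetical assumptions): users are randomly split before any ad is served, and incremental lift is the difference in conversion rate between the exposed test group and the held-out control.

```python
import random

random.seed(42)

BASE_RATE = 0.08  # inherent propensity to convert (assumed)
AD_LIFT = 0.02    # true incremental lift from exposure (assumed)
HOLDOUT = 0.10    # fraction of users held out as control (assumed)

test_conv = control_conv = test_n = control_n = 0
for _ in range(200_000):
    if random.random() < HOLDOUT:
        control_n += 1  # never shown the ad
        control_conv += random.random() < BASE_RATE
    else:
        test_n += 1  # shown the ad
        test_conv += random.random() < BASE_RATE + AD_LIFT

# Lift = test conversion rate minus control conversion rate,
# which recovers the incremental effect rather than total conversions.
lift = test_conv / test_n - control_conv / control_n
print(f"measured incremental lift: {lift:.3f}")
```

Note that a naive attribution view would credit the campaign with the full ~10% test-group conversion rate; the experiment reveals that only about two points of it are incremental.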

Further, through careful analysis of audience data, one can identify the ‘influenceables’ – pockets of audiences who are highly receptive to an advertising message, and will generate outsized ROI for a digital advertising campaign.
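One way to find such pockets is to measure lift segment by segment. In this sketch the segment names, propensities and lifts are invented for illustration; the point is that ranking segments by *measured incremental lift*, rather than by raw conversion rate, surfaces the influenceables:

```python
import random

random.seed(1)

# Hypothetical segments: (inherent propensity, incremental ad lift).
SEGMENTS = {
    "cart_abandoners": (0.20, 0.001),    # likely to buy anyway
    "category_browsers": (0.05, 0.030),  # highly receptive
    "new_visitors": (0.02, 0.010),
}

def measured_lift(base, lift, n=100_000):
    """Split a segment into equal test and control arms, return lift."""
    test = sum(random.random() < base + lift for _ in range(n))
    control = sum(random.random() < base for _ in range(n))
    return test / n - control / n

# Rank segments by measured lift, highest first.
ranked = sorted(SEGMENTS, key=lambda s: -measured_lift(*SEGMENTS[s]))
print(ranked)
```

Ranked by raw conversion rate, "cart_abandoners" would win and absorb the budget; ranked by lift, the receptive mid-funnel segment comes out on top.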

My own observation, across numerous campaigns, is that consumers with a high inherent propensity to convert tend to be the least influenceable! Many of these consumers have already made up their minds to purchase the product. Showing them yet another digital advertisement is a waste of money.

Yet that is precisely what many advertisers reward today: serving ads to audiences who are likely to convert anyway in order to gather credit under last-view attribution schemes. Algorithmic attribution might make this marginally better (at least credit is distributed over multiple views), but at significant expense.

Advertisers would be far better served if attribution providers invested in experimentation instead. However, I anticipate that many attribution vendors will fight this trend. The only rigorous way to experiment is to embed a control group in the ad serving decision process that is checked in real time, to ensure specific users are never shown an advertisement. This approach is radically different from the prevailing attribution strategy of “collect a lot of data and throw algorithms at it.”
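One common way to embed such a control group in the serving path is a deterministic hash of the user and campaign IDs (a sketch under assumed parameters; the hashing scheme and 5% holdout rate are illustrative, not a description of any vendor's system):

```python
import hashlib

HOLDOUT_PCT = 5  # percent of users never shown the campaign (assumed)

def in_control_group(user_id: str, campaign_id: str) -> bool:
    """Deterministically assign a user to the campaign's holdout.

    Hashing user and campaign together keeps the assignment stable
    across ad servers and requests, so a control user is never
    exposed, and each campaign gets an independent random split."""
    digest = hashlib.sha256(f"{campaign_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < HOLDOUT_PCT

def should_serve_ad(user_id: str, campaign_id: str) -> bool:
    # Checked in real time, before every serve decision.
    return not in_control_group(user_id, campaign_id)
```

Because the check is a pure function of the IDs, it needs no shared state and can be evaluated inside the real-time serving decision, which is exactly what post-hoc "collect data and throw algorithms at it" pipelines cannot do.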

By leveraging experimentation coupled with audience insights, savvy marketers can extract far more value from their digital advertising dollars. Those who do so now will gain significant competitive advantages.

