
If You Want To Measure Incrementality, Do It Right


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Sebastien Blanc, general manager, US, at Struq.

Given the number of channels where marketers can spend their budgets, understanding and proving return on ad spend (ROAS) is vital. If ROAS is properly understood, marketing budgets shouldn’t need to be capped.

If you can drive revenue above the cost of your advertising, why would you stop? Unfortunately, CMO budgets are capped because understanding incremental revenue is nontrivial.

Incremental revenue is defined as revenue that would not have occurred without a specific campaign, everything else being equal. It is a view that is radically different from, and more reliable than, last-click revenue.

The concept of incrementality is still maturing and different tests with the same name actually cover very different realities, ranging from accurate depiction of the truth to pure science fiction. But while testing incrementality is littered with pitfalls, such as misallocating users or premature decision-making, there are two methodologies that can help avoid them.

Different Approaches

An incremental revenue test compares the average revenue per user across two groups: users assigned to a retargeting group and users assigned to a control group.

When and how users are assigned is critical. You never want to compare people who saw an ad to those who did not. This would flatter results, since you would only show ads to the highest-value users. We need both groups to have the same blend of users, some highly engaged and others less engaged.

The first step is to split your pool of users into two groups: one to be retargeted in the normal way and a control group. There are two ways to split the groups that avoid the problem of comparing “apples with pears.”

You can split new users randomly as they land on the site and only show ads to the half that makes up the retargeting group (bear in mind that you will still decide not to show ads to some users in this group). Never show ads to the other half of users that make up the control group.
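A minimal sketch of how that split might be implemented, assuming a stable user identifier (such as a first-party cookie ID) is available when the user lands; the function name, salt and 50/50 share are illustrative, not a description of any particular vendor’s stack:

```python
import hashlib

def assign_group(user_id: str, test_name: str = "retargeting_incrementality",
                 control_share: float = 0.5) -> str:
    """Deterministically bucket a user the first time they land on the site.

    Hashing the user ID with a test-specific salt keeps the assignment stable
    across visits, so a control user can never drift into the retargeting group.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "control" if bucket < control_share else "retargeting"

# Only users bucketed into "retargeting" are ever eligible to be served ads.
print(assign_group("visitor-1234"))
```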


This methodology has the advantage of not triggering any media costs for users in the control group, making the experiment slightly cheaper. Revenue per user is computed using all the conversions happening in each group, ignoring notions like last click and last impression.
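Once both groups have accumulated conversions, the comparison itself is simple arithmetic. A sketch of the calculation, assuming you can aggregate total conversion revenue and user counts per group (the inputs and example figures are illustrative):

```python
def incremental_results(retargeted_revenue: float, retargeted_users: int,
                        control_revenue: float, control_users: int,
                        media_cost: float) -> dict:
    """Compare revenue per user across groups, counting every conversion in
    each group rather than only last-click or last-impression conversions."""
    rpu_retargeted = retargeted_revenue / retargeted_users
    rpu_control = control_revenue / control_users
    # Incremental revenue: the extra revenue per retargeted user, scaled up
    # to the whole retargeted population.
    incremental = (rpu_retargeted - rpu_control) * retargeted_users
    return {
        "revenue_per_user_retargeted": rpu_retargeted,
        "revenue_per_user_control": rpu_control,
        "incremental_revenue": incremental,
        "incremental_roas": incremental / media_cost,
    }

# Example: 500,000 users per group and $40,000 of retargeting media spend.
print(incremental_results(1_250_000, 500_000, 1_100_000, 500_000, 40_000))
```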

Because this set-up takes all users into account, the methodology drives the most reliable results when budgets are close to the maximum theoretical budget. If you decide to spend only 10% of the maximum deliverable budget, results are likely to be way below the potential incremental revenue of your program.

You can also split users randomly at the point of ad serving, showing ads to users in the retargeting group and charity ads to those in the control group. This methodology drives more reliable results at lower levels of spending and is easier to track on a daily basis. Because ads are served to both groups, the control group incurs additional media costs that will decrease the ROAS of your program during testing. This is the methodology most marketers go for, because you also help a charity in the process.
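Because control impressions are paid for in this set-up, the ROAS you report during the test should include them. A small sketch, assuming charity impressions are billed at rates comparable to retargeting impressions (all figures are illustrative):

```python
def test_period_roas(incremental_revenue: float,
                     retargeting_media_cost: float,
                     control_media_cost: float) -> float:
    """ROAS during a charity-ad test: control impressions are real media
    spend, so they dilute the return for as long as the test runs."""
    return incremental_revenue / (retargeting_media_cost + control_media_cost)

# Example: $60,000 incremental revenue, $40,000 retargeting spend, $4,000 of charity ads.
print(test_period_roas(60_000, 40_000, 4_000))  # ~1.36, vs. 1.5 without the charity ads
```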

Neither methodology is perfect, so marketers need to choose based on specific sites and goals.

The Right Set-Up For Your Goals

As with most things digital, the devil lies in the detail. The first critical aspect is understanding how many conversions are needed in the control group for the results to be reliable. There are several simulators out there to help you compute the right sample size. Keep in mind that before you hit this threshold, it is impossible to rely on any result, because small samples often produce dramatic results, either positive or negative.
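One way to approximate what those simulators compute is the standard two-proportion sample size formula. A rough sketch, assuming conversion rate is the metric being compared and using purely illustrative parameters:

```python
from statistics import NormalDist

def users_needed_per_group(baseline_cr: float, relative_lift: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed in each group to detect a relative lift in
    conversion rate at the given significance level and statistical power."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Example: 2% baseline conversion rate, hoping to detect a 10% relative lift.
print(users_needed_per_group(0.02, 0.10))  # roughly 80,000 users per group
```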

Besides statistical significance, it is also important to include at least one full decision cycle in your experiment, preferably two. If you know that customers usually take seven days to buy one of your products, then the incrementality test should ideally last at least 14 days to include two full cycles.

Most marketers want to go for a 50/50 split of users. Even though it might sound more rigorous, it does not actually make results more reliable or easier to interpret; it simply limits the revenue-generating power of your campaign. On a website receiving more than 1 million visitors per month, you can reach statistical significance in a few weeks with a 90/10 split, maximizing revenue at the same time as you measure incremental revenue.
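To make that trade-off concrete, a back-of-the-envelope sketch, reusing the illustrative sample size from above and assuming the smaller control group is the bottleneck:

```python
def days_to_significance(monthly_visitors: int, control_share: float,
                         users_needed_in_control: int) -> float:
    """With an uneven split, estimate how many days of traffic it takes to
    fill the control group, which is the limiting factor."""
    daily_control_users = monthly_visitors / 30 * control_share
    return users_needed_in_control / daily_control_users

# Example: 1 million visitors/month, 90/10 split, ~80,000 control users required.
print(days_to_significance(1_000_000, 0.10, 80_000))  # ~24 days
```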

Finally, it is important to make sure the control group is not contaminated, meaning that no user in the control group should ever see a retargeting ad. You can guarantee that by only populating the control group with brand new users.

Being able to measure incremental revenue in an accurate way is the key to maximizing your growth as a retailer. Since each situation is unique, make sure you study your goals thoroughly and agree on the best possible methodology with your vendor before starting any test.

Follow Struq (@struq) and AdExchanger (@adexchanger) on Twitter.
