
The Uncomfortable Truth About Advertising Effectiveness: Why Marketers Avoid True Experimentation

Bill Grosso, CEO, Game Data Pros
Julian Runge, assistant professor of marketing, Northwestern University

With thanks to Igor Skokan, Global Marketing Science Director, Meta

Understanding advertising effectiveness is the cornerstone of successful marketing, yet marketers often shy away from the most reliable tools for the job.

Why? Because true experimentation – the kind that reveals hard truths about what works and what doesn’t – is uncomfortable. It challenges assumptions, disrupts routines and demands rigor.

Despite its proven benefits, we believe experimentation remains underused, often sidelined by observational causal inference (OCI) methods that don’t require specific intervention in marketing operations. Because these models are easy to bend toward a preferred narrative, they can lead to flawed conclusions that misallocate resources and, in the worst case, hinder growth.

The case for experimentation

Controlled experiments, such as A/B tests and randomized controlled trials (RCTs), are the gold standard for understanding advertising effectiveness. They isolate causal relationships, providing marketers with actionable insights that no observational model can reliably deliver.
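To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the two steps behind a user-level holdout RCT: deterministic random assignment, and a two-proportion z-test on the resulting conversion rates. All function names and figures are illustrative, not drawn from any study cited here.

```python
# Hypothetical sketch of a user-level holdout RCT. Users are randomly
# split into test (sees ads) and control (holdout); the difference in
# conversion rates estimates the ad's incremental effect.
import math
import random

def assign_group(user_id: int, test_fraction: float = 0.5) -> str:
    """Deterministic random assignment: the same user always lands
    in the same group, so exposure stays consistent across sessions."""
    return "test" if random.Random(user_id).random() < test_fraction else "control"

def incremental_lift(test_conv: int, test_n: int,
                     ctrl_conv: int, ctrl_n: int) -> tuple[float, float]:
    """Absolute lift in conversion rate, plus a two-proportion
    z-score to gauge statistical significance."""
    p_t, p_c = test_conv / test_n, ctrl_conv / ctrl_n
    pooled = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / test_n + 1 / ctrl_n))
    return p_t - p_c, (p_t - p_c) / se

# Illustrative numbers: 5.4% vs. 4.5% conversion on 10k users per arm.
lift, z = incremental_lift(test_conv=540, test_n=10_000,
                           ctrl_conv=450, ctrl_n=10_000)
print(f"lift: {lift:.4f}, z: {z:.2f}")
```

A z-score above roughly 1.96 indicates the measured lift is unlikely to be noise at the conventional 95% confidence level; real deployments would also account for power, multiple testing and novelty effects.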

Consider this: A recent study found that digital advertisers who incorporated experimentation into their strategies saw a 30% performance boost in the first year and 45% in the second.

Yet, despite these clear advantages, marketers often rely solely on OCI techniques such as attribution or marketing mix models (MMMs). These models are comprehensive and can be estimated without intervening in ongoing marketing operations to run experiments. A central analytics team of advanced scientists can execute them and inform C-level marketing strategy decisions – without having to sync with the regional marketers or media buyers who implement those strategies.

But this approach can provide a false sense of control, allowing executive-level marketers to craft narratives that fit their biases rather than confront inconvenient truths. If experiments run by tactical, executing marketers disagree with strategic, executive guidance, the learnings all too often go unnoticed and unused. 

And if lower-level experiments repeatedly disagree with your central, strategy-level OCI model, the model is likely wrong, not the experiments.

The gold standard and a pragmatic alternative


By randomly assigning users to test and control groups, RCTs eliminate bias and confounding variables, providing a clear view of an ad’s incremental impact. However, RCTs require coordination and resources, particularly in multi-channel environments, and are increasingly challenged by stricter privacy regulations.

For marketers unable to implement user-level RCTs, cluster-level RCTs offer a viable compromise. By randomizing at the geographic or demographic level, these tests measure incremental impact while navigating privacy constraints. Geo experiments, for example, vary ad exposure by region, allowing for causal measurement without user-level data. When executed carefully, including balancing test and control regions on key pre-experiment data, such experiments can approach the precision of user-level RCTs.
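One common way to balance arms before a geo experiment, sketched below under assumed data: rank regions on a pre-period KPI, pair neighbors, and randomize within each pair so test and control start out comparable. The region names and KPI values are invented for illustration.

```python
# Hypothetical sketch of a matched-pair geo experiment design.
# Pairing regions on pre-period KPI before randomizing controls for
# regional differences that would otherwise confound the readout.
import random

def matched_pair_split(pre_period_kpi: dict[str, float],
                       seed: int = 42) -> tuple[list[str], list[str]]:
    """Sort geos by pre-period KPI, pair adjacent geos, then flip a
    coin per pair to decide which member is test vs. control."""
    rng = random.Random(seed)
    ranked = sorted(pre_period_kpi, key=pre_period_kpi.get)
    test, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        a, b = ranked[i], ranked[i + 1]
        if rng.random() < 0.5:
            a, b = b, a  # randomize which side of the pair gets ads
        test.append(a)
        control.append(b)
    return test, control

def geo_lift(test_kpi: dict[str, float], ctrl_kpi: dict[str, float]) -> float:
    """Incremental impact: test total minus control total, which the
    matched design makes a fair comparison."""
    return sum(test_kpi.values()) - sum(ctrl_kpi.values())

# Illustrative pre-period sales by region.
pre = {"north": 120.0, "south": 118.0, "east": 80.0, "west": 82.0}
test_geos, control_geos = matched_pair_split(pre)
lift = geo_lift({"geo_a": 130.0}, {"geo_b": 118.0})
```

In practice, teams would also run a power analysis and validate the pairing on multiple pre-period covariates, not a single KPI; open-source tooling exists for this, but the two-function version above captures the core idea.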

Some companies even opt for always-on geographical holdouts. This approach consistently informs marketing operations with high accuracy, enabling teams to validate and calibrate OCI effectiveness estimates. However, getting this right can require breaking through tactical-strategy-level barriers and instituting a shared learning agenda across executive and operational levels.
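Validation and calibration can be as simple as scaling a model's channel estimate by the ratio the holdout actually measured. A hypothetical sketch, with invented channel names and figures:

```python
# Hypothetical sketch: calibrating OCI/MMM channel estimates against
# an always-on geo holdout. The experiment's measured incremental
# value anchors a multiplicative factor applied to the model's output.
def calibration_factor(experiment_incremental: float,
                       model_estimate: float) -> float:
    """Ratio of experimentally measured lift to the model's claim.
    A factor below 1.0 means the model overstates the channel."""
    if model_estimate <= 0:
        raise ValueError("model estimate must be positive")
    return experiment_incremental / model_estimate

def calibrated(model_estimates: dict[str, float],
               factors: dict[str, float]) -> dict[str, float]:
    """Scale each channel's model estimate by its experimental
    factor; channels without an experiment pass through unchanged."""
    return {ch: est * factors.get(ch, 1.0)
            for ch, est in model_estimates.items()}

# Illustrative: the holdout measured $60k incremental where the
# model claimed $100k, so the channel's estimates are scaled by 0.6.
factor = calibration_factor(experiment_incremental=60_000,
                            model_estimate=100_000)
adjusted = calibrated({"search": 100_000, "social": 40_000},
                      {"search": factor})
```

Production calibration schemes can be more sophisticated (Bayesian priors, time-varying factors), but even this simple ratio forces the strategy-level model to reconcile with ground truth from the operational level.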

That said, cluster-level experiments are not without their challenges. Regional differences, external factors and spillover effects can complicate analysis. Yet, when executed carefully, they provide a significant improvement over purely observational methods.

Breaking the cycle of complacency

Marketers need to stop settling for “good enough” and start demanding better. Experimentation isn’t just a tool; it’s a competitive advantage. Firms that embrace experimentation outperform their peers. It’s time to confront the bias against unbiased measurement.

The resistance to experimentation isn’t just technical; it’s cultural. Organizations must prioritize a mindset shift and champion experimentation as a core strategy. This requires leadership buy-in, cross-functional collaboration and a willingness to embrace the discomfort of learning.

Click here for Part 2: How to integrate experimentation into marketing measurement and make sure your organization is set up for success.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

