
From Theory To Practice: How Organizations Can Embrace Experimentation In Marketing Measurement

Julian Runge, assistant professor of marketing, Northwestern University
Igor Skokan, global marketing science director, Meta

With thanks to Bill Grosso, CEO, Game Data Pros

Experimentation offers unrivaled insight into marketing effectiveness, yet it remains underused.

But what can organizations do to successfully integrate experimentation into their measurement frameworks? 

Build the right team and secure executive buy-in

The foundation of successful experimentation begins with having the right people and support structures. This starts with hiring skilled data scientists or analysts who understand marketing measurement and experimental design. These experts will lead the charge, designing and analyzing experiments to generate actionable insights.

But technical expertise alone isn’t enough. Without executive endorsement, even the best experimentation efforts will falter. A clear signal of strategic importance – like a direct reporting line to a C-level executive – can prioritize resources and set the tone for organizational commitment to experiment-driven decision-making.

Foster a culture of curiosity

Experimentation requires more than tools and expertise; it requires a mindset shift. Organizations must embrace a culture of curiosity, where testing and learning are core values that reach across departments and organizational levels. This includes celebrating both successes and failures as opportunities for growth.

Leadership plays a pivotal role here. By endorsing experimentation as a risk worth taking, executives can dismantle silos between departments such as analytics, marketing and finance. Encouraging open communication and collaboration across tactical and strategic levels ensures that experimentation becomes an integral part of the workflow.

Commit to a learning agenda

A shared learning agenda is a practical way to align teams and foster collaboration across organizational levels. This agenda should outline clear objectives for experimentation, ensuring that every test answers specific, measurable questions. For example: What do we want to learn from this experiment? How will these insights influence our decisions? How will experimental results calibrate broader observational models like marketing mix modeling (MMM)?

By keeping experimentation focused on high-impact questions, organizations can prioritize efforts and direct resources effectively, ensuring that hypotheses, objectives and governance stay aligned.

Start simple, embrace cluster-level RCTs

If your team is new to experimentation, begin with simpler interventions, such as introducing controlled variations in spending and geo-distribution of activities. Modern marketing platforms like Google Ads and Meta Ads Manager include built-in experimentation tools. While not perfect, these tools can serve as a gateway to more rigorous testing.

As you grow your experimentation effort, be vigilant about issues like randomization errors and confounding variables.

Whenever feasible, organizations should prioritize randomized controlled trials (RCTs) to isolate the incremental impact of marketing efforts.

As privacy regulations make user-level RCTs increasingly impractical, cluster-level RCTs such as geo experiments provide a pragmatic alternative. By randomizing at a regional level, marketers can measure campaign effects while navigating privacy and logistical limitations.
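To make the mechanics concrete, here is a minimal sketch of a geo experiment: regions are randomly split into treatment and control groups, and lift is estimated as the difference in mean outcomes. The geo names and outcome numbers are hypothetical, and a real test would add power analysis and significance testing.

```python
import random
import statistics

def assign_geos(geos, seed=42):
    """Randomly split geo units (e.g., regions or DMAs) into treatment and control."""
    rng = random.Random(seed)
    shuffled = list(geos)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def estimate_lift(outcomes, treatment, control):
    """Estimate incremental lift as the difference in mean outcome
    between treatment and control geos."""
    t_mean = statistics.mean(outcomes[g] for g in treatment)
    c_mean = statistics.mean(outcomes[g] for g in control)
    return t_mean - c_mean

# Hypothetical post-campaign outcomes per geo (conversions per 1,000 residents).
outcomes = {"geo_a": 12.0, "geo_b": 9.5, "geo_c": 11.0,
            "geo_d": 8.5, "geo_e": 10.5, "geo_f": 9.0}
treatment, control = assign_geos(outcomes)
lift = estimate_lift(outcomes, treatment, control)
print(f"Estimated lift: {lift:.2f} conversions per 1,000 residents")
```

Because randomization happens at the region level rather than the user level, no individual-level tracking is required, which is what makes this design robust to privacy restrictions.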

Validate observational models, establish feedback loops

Observational causal inference (OCI) models are important to assess the big picture, but they require experimental validation to minimize biases. Experimental results can serve as a benchmark, ensuring that observational estimates align with reality. 

Advanced approaches, such as Bayesian modeling or machine-learning-based optimization, can incorporate experimental findings directly into these models, enhancing their accuracy and reliability. Experiments can also serve to scrutinize model assumptions and parametrization.
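One simple way to fold an experimental result into an observational estimate, sketched here with illustrative numbers, is a precision-weighted (conjugate normal) Bayesian update: the MMM-derived effect serves as the prior and the experimental lift as the likelihood.

```python
def calibrate_effect(prior_mean, prior_sd, exp_mean, exp_sd):
    """Combine an MMM-derived effect estimate (prior) with an experimental
    lift estimate via a conjugate normal update.
    Returns the posterior mean and standard deviation."""
    prior_prec = 1.0 / prior_sd**2
    exp_prec = 1.0 / exp_sd**2
    post_prec = prior_prec + exp_prec
    post_mean = (prior_mean * prior_prec + exp_mean * exp_prec) / post_prec
    return post_mean, post_prec**-0.5

# Hypothetical numbers: the MMM says the channel returns $2.00 per $1 spent
# (wide uncertainty), while a geo experiment measured $1.20 (tighter).
post_mean, post_sd = calibrate_effect(2.0, 0.8, 1.2, 0.3)
print(f"Calibrated return per dollar: {post_mean:.2f} +/- {post_sd:.2f}")
```

Because the experimental estimate is more precise, the calibrated value lands much closer to it than to the observational prior, which is exactly the corrective behavior this calibration is meant to provide.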

A challenge can be that experiments happen at tactical, executing levels, but MMMs are maintained at strategic, executive levels. Use management techniques such as review meetings, shared learning agendas and strategic initiatives that span organizational levels to ensure that higher-level OCI models such as MMM are validated and calibrated against experimental estimates. 

The true value of experimentation lies in its ability to drive ongoing improvement. Establishing feedback loops ensures that insights from experiments inform future campaigns and strategies. Regularly reviewing results and adjusting approaches fosters an iterative process that adapts to changing market dynamics.

Getting started

A great opportunity for experimentation to drive immediate value arises when a new medium with which the organization has limited experience is added to the mix.

Data for this channel is initially sparse, but OCI models require substantial amounts of data for estimation. An advertising channel needs historical data above a minimal volume threshold, along with variation in spend and exposure, to be meaningfully incorporated into an MMM.

In this instance, start by evaluating early investments in the new channel with a cluster-level RCT and use experimental results to calibrate OCI models. Set a learning agenda item that aligns executive and executing levels (e.g., “evaluate Instagram ads’ effect on brand awareness among adults aged 18-24” or “assess point-of-care advertising’s impact on sales of medication x”) and ensure a feedback loop between experimental results and your strategy-level OCI model.
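One common way to close that feedback loop, shown here as a sketch with hypothetical numbers, is a calibration factor: the ratio of experimentally measured incremental conversions to the conversions the observational model or platform attributed to the channel over the same period, applied to ongoing reporting.

```python
def calibration_factor(incremental, attributed):
    """Ratio of experimentally measured incremental conversions to
    model- or platform-attributed conversions for the same period."""
    return incremental / attributed

def adjust(attributed_series, factor):
    """Scale a series of attributed conversions by the calibration factor
    so reporting reflects experimentally validated incrementality."""
    return [round(x * factor, 1) for x in attributed_series]

# Hypothetical: a geo test measured 800 incremental conversions while the
# observational model credited the new channel with 1,000.
factor = calibration_factor(800, 1000)
print(adjust([120, 95, 140], factor))  # → [96.0, 76.0, 112.0]
```

The factor should be refreshed with each new experiment, since incrementality can drift as spend levels, creative and audiences change.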

Organizations that integrate experimentation into their marketing measurement frameworks unlock a competitive advantage.

It’s not a question of choosing between observational models and experiments but of combining the two and embracing experimentation as a cornerstone of your strategy.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
