
From Theory To Practice: How Organizations Can Embrace Experimentation In Marketing Measurement

Julian Runge, assistant professor of marketing, Northwestern University
Igor Skokan, global marketing science director, Meta

With thanks to Bill Grosso, CEO, Game Data Pros

Experimentation has been shown to offer unrivaled insights into marketing effectiveness, yet it remains underused.

But what can organizations do to successfully integrate experimentation into their measurement frameworks? 

Build the right team and secure executive buy-in

The foundation of successful experimentation begins with having the right people and support structures. This starts with hiring skilled data scientists or analysts who understand marketing measurement and experimental design. These experts will lead the charge, designing and analyzing experiments to generate actionable insights.

But technical expertise alone isn’t enough. Without executive endorsement, even the best experimentation efforts will falter. A clear signal of strategic importance – like a direct reporting line to a C-level executive – can prioritize resources and set the tone for organizational commitment to experiment-driven decision-making.

Foster a culture of curiosity

Experimentation requires more than tools and expertise; it requires a mindset shift. Organizations must embrace a culture of curiosity, where testing and learning are core values that reach across departments and organizational levels. This includes celebrating both successes and failures as opportunities for growth.

Leadership plays a pivotal role here. By endorsing experimentation as a risk worth taking, executives can dismantle silos between departments such as analytics, marketing and finance. Encouraging open communication and collaboration across tactical and strategic levels ensures that experimentation becomes an integral part of the workflow.

Commit to a learning agenda

A shared learning agenda is a practical way to align teams and foster collaboration across organizational levels. This agenda should outline clear objectives for experimentation, ensuring that every test answers specific, measurable questions. For example: What do we want to learn from this experiment? How will these insights influence our decisions? How will experimental results calibrate broader observational models like MMM?


By keeping experimentation focused on high-impact questions, organizations can prioritize efforts and direct resources effectively, ensuring that hypotheses, objectives and governance stay aligned.

Start simple, embrace cluster-level RCTs

If your team is new to experimentation, begin with simpler interventions, such as controlled variations in spend levels and in the geographic distribution of activity. Modern marketing platforms like Google Ads and Meta Ads Manager include built-in experimentation tools. While not perfect, these tools can serve as a gateway to more rigorous testing.

As you grow your experimentation effort, be vigilant about issues like randomization errors and confounding variables.

Whenever feasible, organizations should prioritize randomized controlled trials (RCTs) to isolate the incremental impact of marketing efforts.

As privacy regulations make user-level RCTs increasingly impractical, cluster-level RCTs such as geo experiments provide a pragmatic alternative. By randomizing at a regional level, marketers can measure campaign effects while navigating privacy and logistical limitations.
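The mechanics of a geo experiment can be sketched in a few lines: regions, not individual users, are the randomization unit, and lift is read off as the difference between treatment and control regions. All region names and sales figures below are invented for illustration; a real analysis would also account for pre-period trends and statistical uncertainty.

```python
# Minimal sketch of a cluster-level (geo) RCT. Regions are randomized
# into treatment and control; all data here is simulated.
import random
import statistics

random.seed(42)

regions = [f"region_{i:02d}" for i in range(20)]
random.shuffle(regions)
treatment, control = regions[:10], regions[10:]  # 50/50 geo split

# Hypothetical per-region sales observed after the campaign period.
sales = {r: random.gauss(100, 10) for r in control}
sales.update({r: random.gauss(112, 10) for r in treatment})  # simulated effect

t_mean = statistics.mean(sales[r] for r in treatment)
c_mean = statistics.mean(sales[r] for r in control)
lift = (t_mean - c_mean) / c_mean  # incremental effect relative to control

print(f"treatment mean: {t_mean:.1f}, control mean: {c_mean:.1f}")
print(f"estimated lift: {lift:.1%}")
```

In practice, matched-market designs or synthetic-control methods are typically layered on top of this basic split to reduce variance across heterogeneous regions.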

Validate observational models, establish feedback loops

Observational causal inference (OCI) models are important to assess the big picture, but they require experimental validation to minimize biases. Experimental results can serve as a benchmark, ensuring that observational estimates align with reality. 

Advanced approaches, such as Bayesian modeling or machine-learning-based optimization, can incorporate experimental findings directly into these models, enhancing their accuracy and reliability. Experiments can also serve to scrutinize model assumptions and parametrization.
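One concrete way experimental findings can feed a Bayesian model is a conjugate normal-normal update: treat the MMM's channel estimate as the prior and the experiment's estimate as the observation. The numbers below, and the choice of ROAS as the calibrated quantity, are illustrative assumptions, not a prescribed method.

```python
# Hedged sketch: calibrate an MMM channel estimate against an experimental
# result via a conjugate normal-normal update. All figures are illustrative.

def normal_update(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior of a normal mean with known variances (conjugate update)."""
    post_var = 1 / (1 / prior_var + 1 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

# MMM-implied ROAS for a channel (prior) vs. a geo experiment's estimate.
mmm_roas, mmm_var = 2.4, 0.5 ** 2   # observational estimate, wider uncertainty
exp_roas, exp_var = 1.6, 0.2 ** 2   # experimental estimate, tighter uncertainty

post_mean, post_var = normal_update(mmm_roas, mmm_var, exp_roas, exp_var)
print(f"calibrated ROAS: {post_mean:.2f} +/- {post_var ** 0.5:.2f}")
```

Because the experimental estimate carries tighter uncertainty, the posterior is pulled toward it, which is exactly the "experiments discipline the observational model" behavior described above.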

A challenge can be that experiments happen at tactical, executing levels, but MMMs are maintained at strategic, executive levels. Use management techniques such as review meetings, shared learning agendas and strategic initiatives that span organizational levels to ensure that higher-level OCI models such as MMM are validated and calibrated against experimental estimates. 

The true value of experimentation lies in its ability to drive ongoing improvement. Establishing feedback loops ensures that insights from experiments inform future campaigns and strategies. Regularly reviewing results and adjusting approaches fosters an iterative process that adapts to changing market dynamics.

Getting started

A great opportunity for experimentation to drive immediate value arises when a new medium, one the organization has limited experience with, is added to the mix.

At that point, data for the channel is sparse, but OCI models require substantial amounts of data for estimation. An advertising channel needs a minimal volume of historical data, as well as variation in spend and exposure, to be meaningfully incorporated into an MMM.

In this instance, start by evaluating early investments in the new channel with a cluster-level RCT and use experimental results to calibrate OCI models. Set a learning agenda item that aligns executive and executing levels (e.g., “evaluate Instagram ads’ effect on brand awareness among adults aged 18-24” or “assess point-of-care advertising’s impact on sales of medication x”) and ensure a feedback loop between experimental results and your strategy-level OCI model.
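The calibration step for a new channel can be reduced to a simple ratio: compare the incremental sales the geo experiment measured with what the MMM credits the channel over the same period, then apply that factor to subsequent MMM readings until the next experiment refreshes it. All figures below are hypothetical.

```python
# Hypothetical sketch: derive a calibration factor for a new channel from a
# cluster-level RCT and apply it to later MMM estimates. Figures are invented.

experiment_incremental_sales = 120_000   # incremental sales from the geo RCT
mmm_attributed_sales = 180_000           # MMM credit over the same period

# A factor below 1 means the MMM over-credits the channel; above 1, it under-credits.
calibration_factor = experiment_incremental_sales / mmm_attributed_sales

# Apply the factor to a later MMM reading until the next experiment updates it.
next_period_mmm_estimate = 200_000
calibrated_estimate = next_period_mmm_estimate * calibration_factor

print(f"calibration factor: {calibration_factor:.2f}")
print(f"calibrated contribution: {calibrated_estimate:,.0f}")
```

This scalar adjustment is the simplest form of the feedback loop; the Bayesian updating sketched earlier is a more principled version that also carries uncertainty through.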

Organizations that integrate experimentation into their marketing measurement frameworks unlock a competitive advantage.

It’s not a question of choosing between observational models and experiments but of combining the two and embracing experimentation as a cornerstone of your strategy.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.


