
What Weather Prediction Tells Us About Programmatic


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jay Habegger, co-founder and CEO at ownerIQ.

When I wrote this column last month, a major snowstorm was bearing down on New England, where I live. The storm could have hit with either a wallop or a whimper. Which was more likely? It depended on which weather model you believed.

Sound familiar? In some ways, there are parallels between predicting the weather and programmatic advertising.

In the US, we have the North American Mesoscale Forecast System (NAM) and the Global Forecast System (GFS). The National Oceanic and Atmospheric Administration (NOAA) also makes the results of multiple weather models available free of charge. Canada produces weather forecasts using its own model, while a coalition of 34 European countries operates the European Centre for Medium-Range Weather Forecasts (ECMWF) model. There are hundreds of smaller-scale and experimental weather models in use around the globe.

Each weather model has different strengths and weaknesses. Some are designed for short-range forecast accuracy, while others take the long view. Some models are best applied to specific forecasting problems or areas, while others focus on a broader picture.

All have access to the same raw weather data at massive scale, yet they produce different forecasts with sometimes significant variability. Tremendous resources are deployed to interpret the data. NOAA, for example, claims a staff of 6,773 scientists and engineers. Clearly, lots of shared data, Ph.D.s, computing resources and money are required to interpret and use raw weather data.

The takeaway is that despite the current worship of raw data among marketers and investors, the ability to interpret data and transform it into useful, predictive information is the more important trick, and perhaps the one that adds the most value.

The corollary is that interpreting data and creating useful information isn’t easy. The fundamental media problem is sorting through billions of ad placement opportunities appearing in front of hundreds of millions of users, then using a limited budget to find the combination of opportunities, users and ad frequency that produces the best result for the advertiser.
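To make that concrete, here is a minimal sketch of the selection problem as stated, assuming each opportunity already carries a predicted conversion probability from the kind of model discussed later in this column. Every name, price and score below is hypothetical, and a greedy buy by expected conversions per dollar stands in for whatever a real bidder would do.

```python
# A greedy, budget-capped media plan. All prices and probabilities are
# hypothetical; "p_conversion" stands in for a score produced by a model.

def plan_buys(opportunities, budget):
    """Buy the best expected-conversions-per-dollar opportunities first."""
    ranked = sorted(
        opportunities,
        key=lambda o: o["p_conversion"] / o["price"],
        reverse=True,
    )
    plan, spend = [], 0.0
    for opp in ranked:
        if spend + opp["price"] <= budget:
            plan.append(opp["id"])
            spend += opp["price"]
    return plan, spend

# Each entry is one opportunity-user-frequency combination.
opportunities = [
    {"id": "A", "price": 2.00, "p_conversion": 0.010},
    {"id": "B", "price": 0.50, "p_conversion": 0.004},
    {"id": "C", "price": 1.00, "p_conversion": 0.002},
]

print(plan_buys(opportunities, budget=2.50))  # (['B', 'A'], 2.5)
```

Even this toy version shows the shape of the problem: the budget constraint forces trade-offs, and the quality of the plan is only as good as the scores feeding it.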

Only in extremely limited cases is a single signal, and a rigid segment built on that signal, sufficient to filter the possible opportunity-user-frequency combinations down to a manageable number and produce great outcomes. The technique works only in the presence of both a single highly predictive signal and a very small user population.

That scenario describes exactly one common tactic: retargeting. It is the only situation where this relatively simplistic technique works reliably, which explains why site retargeting was the first widely adopted programmatic use case.
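Here is a minimal sketch of why that special case works; the Impression record, the signal name and the frequency cap are all hypothetical. One highly predictive signal, a past visit to the advertiser’s site, collapses a huge candidate pool to a small, manageable one without any model at all.

```python
# The retargeting special case: one strong signal plus a small population.
# The Impression fields and the frequency cap below are hypothetical.

from dataclasses import dataclass

@dataclass
class Impression:
    user_id: str
    visited_advertiser_site: bool  # the single predictive signal
    frequency_today: int           # exposures this user has already had

def retargeting_filter(impressions, frequency_cap=3):
    """Keep only past site visitors who are still under the frequency cap."""
    return [
        imp for imp in impressions
        if imp.visited_advertiser_site and imp.frequency_today < frequency_cap
    ]

candidates = [
    Impression("u1", visited_advertiser_site=True, frequency_today=1),
    Impression("u2", visited_advertiser_site=False, frequency_today=0),
    Impression("u3", visited_advertiser_site=True, frequency_today=5),
]

# Only u1 survives: u2 never visited the site, u3 is over the cap.
print([imp.user_id for imp in retargeting_filter(candidates)])  # ['u1']
```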


In all other cases, the simplistic technique fails. Data alone, of any type or scale, will not deliver the marketing outcome, because no single signal is predictive enough to overcome the noise inherent in the programmatic landscape.

Noise comes in many forms. Fraud and brand safety are the obvious ones, but there are many others: publishers that load pages with lots of ad units; publishers with content so eye-catching that it overwhelms any ads placed on the page; users who seem to have nothing better to do than surf the web and explore everything, making their observed behaviors of marginal importance; and users who share a family computer with their teenage daughter and therefore present a confusing bag of behaviors.

Marketers who bought into the idea that raw data is the most important or the only determining factor in media placement are now discovering otherwise, to their dismay. A DSP software license combined with data from third-party aggregators, or even with large quantities of first-party data, cannot produce great results in the absence of models that interpret that data in the context of the specific marketing problems to be solved.

Just as in weather forecasting, marketers need programmatic forecasting models that score each of the billions of possible opportunity-user-frequency combinations for each business use case and predict what will happen if the advertiser’s media is placed in a given opportunity.
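As one illustration, here is a minimal sketch of such a scorer using a simple logistic model. Every feature name and weight is hypothetical, not a description of any particular vendor’s system; the point is that the same raw signals yield different predictions depending on the weights a model has learned for a given use case.

```python
# A toy opportunity scorer. All feature names and weights are hypothetical.

import math

def predict_outcome(features, weights, bias=-4.0):
    """Logistic model: map weighted signals to a predicted conversion probability."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned weights: how predictive each signal is for one use case.
weights = {
    "visited_site": 2.5,       # strong first-party signal
    "in_market_segment": 0.8,  # weaker third-party signal
    "frequency_seen": -0.4,    # diminishing returns from repeat exposure
    "page_clutter": -0.6,      # noise: a page crowded with ad units
}

# One opportunity-user-frequency combination, described by its raw signals.
combo = {"visited_site": 1, "in_market_segment": 1, "frequency_seen": 2, "page_clutter": 1}

print(f"{predict_outcome(combo, weights):.4f}")  # ~0.1091
```

Two models fed the identical signals but carrying different weights will rank the same billions of combinations differently, which is exactly the weather-model situation described above.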

And, it turns out, the quality of these models and how they are built add more value for the marketer than reliance on raw data alone. Recall that in weather forecasting everybody has access to the same data, yet the predictive results are anything but the same.

Data isn’t unimportant, to be sure. False signals will mislead any model. Some signals are more predictive than others. Raw data inputs do matter.

But, when it comes to programmatic excellence, data isn’t the only thing that matters. Raw data deployed without a model to sort through the billions of opportunity-user-frequency combinations is really just a baby step up from the often-derided spray-and-pray techniques. Models matter, too, and marketers should be as focused on how their data is going to be interpreted and used as they are on the data inputs.

Oh, and that snowstorm? Whimper. The Canadian model got it right. NOAA got it wrong. Those who canceled their vacations expecting the wallop lost out.

Same data, different prediction, different outcome. The model matters.

Follow ownerIQ (@ownerIQ) and AdExchanger (@adexchanger) on Twitter.

