
What Weather Prediction Tells Us About Programmatic


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jay Habegger, co-founder and CEO at ownerIQ.

When I wrote this column last month, a major snowstorm was bearing down on New England, where I live. The storm could have hit with a wallop or passed with a whimper. Which was more likely? It depended on which weather model you believed.

Sound familiar? In some ways, there are parallels between predicting the weather and programmatic advertising.

In the US, we have the North American Mesoscale (NAM) Forecast System and the Global Forecast System (GFS). The National Oceanic and Atmospheric Administration (NOAA) also makes the results of multiple weather models available free of charge. Canada produces weather forecasts using its own model, while a coalition of 34 European countries operates the ECMWF model. There are hundreds of smaller-scale and experimental weather models used around the globe.

Each weather model has different strengths and weaknesses. Some are designed for short-range forecast accuracy, while others take the long view. Some models are best applied to specific forecasting problems or areas, while others focus on a broader picture.

All have access to the same raw weather data at massive scale, yet they produce different forecasting results with sometimes significant variability. Tremendous resources are deployed to interpret the data. NOAA, for example, claims a staff of 6,773 scientists and engineers. Clearly lots of shared data, Ph.D.s, computing resources and money are required to interpret and use raw weather data.

The takeaway is that despite the current worship of raw data among marketers and investors, the ability to interpret data and transform it into useful, predictive information is the more important trick and perhaps the one that adds the most value.

The corollary conclusion is that interpreting data and creating useful information isn’t easy. The fundamental media problem is sorting through billions of ad placement opportunities appearing in front of hundreds of millions of users, and using a limited budget to find the combination of opportunities, users and ad frequency that produces the best result for the advertiser.

Only in extremely limited cases are a single signal, and a rigid segment built on that signal, sufficient to filter the possible opportunity-user-frequency combinations down to a manageable number and produce great outcomes. This technique works only in the presence of both a single, highly predictive signal and a very small user population.

That scenario describes one common tactic: retargeting. It is the only situation where this relatively simplistic technique works reliably, which explains why site retargeting was the first widely adopted programmatic use case.
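As an illustration of how simple that filter is, here is a minimal sketch in Python. All of the names (`Impression`, `site_visitors`, `retarget_filter`) are hypothetical, invented for this example; the point is that retargeting reduces to a single set-membership test on one signal.

```python
# Hypothetical sketch: narrowing bid opportunities with one retargeting
# signal. All names here are illustrative, not a real platform's API.
from dataclasses import dataclass

@dataclass
class Impression:
    user_id: str
    placement: str

# The single highly predictive signal: users who already visited the site.
site_visitors = {"u1", "u3"}

def retarget_filter(impressions):
    """Keep only opportunities shown to known site visitors."""
    return [imp for imp in impressions if imp.user_id in site_visitors]

bids = retarget_filter([
    Impression("u1", "news-homepage"),
    Impression("u2", "sports-article"),
    Impression("u3", "recipe-page"),
])
print([imp.user_id for imp in bids])  # → ['u1', 'u3']
```

One set lookup per impression is enough here precisely because the population is small and the signal is strong; the column's argument is that almost no other use case enjoys those conditions.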


In all other cases, the simplistic technique fails. Data alone, of any type or scale, will not deliver the marketing outcome, because no single signal is predictive enough to overcome the noise inherent in the programmatic landscape.

Noise comes in many forms. Fraud and brand safety are the obvious ones, but there are many others: publishers that load pages with ad units; content so eye-catching that it overwhelms any ad placed on the page; users who seem to have nothing better to do than surf the web and explore everything, making their observed behaviors of marginal importance; and users who share a family computer with a teenage daughter and therefore present a confusing bag of behaviors.

Marketers who bought into the idea of raw data being the most important or only determining factor in media placement are now discovering this to their dismay. A DSP software license combined with data from third-party aggregators, or even just with large quantities of first-party data, cannot produce great results in the absence of models that interpret that data in the context of the specific marketing problems to be solved.

Just like weather forecasters, marketers need programmatic forecasting models that score each of the billions of possible opportunity-user-frequency combinations according to the business use case and predict what will happen if the advertiser’s media is placed in a given opportunity.
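Conceptually, such a model reduces to a scoring function over (opportunity, user, frequency) tuples. The sketch below is a deliberately toy version under invented assumptions: the `score()` weights, the frequency penalty and all the sample numbers are made up for illustration, not drawn from any real bidder.

```python
# Toy sketch of a programmatic forecasting model: score each
# opportunity-user-frequency combination, then rank them so budget goes
# to the best predictions. Weights and inputs are illustrative only.
def score(opportunity_quality, user_affinity, frequency):
    # Assume diminishing returns as a user sees the ad more often.
    freq_penalty = 1.0 / (1.0 + frequency)
    return 0.6 * user_affinity + 0.4 * opportunity_quality * freq_penalty

# (placement, opportunity quality, user affinity, prior exposures)
combos = [
    ("premium-site", 0.9, 0.8, 1),
    ("cluttered-site", 0.2, 0.8, 1),
    ("premium-site", 0.9, 0.1, 6),
]
ranked = sorted(combos, key=lambda c: score(c[1], c[2], c[3]), reverse=True)
print(ranked[0][0])  # → premium-site
```

A real model would learn its weights from outcome data per use case rather than hard-code them, which is exactly the interpretive work the column argues separates one "forecaster" from another.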

And, it turns out, the quality of these models and how they are built adds more value for the marketer than data alone. Recall that in weather forecasting everybody has access to the same data, yet the predictive results are anything but the same.

Data isn’t unimportant, to be sure. False signals will mislead any model. Some signals are more predictive than others. Raw data inputs do matter.

But, when it comes to programmatic excellence, data isn’t the only thing that matters. Raw data deployed without a model to sort through the billions of opportunity-user-frequency combinations is really just a baby step up from the often-derided spray-and-pray techniques. Models matter, too, and marketers should be as focused on how their data is going to be interpreted and used as they are on the data inputs.

Oh, and that snowstorm? Whimper. The Canadian model got it right. NOAA got it wrong. Those who cancelled their vacations expecting the wallop lost out.

Same data, different prediction, different outcome. The model matters.

Follow ownerIQ (@ownerIQ) and AdExchanger (@adexchanger) on Twitter.
