Your Custom Model Didn’t Work… Now What?

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Asya Takken, senior data scientist at Alliant.

Data-driven marketing is at its best when powered by predictive analytics. Many brands hum along on the big platforms’ machine learning algorithms, generating lookalike audiences for acquisition. Others go beyond this lightweight approach, harnessing their first-party data, enhancing it with second- or third-party data and optimizing with custom models.

This human touch can yield exceptional results, but occasionally the time and resources invested in custom modeling don’t bear the juiciest of fruit. This leaves marketers asking their data scientists: What went wrong?

While the human element – which adds marketing knowledge and hand-edited variables – makes custom models powerful, it can also create risk. But there are ways to minimize that risk and to course-correct, ultimately driving the results brands look for.

Minimize risk and manage expectations

Most elements of a successful custom model are established in the early planning phases. Unlike black-box solutions, a custom model requires a partnership between marketers and data scientists.

The most important element is a consensus on the model’s success metric. It’s also pivotal to agree on a dependent variable to judge model performance, as well as core audience elements, such as age, geography and affinity. Decide on these elements when building the model, and keep them consistent for every future deployment, aligned with each use case.

If you build the model on a seed with specific or common attributes – for example, female customers in East Coast states – you’ll get the best results by applying those same core audience elements to each model deployment.

Building a model on one audience but applying it to another will not work very well. In this example, if you model an audience of women from a larger geographic region, or an audience of men and women, then the variables the model was built around will shift, skewing the results of the campaign. This doesn’t mean the model is ineffective, only that it was applied to an audience other than the one it was built for.
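To make that consistency concrete, here’s a minimal sketch of the idea in Python (the pandas column names and state list are hypothetical): the audience definition is written once and reused, so the seed the model learns from and the audience it later scores are filtered identically.

```python
import pandas as pd

# Hypothetical customer file; real data would come from your CRM.
customers = pd.DataFrame({
    "gender": ["F", "F", "M", "F"],
    "state":  ["NY", "OH", "NJ", "MA"],
})

# Core audience elements, defined once and versioned alongside the model.
EAST_COAST = {"ME", "NH", "VT", "MA", "RI", "CT", "NY", "NJ", "PA",
              "DE", "MD", "VA", "NC", "SC", "GA", "FL"}

def core_audience(df: pd.DataFrame) -> pd.DataFrame:
    """The audience the model was built on: female customers in East Coast states."""
    return df[(df["gender"] == "F") & (df["state"].isin(EAST_COAST))]

seed = core_audience(customers)  # build time: what the model learns from
# At deployment, reuse the SAME filter on the prospect universe before scoring.
print(seed)
```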

Similarly, it’s crucial to align stakeholders on the dependent variable, which helps manage expectations and ensures results are reviewed consistently. Measuring success against other KPIs may leave you sorely disappointed in your model.

Say you have a model built to predict response. Your modeled audience gave the expected response, but new customers were not repeat buyers. You might think the model failed, but in this case, the model did what it was designed to do: drove response. It was not built to predict lifetime value, and in fact there is often a negative correlation between response and lifetime value. Brands after long-term customers should consider a different dependent variable.
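As a sketch of how differently those two dependent variables can be defined, here’s one hypothetical way to derive both from the same order history (all column names and data below are illustrative):

```python
import pandas as pd

# Toy order history; real targets would come from your transaction log.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "order_value": [40.0, 55.0, 30.0, 25.0],
})
audience = pd.DataFrame({"customer_id": [1, 2, 3, 4]})

per_cust = (orders.groupby("customer_id")["order_value"]
            .agg(["count", "sum"]).reset_index())
targets = audience.merge(per_cust, on="customer_id", how="left").fillna(0)

# Dependent variable 1: response (did they buy at all?) -> what a response model predicts.
targets["responded"] = (targets["count"] > 0).astype(int)

# Dependent variable 2: repeat purchase -> a proxy for longer-term value.
targets["repeat_buyer"] = (targets["count"] >= 2).astype(int)

# A model trained on `responded` should be judged on response; holding it
# to `repeat_buyer` measures something it was never built to predict.
print(targets)
```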


Analyze what went wrong

Even with the clearest intentions, models sometimes do not meet the goal. There isn’t always a smoking gun, so some detective work is required.

During the review, work with a data scientist to answer a few questions: How far off was the campaign – slightly, or completely? Did the shortfall appear in upfront campaign response, or on the back end, with few repeat purchases or too many returns? Did the campaign goal, audience and creative align with the model objective?

One of the best ways to analyze results is to run a back test: score the audience with the model and compare the predicted behaviors with the actual results. Doing so will either confirm that the model predicted accurately or locate the flaws.
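In practice, a back test can be as simple as bucketing the audience by model score and comparing predicted and actual response rates bucket by bucket. A minimal sketch with stand-in data (real inputs would be each record’s model score and observed campaign outcome):

```python
import numpy as np
import pandas as pd

# Stand-in data: `score` is the model's predicted response probability,
# `responded` is what actually happened in the campaign.
rng = np.random.default_rng(0)
score = rng.uniform(0, 1, 10_000)
responded = rng.binomial(1, score)  # toy world where the model is well calibrated

bt = pd.DataFrame({"score": score, "responded": responded})
bt["decile"] = pd.qcut(bt["score"], 10, labels=False) + 1  # decile 10 = top scores

report = bt.groupby("decile").agg(
    predicted_rate=("score", "mean"),
    actual_rate=("responded", "mean"),
)
# If actual response tracks predicted response decile by decile, the model
# scored accurately; a decile that breaks the pattern locates the flaw.
print(report)
```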

For example, a brand looking for larger acquisition audiences ran four models together. It generated one large audience from the intersection of the top-ranking customers from each model. The campaign did not perform as expected. A back test highlighted that one of the four models significantly underperformed compared to the other three algorithms, skewing performance. This insight enabled the campaign to focus on the three strongest models to improve results.
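One hypothetical way to catch that kind of outlier is to back-test each model separately against the same outcome, for instance by comparing the actual response rate inside each model’s top-scoring decile (the models and data below are simulated):

```python
import numpy as np
import pandas as pd

# Simulated back test: four models score the same audience, one is much weaker.
rng = np.random.default_rng(1)
n = 20_000
truth = rng.uniform(0, 1, n)                         # latent response propensity
responded = pd.Series(rng.binomial(1, truth * 0.2))  # observed outcome

scores = pd.DataFrame({
    f"model_{i}": truth + rng.normal(0, noise, n)  # noisier score = weaker model
    for i, noise in enumerate([0.2, 0.25, 0.3, 2.0], start=1)
})

# Actual response rate among each model's top-decile picks.
for name, s in scores.items():
    top = s >= s.quantile(0.9)
    print(name, round(responded[top].mean(), 4))
# model_4's top decile responds at roughly the audience's base rate,
# flagging it as the model dragging down the intersected audience.
```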

Misbehaving variables

Sometimes, a group of variables misbehaves. Variables can be volatile in a dynamic market with changing consumer behavior – your consumer base has likely changed drastically in the past year. So monitor variables and make sure they update regularly. Real-time re-estimation based on the latest data is great, but it should still be reviewed. When variables are misaligned, they can drastically skew affinity and results.
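One common technique for this kind of monitoring is the population stability index (PSI), which compares a variable’s distribution at model build time with its distribution in the current scoring population. A minimal sketch with simulated data:

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between build-time data and fresh data."""
    # Bin edges come from the baseline so both samples are bucketed identically.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    b = np.histogram(np.clip(baseline, edges[0], edges[-1]), edges)[0] / len(baseline)
    c = np.histogram(np.clip(current, edges[0], edges[-1]), edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(2)
build_time = rng.normal(50, 10, 5_000)  # e.g. a spend variable at model build
today = rng.normal(58, 12, 5_000)       # the same variable after behavior shifted
# Common rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 major shift.
print(round(psi(build_time, today), 3))
```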

Review your variables with the data scientist who built the model. It’s important to understand which predictors made it into the model, which didn’t, and what might be shifting. Cutting a geographic predictor may seem like the right idea, but it will have an impact in other ways. Everything is related.

Build a custom library

Fortunately for brands, all of these common mistakes can be avoided. Defining and maintaining key model inputs and KPIs during the model build and audience delivery stages will produce consistent results that deliver on the investment.

If the model isn’t working, or there’s a belief that it can be stronger, it’s okay to move the goalposts and push for better results. Brands that want more repeat customers, or want to go after a slightly different audience, need only regroup with their data scientist and either adjust the model or build a new one.

Follow Alliant (@alliantdata) and AdExchanger (@adexchanger) on Twitter.
