Today’s Predictive Algorithms Are Still Better Than Humans

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jeremy Stanley, chief data scientist at Sailthru.

Predicting the future is not easy.

Yet predictive algorithms are commonly criticized for failing to perfectly foresee very rare events. But rare events are just that: uncommon and subtle. Asking a predictive algorithm to perfectly identify the 1% of consumers who will purchase a specific product is a wildly unrealistic expectation.

Keep in mind that, even with all the big data available today, these solutions are trying to predict human behavior. Humans are complicated, and there are billions of us all behaving in increasingly interconnected ways. Despite what Hollywood might lead you to believe, setting the expectation that an algorithm can predict our future behavior with anything near complete certainty is a fool’s errand.

So rather than looking to predictive algorithms for definitive predictions, we should instead ask how much better an algorithm is at identifying these rare events than random guessing alone.

Consider this example of a predictive algorithm that identifies users likely to make a specific purchase for an ecommerce site. Group A, identified by an algorithm, represents 5% of consumers with a 20% average chance of purchasing a product. Group B is the other 95% of consumers with a 0.0001% chance of purchasing the same product. Random selection of 5% of users from the entire population would only generate a group with a 1% chance of purchasing, so in this example the predictive algorithm generates 20 times lift (20% / 1%) through its selection of the 5% most likely to purchase.

In other words, it found the 5% of users who are 20 times more likely to purchase than the average consumer, even though it’s still only going to be right one out of five times. With the right data and data science, generating this kind of lift is well within the capabilities of an ecommerce predictive model.
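
For the skeptical reader, here is a minimal sketch of that arithmetic; the group sizes and purchase rates are the hypothetical figures from the example above:

```python
# Lift of the algorithm-selected segment over random selection,
# using the hypothetical figures from the example above.

targeted_share = 0.05      # Group A: the 5% of consumers the algorithm selects
targeted_rate = 0.20       # Group A's average chance of purchasing
remainder_rate = 0.000001  # Group B's chance of purchasing (0.0001%)

# Baseline: a randomly drawn 5% simply mirrors the overall population rate,
# which is the weighted average across both groups.
population_rate = (targeted_share * targeted_rate
                   + (1 - targeted_share) * remainder_rate)

lift = targeted_rate / population_rate
print(f"Population purchase rate: {population_rate:.2%}")  # ~1.00%
print(f"Lift over random selection: {lift:.0f}x")          # ~20x
```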

You might critique this standard as a low bar: the algorithm only had to beat random guessing. Surely, a better measure would be to compare the algorithm against a human’s ability to identify the users most likely to purchase. In domains such as health care, where highly trained individuals can analyze each individual case with great care to predict outcomes, that is a valid point.

However, in general, several key differences between algorithms and humans weigh in favor of algorithms:

Cognitive bias: Humans are horrible at making predictions. We are often blinded by a long list of cognitive biases, such as the bandwagon effect, self-serving bias, the illusion of validity, stereotyping, the empathy effect and suggestibility.

Scalability: Let’s say an expert can make a meaningful prediction every half hour. However efficient or knowledgeable that expert may be, the pace cannot scale to the thousands or hundreds of thousands of predictions required in complex ecommerce and media organizations.

Time to predictions: Training a human expert to the point of making valuable predictions can take years or even decades. With modern computing and algorithms, an accurate predictive model can be trained in minutes.

Self-awareness: Not only do algorithms provide meaningful lift in the accuracy of their predictions, but they can also tell us how certain they are, reporting, for example, that a particular consumer has a 13% chance of purchasing (see the sketch after this list). Humans are terrible at estimating how certain they are; we almost always overestimate our certainty by wide margins.
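
To make the last two points concrete, here is a minimal sketch that trains a purchase-propensity model in seconds and reads per-user probability estimates from it. It uses scikit-learn’s logistic regression, and the data, features and rates are entirely synthetic, invented for illustration:

```python
# Minimal sketch: train a purchase-propensity model quickly and read
# per-user probability estimates from it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=(n, 5))  # invented behavioral features

# Synthetic rare outcome: roughly 1% of users purchase, driven by a few features.
logits = X @ np.array([1.2, 0.8, 0.5, 0.0, 0.0]) - 4.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # fits in well under a minute

# The model reports how certain it is about each individual user.
probs = model.predict_proba(X_test)[:, 1]
top_5pct = np.sort(probs)[-len(probs) // 20:]
print(f"Overall purchase rate: {y.mean():.2%}")
print(f"Mean predicted probability, top 5% of users: {top_5pct.mean():.2%}")
```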

In the end, we need to better understand, and set realistic expectations for, the abilities of predictive algorithms. They are an incredibly powerful tool. Combined with software automation and accurate user data, they can deliver highly personalized experiences that are far superior to a one-size-fits-all or human expert-curated approach.

They aren’t perfect, but we shouldn’t fool ourselves into thinking they aren’t incredibly valuable.

Follow Jeremy Stanley (@jeremystan), Sailthru (@sailthru) and AdExchanger (@adexchanger) on Twitter.
