
How Voltari Reaps Profits From Automated Predictive Analytic Models


Voltari, a public mobile marketing and data analytics firm backed by Icahn Enterprises, is the latest reincarnation of a company that started off providing software and content for mobile devices in 2001.

The New York City-based company recently expanded its operations in Canada and is eyeing opportunities in the UK. AdExchanger spoke with Voltari CEO Rich Stalzer and CTO Dave Castillo about the company’s latest pivot from powering online portals for AT&T and Verizon towards targeted mobile advertising.

AdExchanger: What was the catalyst behind the pivot and what does Voltari specialize in?

RICH STALZER: When I joined Motricity about a year and a half ago, the writing was on the wall that our core business, providing portals and storefronts for companies like AT&T and Verizon, was going away. We needed to pivot the company in a different direction. Before we built anything, Dave [Castillo] and I went to agencies across the country and asked them what issues they were struggling with. What we kept hearing is that mobile is exploding, but there’s no way to target ads on mobile devices. And so Dave and I went back to Phoenix and built the Voltari-Connect platform, which essentially targets off of four data points: time of day, location, device type and content.

We take those four data points and put them into our propensity modeling. We’ve got about 40,000 attributes, and then we make a prediction of what ad to serve to which person at that time.
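To make those mechanics concrete, here is a minimal Python sketch of propensity scoring over the four request-time signals plus enriched profile attributes. It is an illustration only; the feature names, weights and logistic form are assumptions, not Voltari’s actual model.

```python
import math

# Hypothetical propensity scorer: combines the four request-time signals
# (time of day, location, device type, content) with enriched profile
# attributes and returns an estimated probability of engagement.
# Feature names and weights are illustrative only.

def featurize(time_of_day, location, device_type, content, attributes):
    """Flatten the request signals and enriched attributes into one feature dict."""
    features = {
        f"hour={time_of_day}": 1.0,
        f"geo={location}": 1.0,
        f"device={device_type}": 1.0,
        f"content={content}": 1.0,
    }
    features.update(attributes)  # e.g. thousands of 0/1 profile attributes
    return features

def propensity(features, weights, bias=0.0):
    """Logistic-regression-style score: P(engage | features)."""
    z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def pick_ad(request_features_by_ad, weights_by_ad):
    """Serve the campaign with the highest predicted engagement."""
    scores = {
        ad: propensity(feats, weights_by_ad[ad])
        for ad, feats in request_features_by_ad.items()
    }
    return max(scores, key=scores.get), scores
```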

The Motricity [restructuring] was officially done as of June 30. We’re now Voltari, a digital media and marketing company with 10 years of experience in mobile. Before Motricity, we owned a company called Generation 5, a data analytics firm that has served direct mail customers for 15 years. We believe that having a direct mail background is more valuable in mobile than having a desktop background.

DAVID CASTILLO: We wanted to leverage some of the technologies and learnings from our carrier business and a couple of acquisitions around email and online activity. And so we approached it from a very data-driven perspective and built propensity models. With our direct mail background, we understand the value of first-party data, and so we integrate with first-party data without revealing any PII [personally identifiable information].

Most people implement a segment-based approach. Segments are useful for describing profiles of people, but we believe they’re too coarse-grained to take advantage of the power of mobile. The platform has been active since the end of August 2012 and we have over 150 customers.

Can you give me a use case example of how your platform works?

DC: One of the key pillars of what we do is use media efficiently. We don’t waste impressions. If we predict that the level of engagement is going to be low, we’ll save that impression for another opportunity where we can predict a higher level of engagement.

The way we do that is, when we start a campaign, we’ll consider the demographics and some of the inputs that the agency or brand gives us. If they insist on using that as a starting point, we’ll do that, but what we’ve typically found is that if we can do a contextual-relevancy or look-alike model, we can take the data, aggregate it from all the events and transactions that have happened in our system and create a “cold start.”
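A rough sketch of the two ideas above, holding back low-propensity impressions and seeding a new campaign from look-alike data, might look like the following. The event format, threshold and scaling are assumptions made for the example, not documented Voltari behavior.

```python
from collections import defaultdict

# Illustrative only: seed a new campaign ("cold start") from aggregated events
# of similar past campaigns, then hold back impressions whose predicted
# engagement falls below a threshold.

def cold_start_weights(similar_campaign_events, learning_rate=0.1):
    """Aggregate clicks/impressions from look-alike campaigns into starting weights."""
    clicks, impressions = defaultdict(float), defaultdict(float)
    for event in similar_campaign_events:          # {"features": {...}, "clicked": bool}
        for name in event["features"]:
            impressions[name] += 1.0
            if event["clicked"]:
                clicks[name] += 1.0
    # Weight each feature by its observed click rate, scaled down for caution.
    return {name: learning_rate * clicks[name] / impressions[name] for name in impressions}

def should_serve(predicted_engagement, threshold=0.02):
    """Save the impression for a better opportunity if engagement looks low."""
    return predicted_engagement >= threshold
```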


For example, we were asked to help with the targeting for a campaign for Texas Chainsaw Massacre last year. The agency told us that the demographic was 18-year-old males. We have a real-time audience profile, and by the third day or so the engine showed a shift towards females ages 24 to 35. Interestingly, after the first weekend of the 10-day campaign, about four or five days in, Yahoo did an exit poll of the audiences coming out of the movie in Southern California, and it matched what we had predicted.

The same theatre later came back and asked if we could work on The Last Stand. Even though it’s a different genre from the Texas Chainsaw Massacre, there were still some learnings and propensity models that we could leverage to make this campaign more efficient. The Texas Chainsaw Massacre campaign wound up having a click-through rate of about 0.9% and the other campaign had a CTR of over 2.0%, which were quite good.

What is the accuracy rate of your predictive modeling?

RS: When we run a predictive model, we have a 20% holdout that we grade against the model, so we can see how the real data deviates from the predicted data. If it’s a good fit, the holdout curve should track the predicted curve and almost overlay it. Nine times out of 10, those curves are exactly alike.

In other cases, we’ve had bad models, which means there’s not enough data in the sample you’re taking. To fix that, you can link to another campaign for more data, even if it’s not specific to your location, just to have more data so you can make the model a little stronger.
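As an illustration of that grading step, the sketch below splits events into an 80/20 train/holdout set and compares the holdout response curve with the predicted one. The bucket count, tolerance and data layout are assumptions, not Voltari’s actual validation code.

```python
import random

# Sketch of a 20% holdout check: fit on 80% of events, score the held-out 20%,
# and compare the holdout response curve to the model's predicted curve
# using a simple bucketed gains table.

def split_holdout(events, holdout_fraction=0.2, seed=42):
    rng = random.Random(seed)
    shuffled = events[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]        # train, holdout

def gains_curve(scored_events, buckets=10):
    """Actual response rate per score bucket, highest scores first."""
    ranked = sorted(scored_events, key=lambda e: e["score"], reverse=True)
    size = max(1, len(ranked) // buckets)
    curve = []
    for i in range(0, len(ranked), size):
        chunk = ranked[i:i + size]
        curve.append(sum(e["clicked"] for e in chunk) / len(chunk))
    return curve

def curves_track(predicted_curve, holdout_curve, tolerance=0.1):
    """A 'good fit' means the holdout curve nearly overlays the predicted one."""
    return all(abs(p - h) <= tolerance for p, h in zip(predicted_curve, holdout_curve))
```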

The other thing is, when we look at impressions, a campaign might give us a million impressions to target an audience, but we don’t just say, let’s serve 200,000 impressions for five days and hope we reach the target. These models break down the chances for a match all the way into 20 decile points. If an impression falls within decile points one through three, that’s considered a high-value asset, so we’ll serve an impression there. If it’s below the curve, around decile points five or seven, we don’t serve it because there’s a low propensity for the customer to engage.
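The decile-point logic could be sketched as follows: rank scored impression opportunities into 20 buckets and serve only the top ones. The serve cutoff of three follows the example above; everything else is assumed for illustration.

```python
# Illustrative decile-point scoring: rank impression opportunities by
# predicted propensity, split them into 20 buckets ("decile points" in the
# interview), and serve only the top buckets.

def assign_decile_points(scored_impressions, points=20):
    """Return (impression, decile_point) pairs, 1 = highest-propensity bucket."""
    ranked = sorted(scored_impressions, key=lambda imp: imp["score"], reverse=True)
    bucket_size = max(1, len(ranked) // points)
    return [(imp, min(points, i // bucket_size + 1)) for i, imp in enumerate(ranked)]

def impressions_to_serve(scored_impressions, serve_up_to=3):
    """High-value assets fall in decile points 1-3; lower buckets are held back."""
    return [imp for imp, point in assign_decile_points(scored_impressions)
            if point <= serve_up_to]
```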

What kind of criteria are you using to make sure you have a match?

RS: A propensity model is made up of a number of things: demographic data, your click behavior, location, the time of day you’ve responded. It’s a multivariate process that looks at location, demographic and psychographic data, and what we’ve tracked in terms of earlier transactions. All of that goes into models that generate scorecards, which are used in real time to figure out which ad campaigns to match to which users.

What lessons or technology have you taken from your past business?

RS: On the AT&T storefront, we built a recommendation engine, so we were looking at 100 million unique AT&T customers and 75,000 different products. When you went onto AT&T’s site and downloaded a ringtone, we used our engine to serve the right ringtone recommendations to the right person. We tracked this and saw that CTR dropped but revenue per user went through the roof. In many ways, fewer clicks may be better because it’s more efficient. That was the genesis of the [ad targeting] platform that we built.
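For flavor, here is a toy item-to-item co-occurrence recommender of the kind described; the real storefront engine is not documented in the interview and certainly differed in method and scale.

```python
from collections import Counter, defaultdict

# Toy item-to-item recommender: products frequently downloaded together
# are recommended alongside a user's most recent download.

def build_co_occurrence(download_histories):
    """download_histories: list of sets of product IDs downloaded by one user."""
    co_counts = defaultdict(Counter)
    for history in download_histories:
        for product in history:
            for other in history:
                if other != product:
                    co_counts[product][other] += 1
    return co_counts

def recommend(co_counts, recent_product, top_n=3):
    """Recommend the products most often downloaded alongside the recent one."""
    return [product for product, _ in co_counts[recent_product].most_common(top_n)]
```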

Another thing to note is that all of this is automated. What we see in other companies is an army of people on the backend trying to move inventory around to get better performance. We’re seeing anywhere from 10% to 25% of staff optimizing on the backend.

With us, it’s all machine learning. We don’t have anyone on the backend saying, we’ve got to get the performance up, how do we do that? Our system is learning constantly. With every impression, whether you click or you don’t click, the system is learning and getting smarter. We believe that with real-time bidding and programmatic buying, margins are continuing to be compressed, so everything we’re doing is around automation and making things transparent.
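Since the point above is that the system learns from every impression, here is a minimal online-update sketch, a stochastic gradient step on a logistic model. The update rule and learning rate are illustrative assumptions, not a description of Voltari’s software.

```python
import math

# Minimal online-learning sketch: after every impression the model weights
# are nudged toward the observed outcome (click or no click).

def predict(weights, features, bias=0.0):
    z = bias + sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def update_on_impression(weights, features, clicked, learning_rate=0.01):
    """One stochastic gradient step on log loss for a single impression."""
    error = (1.0 if clicked else 0.0) - predict(weights, features)
    for name, value in features.items():
        weights[name] = weights.get(name, 0.0) + learning_rate * error * value
    return weights
```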

So you don’t see a need for data scientists at all?

DC: All the adjustments are made by software, but we have a small team of data scientists who are responsible for building those algorithms and implementing them. We continually refine things as needed, but while we’re running an operation, it’s all hands off.

Is the purpose to keep your rates, such as your CPMs, low?

RS: We could go low if we wanted to, but we’re not competing on that. We’re competing by driving relevant advertising. Down the road there could be a lot of cost benefits to doing things this way, because we’re able to take out a whole layer of the organization, and that can be put into data science for building out better algorithms or putting more salespeople in front of customers.

Are there any plans to build your own ad exchange?

RS: There are a lot of different ways we could take the platform. Our guiding mission when we look at acquisitions or products is what we call our P&L vision: personalization and localization. We’re striving every day to make that ad or that marketing experience as personal as we can and to localize it. Inventory is a big part of it, and we’re working with exchanges, ad networks and directly with publishers. We’ve cast a wide net to evaluate and test different pools of inventory, but right now that’s enough for us.
