This is the sixth in a series of interviews with vendors combating the problem of ad fraud. Other companies participating in this series include Moat, Telemetry, Sizmek, comScore, Dstillery and Asia RTB. Read previous interviews with DoubleVerify, Forensiq, Integral Ad Science, PubChecker and Videology.
When it comes to catching bots, higher walls and better locks aren’t going to cut it.
So says Michael Tiffany, CEO of White Ops, which snagged $7 million in series A funding back in June. Tiffany told AdExchanger several months ago that he’s on a “messianic mission” to help the industry truly understand the threat posed by online ad fraud.
Online fraudsters do what they do because it’s lucrative. If digital ad fraud becomes less lucrative, the bad actors will move on to something else. That’s the core conviction by which White Ops operates.
“The bad guys are motivated by financial rewards,” Tiffany said. “We’re exploring what happens when we decrease the dollar amount available to these criminals.”
“By now we’ve encountered a large number of bots and we’ve created what we term ‘bot zoos,’ which are like our own little internal collection of bots that we’ve discovered in the wild,” Tiffany said. “We’ve trained these bots so we can run them in a test environment and make sure we’re able to catch every bot out there that we’ve already encountered.”
Regarding how much fraud is actually out there, it’s been hard for the industry to get a true read. That was part of the motivation behind a joint study between White Ops and the Association of National Advertisers (ANA), the goal of which was to track digital ad fraud among a representative sample of the ANA’s membership. The study concluded at the end of August. Results will be made available in either September or October.
Tiffany spoke with AdExchanger about bots, banks and bogus browsers.
ADEXCHANGER: What is the White Ops philosophy?
MICHAEL TIFFANY: Online security breaks down because the risk/reward in online crime is different from most other crimes in history. Broadly, the response over the last 20 years has just been to make the defenses better. In other words, higher walls and wider moats for everyone. But we think there’s another move. Our idea is to directly disrupt the profit center – to make the rewards smaller.
Does this tie back to your roots in bank fraud protection?
In online bank fraud, the bad guys aren’t breaking into the banks directly; they’re accessing a bank’s customers to steal money from their bank accounts. That’s where the basis of our technology comes from. This type of fraud is so pernicious because the malware understands the web and how to integrate with pages. It lies in wait until after a user has logged into a bank and gone through all of the validation procedures. After the user has logged in, the crime-ware actually changes itself and filters the web page in the browser so that what users think they’re doing is not what the bank thinks is happening. The malware can then change the account information for later account takeover.
What are the implications?
Well, it’s wickedly difficult to catch. From the bank’s perspective, it can’t trust that it’s actually communicating with a real customer – even if the bank knows it’s communicating with a customer’s computer that has the right username and password. Banks are typically on the forefront of online security and tend to be early adopters of new defensive technology, but they’ve been struggling for a few years now with the fact that because compromised computers are so pervasive, they can’t trust that they’re really talking to their own customers.
And how does this relate back to the advertising world?
Unfortunately, advertising is in nearly the same position as the banks. With the rise of online targeting, fraud has become way more sophisticated. Targeting, which uses third-party data to try to reach the right person, has introduced a huge security vulnerability because it makes the very subtle, but pervasive, assumption that a cookie represents a particular human being.
But that’s not the case?
Those cookies are not truly authenticated. Basically, the entire industry is implicitly trusting that it’s talking to computers that haven’t been compromised. It’s the same place the banks are in, but the ad industry can’t respond in the same way, because botnets are made up of real people’s computers, with real browsers and real metadata. The botnets buy stuff. They know how to be good consumers.
So, targeting enables fraud?
Fraud was considered to be a solved problem because it was widely known that bots don’t buy anything. At the same time, advertisers are getting better at targeting and efficiency and measuring their campaigns. Together, this should naturally squeeze out all the bots, but that didn’t happen because it occurred to absolutely nobody that bots could be clones of real human beings that get targeted.
So the bot operators are pretty clever.
The fraudsters completely understand the ad ecosystem and they’re using the world of targeting systems against the advertisers. This is no passive adversary. They’re smart and they bring a lot of resources to this. We paint a bleak picture because the incentives are so out-of-whack. When the good guys win, they get to stay in business. When the bad guys win, they get millions of dollars.
Is there a silver lining?
It would be incredibly difficult to add new authentication layers to every ad system in the world so that cookies are somehow always trustable and not copyable. Collectively, that would be a multi-billion dollar engineering effort. The good news is that the problem completely goes away when we stop paying for it, and that’s actually achievable. The advertisers – the guys pouring money into the top of the funnel – need more transparency. Even if all they can do with that information is stop spending on lower quality places, that will start to move the needle.
Cut off the money, cut off the fraud?
Our ultimate goal is to drive these guys out of the ad bot business, but it’s very unlikely once that happens that they’ll suddenly decide the game’s truly up and it’s time to get a day job. They’ll just pivot into another crime model – and that’s fine. We’ll follow them where they go next.
And where’s that?
What we see now is people pivoting out of bank fraud to ad fraud because, in a way, it’s a ‘better’ crime. You don’t need money mules. Ad fraud is a more attractive crime because it looks more sustainable and has fewer moving parts. When we make ad fraud unsustainable, they’ll have to shift back to riskier endeavors.
How does the White Ops technology work?
Essentially, we’re squeezing the profits of the botnet operators until we get them down to zero. To do that we borrowed a solution pattern from cryptanalysis called ‘side channel analysis.’ This refers to all of the ways in which any system – in our case, computer systems – leaks information. We found that there are previously undiscovered side channels in HTTP that allow us to differentiate between regular browsers driven by actual people, browsers compromised by crime-ware, and automated or remote-controlled web browsers.
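Tiffany doesn’t reveal White Ops’ actual signals, but the general idea of an HTTP side channel can be sketched. In the illustrative example below – not the company’s real method – real browsers are assumed to emit request headers in a characteristic order, so a bot that spoofs a Chrome User-Agent through a generic HTTP library betrays itself with a mismatched order. The `EXPECTED_ORDER` baselines and both functions are invented for this sketch.

```python
# Hypothetical per-browser header-order baselines (invented for
# illustration; real fingerprints are far more detailed).
EXPECTED_ORDER = {
    "Chrome": ["host", "connection", "user-agent", "accept"],
    "Firefox": ["host", "user-agent", "accept", "connection"],
}

def claimed_browser(user_agent):
    """Naive mapping from a User-Agent string to a browser family."""
    if "Firefox" in user_agent:
        return "Firefox"
    if "Chrome" in user_agent:
        return "Chrome"
    return None

def looks_spoofed(headers):
    """headers: list of (name, value) pairs in the order received.

    Returns True when the observed header order contradicts the
    order expected from the browser the client claims to be.
    """
    names = [name.lower() for name, _ in headers]
    ua = {name.lower(): value for name, value in headers}.get("user-agent", "")
    browser = claimed_browser(ua)
    if browser is None:
        return False  # no baseline to compare against
    expected = EXPECTED_ORDER[browser]
    observed = [n for n in names if n in expected]
    return observed != expected
```

The point of the sketch is that the bot never set out to send a “wrong” header order at all – the information leaks as a side effect of the tooling it uses, which is what makes it a side channel.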
What’s a practical example of this, and how does it trip up the fraudsters?
It’s incredibly easy to spot a fake browser. They don’t want to stick out like a sore thumb, so they have to use a real browser as the basis for their bots. There are only so many browsers to choose from – Internet Explorer, Chrome, Safari and Firefox. Writing your own is a $200 million proposition, and no one’s doing that. I’ve painted a picture of smart hackers, but no matter how smart you are, the moment you have to build something on top of a giant code base, you’re in trouble. We’ve hired browser engineers who have either contributed code to a browser project or audited the security of one or all of the browsers, so they know a lot of very deep, subtle things about how the browsers operate.
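One way to read “very deep, subtle things about how the browsers operate” is as engine-specific quirks that a bot imitating a browser rarely reproduces exactly. The sketch below is hypothetical: the probe names, the baseline values and the idea of comparing client-reported probe results against a per-browser baseline are all assumptions for illustration, not White Ops’ actual technique.

```python
# Invented per-engine baselines. In reality such probes might cover
# error-message wording, JS object layout, CSS support and so on.
BASELINES = {
    "Chrome": {"error_prefix": "Uncaught TypeError", "has_chrome_obj": True},
    "Firefox": {"error_prefix": "TypeError", "has_chrome_obj": False},
}

def consistent_with_claim(claimed, probes):
    """Check client-reported probe results against the baseline for
    the browser the client claims to be. A mismatch suggests the
    client is not the engine it says it is."""
    baseline = BASELINES.get(claimed)
    if baseline is None:
        return True  # unknown browser: nothing to judge against
    return all(probes.get(key) == value for key, value in baseline.items())
```

A real Chrome reproduces Chrome’s quirks for free; a bot that merely copies Chrome’s User-Agent string has to get every one of these subtle behaviors right, which is where a large, unfamiliar code base works against the attacker.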
You’ve referred to fraud detection as an “arms race.” What do you mean by that?
All races are about resource depletion. It’s a step-wise game where the winner is the one who outlasts everyone else. We don’t let the bad actors know if they’ve succeeded or failed because we don’t stomp on the session in any way. We don’t block ads or provide them with any real-time signals of their ‘success.’ There’s a delay of at least a day – and sometimes up to 30 days – before they know whether their traffic ends up getting counted and paid for. The trick for us is that we’re adapting on our side, but before they can tell whether they’ve won round one, they already have to be playing round two, and if that keeps up forever, it tilts the playing field in our favor tremendously.
That said, these guys are strongly motivated to defeat us. We assume that we’re in a continuous arms race against their reverse engineering.
So, like a shark, fraud detection vendors have to keep moving or die?
If a fraud detection company comes on the scene and says, ‘We have an awesome new technique’ – and let’s assume it works – the bad guys will just ignore it until you start impacting their profits. There are a lot of different criminals out there with varying levels of sophistication and they don’t all compare notes. The sophisticated guys figure out how to defeat the new technique and as a result the fraud numbers appear to go down and it feels like you’ve solved the fraud problem when, in fact, the opposite is true.
How much fraud is actually out there?
We’re all obsessed with this question. We’ve gone into this with our eyes open, knowing that it can feel like you’re winning when you’re actually losing, and we don’t want to do that. You can get away with a lot of voodoo in the ad tech space right now, but the top-tier security world is always deeply cynical.