"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Ted McConnell, an independent consultant in the digital marketing space.
Many Internet industry players are chiming in about how to fight ad fraud, but the scope, scale and capability of the bad guys demand something more strategic and harder-hitting than the suggestions I’ve seen to date.
With as much as $14 billion a year being stolen from advertisers, we need more than publishers adding controls to their infrastructures or a few advertisers buying antifraud services. Way more. Net neutrality is not a solution and freedom isn’t free.
How Big And How Bad?
For perspective, there were about 5,000 bank robberies in the US in 2011, with an average yield of $7,600, according to the FBI. The estimated size of fraud against advertisers varies, but $14 billion per year is the minimum estimate by Augustine Fou, a digital forensics expert. That would be more than 5,000 bank robberies per day, with a getaway car that travels at the speed of light.
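That comparison is easy to sanity-check. Here is a back-of-the-envelope calculation from the figures above, using Fou's $14 billion annual minimum and the FBI's $7,600 average yield:

```python
# Back-of-the-envelope check: how many average-sized bank robberies
# would equal the low-end ad fraud estimate of $14 billion per year?
AD_FRAUD_PER_YEAR = 14_000_000_000  # Fou's minimum estimate, USD
AVG_BANK_ROBBERY_YIELD = 7_600      # FBI average per robbery, 2011, USD

fraud_per_day = AD_FRAUD_PER_YEAR / 365
equivalent_robberies_per_day = fraud_per_day / AVG_BANK_ROBBERY_YIELD

print(round(fraud_per_day))                 # roughly $38 million stolen per day
print(round(equivalent_robberies_per_day))  # just over 5,000 robberies per day
```

At more than 5,000 robbery-equivalents a day, the "speed of light getaway car" framing holds up.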
It’s massive and unprecedented. The government would declare martial law if its banks were robbed 5,000 times per day. Your civil liberties would be toast.
Why hasn’t someone done something? First, waste is endemic to advertising, so advertisers don’t miss the money. Second, advertisers don’t get a lot of sympathy in Washington. Third, it’s very hard to detect, prevent, prove and count. Add to that the fact that media companies make money from it – innocently, for the most part, but who will look a gift horse in the mouth?
Computers acting like humans are at the root of much of this. They click, they “view,” they search, they build websites and then they visit them. Bots. They are fast, ubiquitous and can find a home on almost any computer. Like a smart virus, they don’t kill the host. They wait patiently for instructions.
Recently, someone found a bot living in a “smart” refrigerator! But not all robots are bad. Google, for example, depends on a robot. It is behavior or intention rather than mere existence that defines bots as nefarious or not. Don’t hate them because they’re silicon. Hate them if they rip you off.
When you have an enemy that’s shape-shifting, agile, belligerent, invisible, greedy, fast and brilliant, you have a problem. Welcome to what military strategy people call asymmetrical warfare. It looks like terrorism. They lie about their identity. They only have to be right once. There are no lines in the sand. You can’t tell them from the good guys. They adapt. They upend your freedom by abusing freedom. How should we respond?
As technology becomes more deeply integrated into our lives via the “Internet of Things,” the means available to commit this kind of crime are becoming more numerous. Whac-A-Mole strategies won’t cut it. We need to throw the entire arsenal at it: Control access and identity, make law, deter motivations, expose ways and means, punish the guilty, develop special weapons and so on. We might even have to give up some freedoms, such as the freedom to have a bot-infected device in our pocket.
Universities don’t let students on their networks without up-to-date antivirus systems. Access to public resources always carries responsibilities, like a driver’s license. Where’s the Wyatt Earp for that Wild West vision for the Internet?
I’m entitled to my imagination, so here’s what I would do if I could. There’s no single strategy that will work, but a cocktail might do the trick.
1. Accountability: Create accountability by passing legislation that forces nonhuman entities to declare their intentions, verify identity and associate themselves with a real human. Fraud is basically deception for gain, so force the deception to be explicit. While we’re at it, let’s make it illegal to obfuscate or subvert web protocols. These practices are completely unnecessary.
2. Tighten up Internet access: Bots use the Internet. Require up-to-date antivirus software as a precondition of access. It’s no different than requiring trucks to be inspected for safety.
3. Control: Do as ad industry groups currently suggest and implement controls in media exchanges, media platforms and publishers.
4. Make tools to create evidence: Most US jurisdictions require fraud to be proved with “clear, cogent and convincing evidence,” according to Wikipedia. Who can create that cogent evidence? You? And who will pay you to do it?
5. Make better captchas: Many can be broken today. Machine learning and image recognition are well understood by fraudsters, who routinely penetrate turf reserved for humans only.
6. White lists: Make a publicly provided “white list” of humans, accessible as a service to all transactions. This idea is not foolproof because bots can show up with the identity of an actual human, but it’s one more deterrent.
7. Detection: Put serious brainpower into detection and distribute signatures of bot behavior. Detection alone does not stop them. It’s an arms race, so as they get smarter, we have to get smarter at the same rate.
8. Enlist Internet service providers: They can put filters in the “pipes.”
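To make the detection idea in item 7 concrete, here is a minimal sketch of what signature-based bot filtering over ad-request logs might look like. Every field name, user-agent fragment and threshold below is an illustrative assumption, not anyone’s production system:

```python
# Illustrative sketch of signature-based bot filtering on ad-request logs.
# Signatures, field names and thresholds are all hypothetical examples.
KNOWN_BOT_SIGNATURES = {
    "curl/", "python-requests", "headlesschrome",  # example UA fragments
}
MAX_CLICKS_PER_MINUTE = 30  # assumed ceiling for plausibly human behavior

def looks_like_bot(request: dict) -> bool:
    """Flag a request whose user agent matches a known bot signature
    or whose click rate exceeds a plausible human ceiling."""
    ua = request.get("user_agent", "").lower()
    if any(sig in ua for sig in KNOWN_BOT_SIGNATURES):
        return True
    return request.get("clicks_last_minute", 0) > MAX_CLICKS_PER_MINUTE

requests = [
    {"user_agent": "Mozilla/5.0 (Windows NT 10.0)", "clicks_last_minute": 2},
    {"user_agent": "python-requests/2.31", "clicks_last_minute": 1},
    {"user_agent": "Mozilla/5.0", "clicks_last_minute": 500},
]
flagged = [r for r in requests if looks_like_bot(r)]
print(len(flagged))  # 2 of the 3 sample requests are flagged
```

This is exactly why detection alone is an arms race: the moment a signature or threshold like these is published, a competent fraudster rotates user agents and throttles click rates to slip under it.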
We are not the bad guys. We are the only people who totally get this stuff and have the technical prowess to deal with it. What we need now is leadership – the roll-up-your-sleeves variety – and money. Advertisers know that despite all this, Internet marketing works. We implemented 3MS because we knew that with financial returns, brand dollars would come to the ecosystem. That is happening. This is no different, but viewability was low-hanging fruit. Fighting fraud and waste is a huge opportunity to make things work even better.
The Internet was designed to be robust enough to withstand a nuclear blast. It is, but we’ve built a fragile layer of commerce on top of it. Advertisers getting defrauded are just canaries in the coal mine, and the feds can’t arrest your refrigerator.
Follow AdExchanger (@adexchanger) on Twitter.