"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Augustine Fou, digital strategist and independent ad fraud researcher.
Most clients and partners, trade associations and agencies are aware of digital ad fraud by now. And some have deployed fraud and bot detection services to measure it.
But beyond knowing how much bot activity there is on websites or in media buys, it is also important to know how such fraudulent actions negatively impact analytics and measurement.
Bots create ad fraud by visiting websites, clicking on ads and passing fake data to cover their tracks. Faithfully recorded by analytics platforms, this fraudulent activity can artificially skew data up or down, making the measurements unreliable.
Worse, if optimizations and business decisions are made based on this faulty data, the fraud problem is further amplified for the digital advertising industry.
Bot Traffic Skews Conversion Rates Lower
There are several examples from the field. When bots come to a website, for instance, their visits are registered by web analytics platforms. Also noted are “conversion events” that happen on the site, such as ecommerce transactions or other measurable steps used as proxies for conversion.
But bots don’t convert – at least not yet. So the number of conversion events, which only human visitors complete, stays the same, while the denominator, total visits, is inflated by bot traffic. As a result, conversion rates (conversions per visit) are artificially depressed.
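A back-of-the-envelope sketch, using entirely hypothetical numbers, shows the dilution effect:

```python
# Hypothetical numbers: how bot visits dilute the measured conversion rate.
human_visits = 10_000
conversions = 200        # completed by humans only; bots don't convert
bot_visits = 5_000       # fraudulent visits, faithfully logged by analytics

true_rate = conversions / human_visits                      # actual performance
measured_rate = conversions / (human_visits + bot_visits)   # what analytics reports

print(f"True conversion rate:     {true_rate:.2%}")      # 2.00%
print(f"Measured conversion rate: {measured_rate:.2%}")  # 1.33%
```

The site did nothing worse, yet its reported conversion rate dropped by a third, purely because bots padded the denominator.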
Bot Clicks Make Click-Through Rates Appear Better
When fraud bots click on stuff, including cost-per-click (CPC) ads, they skew click-through rates higher than they should be. A large amount of such activity makes the rolled-up average click-through rates for certain campaigns look really good. This may lead advertisers and their media agencies to shift more dollars to these supposedly better-performing campaigns and sites.
That, unfortunately, sends more dollars to the bad guys, whose bots created those fraudulent clicks in the first place.
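The same arithmetic runs in the opposite direction for click-through rates. A sketch with hypothetical numbers:

```python
# Hypothetical numbers: bot clicks inflate the measured click-through rate.
impressions = 100_000
human_clicks = 100       # a 0.10% human CTR
bot_clicks = 400         # fraudulent clicks on the same campaign

true_ctr = human_clicks / impressions
measured_ctr = (human_clicks + bot_clicks) / impressions

print(f"True CTR:     {true_ctr:.2%}")      # 0.10%
print(f"Measured CTR: {measured_ctr:.2%}")  # 0.50%
```

A campaign that appears to perform five times better than it really does is exactly the kind that attracts more budget.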
Lower Average Ad Blocking Is Not Necessarily Better
Many in the industry measure ad blocking rates, some more extensively than others. But a single, averaged number may not tell the whole story.
Fraud bots that deliberately cause ad impressions to load obviously don’t use ad blockers, so the average ad-blocking rates may be low because of bots. That’s not a good thing.
In mobile environments, where standard measurement technologies and techniques may not be able to measure ad blocking, a lower rate may simply mean incomplete measurement, instead of less ad blocking.
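Because bots never block ads, the blended average is pulled down in proportion to the bot share of traffic. A sketch with hypothetical numbers:

```python
# Hypothetical numbers: bot traffic drags the average ad-blocking rate down.
human_share = 0.70        # fraction of measured traffic that is human
human_block_rate = 0.30   # 30% of human visitors run ad blockers
bot_block_rate = 0.00     # bots exist to load ads, so they block nothing

observed = human_share * human_block_rate + (1 - human_share) * bot_block_rate
print(f"Observed ad-blocking rate: {observed:.0%}")  # 21%, not 30%
```

A reported 21% rate here says nothing reassuring about ad blocking among humans; it mostly reflects how much of the traffic isn’t human.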
Higher-Than-Normal Viewability Is A Bad Thing
The fraudulent cash-out sites that carry a lot of ads will also trick viewability measurement systems by cheating with stacked ads, where they load all their ads above the fold, one behind another.
Furthermore, these sites often show ads in hidden iframes or small 0x0- or 1x1-pixel windows.
They don’t play by the rules that normal, mainstream publishers have to play by. So the bad guy sites will appear to have higher average viewability than the good publishers. If advertisers optimize for viewability and allocate more budget to high-viewability inventory, they could end up spending more on fraudulent sites.
Any Declared Variable Is More Easily Gamed Than Detected Parameters
Cyber-criminals are smart, so they instruct their bots to cover their tracks. Much of this can be done quite simply by passing fake parameters, such as “utm_source=espn.com.” When this is done, the analytics programs record the visit as coming from ESPN, when it is clearly not.
If they can lie about one parameter, you can be certain that they will lie about all parameters. In fact, it should be assumed that any variable that can be declared or simply passed along to the destination is subject to fraud. Parameters that are detected or measured directly, rather than declared, are harder to fake, so faking them is left to the more advanced bots.
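As a minimal sketch of why declared values deserve no trust on their own, the snippet below cross-checks a declared `utm_source` against a detected signal, the HTTP referrer host. The function names and URLs are hypothetical, and real detection systems weigh many more signals than this one:

```python
# Sketch: a declared utm_source is trivially faked, so cross-check it against
# a detected signal such as the referrer host. Hypothetical helper names.
from urllib.parse import urlparse, parse_qs

def declared_source(landing_url):
    """Pull the self-declared utm_source out of the landing URL, if any."""
    qs = parse_qs(urlparse(landing_url).query)
    return qs.get("utm_source", [None])[0]

def source_mismatch(landing_url, referer):
    """Flag visits whose declared source disagrees with the detected referrer."""
    declared = declared_source(landing_url)
    if declared is None or referer is None:
        return False  # nothing to cross-check
    referer_host = urlparse(referer).netloc.lower()
    return declared.lower() not in referer_host

# A bot declaring ESPN while actually arriving from somewhere else entirely:
print(source_mismatch("https://example.com/?utm_source=espn.com",
                      "https://shady-cashout-site.example/"))  # True
```

Even this crude check separates the simple bots, which only pass fake strings, from traffic whose detected signals actually line up with what was declared.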
This type of deliberate misattribution can ruin the reputation of legitimate publishers if they are blamed for bot traffic that did not actually come from their sites.
Bots Collect Cookies To Appear More Human And Earn Higher CPMs
Bots pretend to be humans by visiting mainstream sites and collecting cookies to create fake cookie profiles. For example, if a bot visits Smokeybear.com, REI.com and a few other outdoorsy sites, the bot will appear to be an outdoor enthusiast.
When that bot visits cash-out sites, advertisers who are desperate to get their ad in front of them will pay higher premiums to win the retargeting. This earns more money for the fraudulent site and siphons ad revenue away from legitimate publishers that would have earned that CPM if the ad were shown on their sites.
Sourced Traffic Is Virtually Guaranteed To Be Fraudulent
There aren’t a whole bunch of humans sitting around with nothing to do but visit the specific webpages they are told to visit. So while there is some credence to content discovery services that trick unsuspecting consumers into clicking on click-bait, it is far more likely that the traffic is simply created by using bots. It’s way easier than waiting for the handful of humans to fall victim to the click-bait.
Or entire target webpages are loaded in small, hidden or invisible iframes, so the traffic that was paid for appears to be generated, but no human actually arrived.
These examples illustrate how various forms of fraud are not only polluting analytics but also creating data that is skewed and unreliable. Advertisers, publishers and everyone in between should be aware of this and take steps to detect and correct for these aberrations.