
Engagement Metrics Can Help Publishers Detect Ad Fraud


“The Sell Sider” is a column written by the sell side of the digital media community.

Today’s column is written by Manny Puentes, chief technology officer at Altitude Digital.

Ad fraud is present across all layers of the advertising ecosystem, but there is one behavioral factor that is more likely to predict the presence of fraudulent bots than any other: third-party traffic sourcing.

Fifty-two percent of sourced traffic was bot fraud in a recent study [PDF] by White Ops and the Association of National Advertisers (ANA). This should raise a red flag for publishers, whose use of paid traffic-generating sources has increased as they seek to generate more impressions, fulfill advertising minimums and grow their audiences. As a result, botnet operators have stepped in to take advantage of the dollars funneling through these channels.

Publishers, however, can combat fraudulent bots by keeping a close eye on their third-party partners, diving into metrics most likely to indicate ad fraud and proactively cutting out underperformers and suspicious sources. The time-on-site metric may be one of the most powerful measures to help publishers combat bot-based fraud.

Bot traffic is becoming more sophisticated and human-looking every day, so using a combination of third-party verification, Google Analytics and big data resources is essential to catch evolving sources of fraud. As a starting point, analyzing a few key metrics in Google Analytics and associating the data points by referring domain can provide early indicators for identifying questionable traffic.
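As a rough illustration of that association step, the sketch below aggregates an exported per-session report by referring domain. It assumes the report has been pulled out of Google Analytics as a CSV with placeholder columns such as “referrer,” “pages” and “session_duration_sec” – the file name and column names are assumptions for illustration, not a Google Analytics schema.

import pandas as pd

# Load a per-session export (hypothetical file and column names).
sessions = pd.read_csv("ga_sessions_export.csv")

# Associate the key engagement metrics with each referring domain.
by_referrer = (
    sessions.groupby("referrer")
    .agg(
        sessions=("pages", "size"),
        pages_per_session=("pages", "mean"),
        avg_session_duration_sec=("session_duration_sec", "mean"),
    )
    .sort_values("sessions", ascending=False)
)

print(by_referrer.head(20))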

Page Depth And Browser Behavior

The practice of purchasing traffic is common among publishers of all sizes, even premium publishers, which often have dedicated audience acquisition budgets. But the practice is rife with potential pitfalls. This isn’t to say that publishers will or should stop their traffic acquisition efforts, since many services provide legitimate ways of acquiring new audiences and real readers.

For many years, it was relatively easy to spot bot traffic. Offending referring domains would often show an average of just one page viewed per visit. By comparison, a typical site average is at least 1.1 pages per visit, and usually higher, because real humans are in the mix.

Today’s bots tend to be more sophisticated and can generate many page views per visit to avoid instant detection. Often, however, those views are generated in far less time than it would take a real human to read the same number of pages.

Within the referral channel grouping, Google Analytics’ comparison graph highlights outliers in pages per session. All graphics courtesy of Manny Puentes.
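One way to quantify that pattern, building on the same hypothetical export as above, is to compute seconds per page for each referring domain and flag sources whose sessions are deep but implausibly fast. The thresholds below are illustrative only, not industry standards.

import pandas as pd

sessions = pd.read_csv("ga_sessions_export.csv")  # hypothetical export

by_referrer = sessions.groupby("referrer").agg(
    pages_per_session=("pages", "mean"),
    avg_session_duration_sec=("session_duration_sec", "mean"),
)
by_referrer["seconds_per_page"] = (
    by_referrer["avg_session_duration_sec"] / by_referrer["pages_per_session"]
)

site_avg_seconds_per_page = by_referrer["seconds_per_page"].mean()

# Deep sessions consumed far faster than the site norm are worth a closer look.
suspicious = by_referrer[
    (by_referrer["pages_per_session"] > 3)
    & (by_referrer["seconds_per_page"] < 0.25 * site_avg_seconds_per_page)
]
print(suspicious.sort_values("seconds_per_page"))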

Bots are also much more common in older browsers than newer ones, as older versions are more susceptible to hijacking and malware. The White Ops/ANA study showed that a disproportionate share of impressions generated by Internet Explorer 6 and 7 came from bots – 58% and 46%, respectively.

If a referring domain shows a browser makeup that’s markedly different from the overall site average, it’s worth digging into other potentially high-risk metrics and seeing if that source is problematic and possibly fraudulent.


Suspicious traffic sources can show much heavier use of Internet Explorer than the overall site average.
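To put a number on how far a referring domain’s browser mix strays from the site norm, a simple sketch – again against the hypothetical export used above, with an assumed “browser” column – could total the absolute differences between each source’s browser shares and the site-wide shares.

import pandas as pd

sessions = pd.read_csv("ga_sessions_export.csv")  # hypothetical export

# Site-wide share of sessions per browser.
site_mix = sessions["browser"].value_counts(normalize=True)

def browser_deviation(group):
    # 0 means the referrer's browser mix matches the site; larger values mean
    # the mix is skewed, such as an outsized share of old Internet Explorer.
    mix = group["browser"].value_counts(normalize=True)
    return (mix.reindex(site_mix.index, fill_value=0) - site_mix).abs().sum()

deviation = sessions.groupby("referrer").apply(browser_deviation)
print(deviation.sort_values(ascending=False).head(10))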

Time On Site

While other session-based signals can also surface questionable traffic, time on site may be the most powerful metric for combating bot-based fraud because of its importance to both publishers and advertisers. The metric is among the most meaningful to all parties when it comes to identifying truly engaged – and reliably human – audiences.

A session lasting a few seconds isn’t going to be inherently valuable to a publisher or advertiser, whether that session is produced by a bot or a human. Yet impression-based revenue models, notably cost per mille, have driven the growth of third-party traffic sources aimed solely at providing as many impressions per dollar as possible, with no consideration of actual reader engagement.


Find suspicious traffic domains by diving into the average session duration per source.
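Continuing with the same hypothetical export, a short sketch can rank traffic sources by average session duration and flag those falling well below the site-wide average; the 50% cutoff is illustrative, not a standard.

import pandas as pd

sessions = pd.read_csv("ga_sessions_export.csv")  # hypothetical export

site_avg = sessions["session_duration_sec"].mean()
per_source = sessions.groupby("referrer")["session_duration_sec"].mean()

# Sources whose average session lasts less than half the site average.
below_average = per_source[per_source < 0.5 * site_avg].sort_values()
print(below_average)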

Some publishers are experimenting with transacting on time spent on site instead of traditional impressions, especially as native content and video become more meaningful revenue sources. Most notably, the Financial Times recently announced it would sell display ads based on time spent on site, charging a fixed amount for every second that a visitor actively engages with the content. The thought is that high-quality content and loyal readers will result in more time spent engaging with the publisher’s content and brand creative, leading to more long-term value for advertisers.

The time-on-site metric also plays strongly into viewability and the number of seconds that a reader is visually exposed to a brand’s message – both increasingly vital performance measures for digital advertisers.

As part of its extensive recommendations, the White Ops/ANA study suggested that advertisers maintain the right not to buy impressions based on sourced traffic. While it remains to be seen whether advertisers will take this to heart, publishers need to proactively clean up their third-party traffic sources, working to eliminate any potential for fraud.

By retaining traffic sources with higher overall engagement metrics and terminating those with below-average performance, publishers can deliver real audiences that meet the metrics that matter to advertisers.

Follow Manny Puentes (@epuentes), Altitude Digital (@AltitudeDP) and AdExchanger (@adexchanger) on Twitter.
