
Engagement Metrics Can Help Publishers Detect Ad Fraud


“The Sell Sider” is a column written by the sell side of the digital media community.

Today’s column is written by Manny Puentes, chief technology officer at Altitude Digital.

Ad fraud is present across all layers of the advertising ecosystem, but there is one behavioral factor that is more likely to predict the presence of fraudulent bots than any other: third-party traffic sourcing.

Fifty-two percent of sourced traffic was bot fraud in a recent study [PDF] by White Ops and the Association of National Advertisers (ANA). This should raise a red flag for publishers, whose use of paid traffic-generating sources has increased as they seek to generate more impressions, fulfill advertising minimums and grow their audiences. As a result, botnet operators have stepped in to take advantage of the dollars funneling through these channels.

Publishers, however, can combat fraudulent bots by keeping a close eye on their third-party partners, diving into metrics most likely to indicate ad fraud and proactively cutting out underperformers and suspicious sources. The time-on-site metric may be one of the most powerful measures to help publishers combat bot-based fraud.

Bot traffic is becoming more sophisticated and human-looking every day, so using a combination of third-party verification, Google Analytics and big data resources is essential to catch evolving sources of fraud. As a starting point, analyzing a few key metrics in Google Analytics and associating the data points by referring domain can provide early indicators for identifying questionable traffic.

Page Depth And Browser Behavior

The practice of purchasing traffic is common among publishers of all sizes, even premium publishers, which often have dedicated audience acquisition budgets. But the practice is rife with potential pitfalls. This isn’t to say that publishers will or should stop their traffic acquisition efforts, since many services provide legitimate ways of acquiring new audiences and real readers.

For many years, it was relatively easy to spot bot traffic. Offending referring domains would often reveal a session length of just one page viewed per visit. In comparison, a typical site average is at least 1.1 pages per visit, and usually higher, because real humans are in the mix.

Today’s bots tend to be more sophisticated and can generate many page views per visit to avoid instant detection. Often, however, those views are generated in a shorter period of time than it would take a real human to see the same number of pages.


Within the referral channel grouping, Google Analytics’ comparison graph highlights outliers in pages per session. All graphics courtesy of Manny Puentes.
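The pattern above can be sketched in code. The snippet below computes seconds-per-page for each referring domain and flags any domain moving through pages implausibly faster than the site average. The data structure, domain names and threshold are hypothetical assumptions standing in for per-referrer averages exported from an analytics report:

```python
def seconds_per_page(avg_session_secs, pages_per_session):
    """Average time spent on each page within a session."""
    return avg_session_secs / pages_per_session

def flag_fast_referrers(referrers, site_secs_per_page, threshold=0.5):
    """Return referrers whose seconds-per-page falls below
    `threshold` times the site average (here, under half the norm)."""
    flagged = []
    for name, stats in referrers.items():
        spp = seconds_per_page(stats["avg_session_secs"],
                               stats["pages_per_session"])
        if spp < threshold * site_secs_per_page:
            flagged.append(name)
    return flagged

# Hypothetical per-referrer averages from an analytics export.
referrers = {
    "news-aggregator.example": {"avg_session_secs": 180, "pages_per_session": 2.4},
    "cheap-traffic.example":   {"avg_session_secs": 30,  "pages_per_session": 6.0},
}

# Site-wide average: 60 seconds per page.
print(flag_fast_referrers(referrers, site_secs_per_page=60))
# → ['cheap-traffic.example'] (5 seconds per page, far below the norm)
```

Absolute numbers matter less than the comparison: a source that produces many page views but spends a fraction of the typical time on each is worth a closer look.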

Bots are also much more common in older browsers than newer ones, as older versions are more susceptible to hijacking and malware. The White Ops/ANA study showed that a disproportionate amount of impressions generated by Internet Explorer 6 and 7 were bots – 58% and 46% respectively.

If a referring domain shows a browser makeup that’s markedly different from the overall site average, it’s worth digging into other potentially high-risk metrics and seeing if that source is problematic and possibly fraudulent.


Suspicious traffic sources can show higher-than-average use of Internet Explorer when compared to the overall site average.
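One way to operationalize that comparison is to measure how far a referrer's browser mix deviates from the site-wide mix. The sketch below flags browsers over-represented by more than a chosen number of percentage points; the session counts and the 20-point threshold are illustrative assumptions, not values from the study:

```python
def browser_share_deviation(referrer_counts, site_counts):
    """Per-browser share difference (referrer minus site), as fractions."""
    ref_total = sum(referrer_counts.values())
    site_total = sum(site_counts.values())
    browsers = set(referrer_counts) | set(site_counts)
    return {b: referrer_counts.get(b, 0) / ref_total
               - site_counts.get(b, 0) / site_total
            for b in browsers}

def suspicious_browsers(referrer_counts, site_counts, threshold=0.20):
    """Browsers whose share at this referrer exceeds the site share
    by more than `threshold` (20 percentage points by default)."""
    dev = browser_share_deviation(referrer_counts, site_counts)
    return sorted(b for b, d in dev.items() if d > threshold)

# Hypothetical session counts by browser.
site_wide = {"Chrome": 6000, "Firefox": 2500, "IE7": 500, "Safari": 1000}
referrer  = {"Chrome": 300, "IE7": 650, "Firefox": 50}

print(suspicious_browsers(referrer, site_wide))
# → ['IE7'] (65% of the referrer's sessions vs. 5% site-wide)
```

A legacy browser dominating a single referrer's traffic while barely registering site-wide is exactly the skew the White Ops/ANA findings would predict for bot-heavy sources.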

Time On Site

While other session-based signals can surface in instances of questionable traffic, time on site can be the most powerful metric for combating bot-based fraud because of its importance to both publishers and advertisers. The metric is among the most meaningful to all parties when it comes to identifying truly engaged – and reliably human – audiences.

A session lasting a few seconds isn’t going to be inherently valuable to a publisher or advertiser, whether that session is produced by a bot or a human. Yet impression-based revenue models, notably cost per mille, have driven the growth of third-party traffic sources aimed solely at providing as many impressions per dollar as possible, with no consideration of actual reader engagement.


Find suspicious traffic domains by diving into the average session duration per source.
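A simple screen along these lines is a floor on average session duration: a few-second visit has little value whether it comes from a bot or a human. The source names, durations and 10-second floor below are hypothetical, standing in for per-source averages pulled from an analytics report:

```python
def short_session_sources(sources, floor_secs=10):
    """Return traffic sources whose average session duration
    falls below `floor_secs` seconds."""
    return sorted(name for name, secs in sources.items() if secs < floor_secs)

# Hypothetical average session duration (seconds) by traffic source.
avg_duration = {
    "organic-search":       145.0,
    "social":                62.0,
    "bulk-traffic.example":   4.2,
}

print(short_session_sources(avg_duration))
# → ['bulk-traffic.example']
```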

Some publishers are experimenting with transacting on the idea of time spent on site instead of traditional impressions, especially as native content and video become more meaningful revenue sources. Most notably, the Financial Times recently announced it would sell display ads based on time spent on site by charging a fixed amount for every second that a visitor actively engages with the content. The thought is that high-quality content and loyal readers will result in more time spent engaging with the publisher content and brand creative, leading to more long-term value for advertisers.
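The arithmetic behind such a model is straightforward: total cost is engaged seconds multiplied by a fixed per-second rate. The sketch below uses an invented rate and invented session durations for illustration; these are not the Financial Times' actual figures:

```python
def time_based_cost(engaged_seconds, rate_per_second):
    """Total charge for a campaign priced per second of active
    engagement: sum of engaged time across sessions times the rate."""
    return sum(engaged_seconds) * rate_per_second

# Hypothetical engaged seconds measured across four sessions.
sessions = [12, 30, 7, 45]   # 94 engaged seconds in total

print(time_based_cost(sessions, rate_per_second=0.002))
```

Under this model a bot that bounces after two seconds earns the publisher almost nothing, which realigns the incentive away from raw impression volume and toward genuinely engaged audiences.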

The time-on-site metric also plays strongly into viewability and the number of seconds that a reader is visually exposed to a brand’s message – both increasingly vital performance measures for digital advertisers.

As part of its extensive recommendations, the White Ops/ANA study suggested that advertisers maintain the right not to buy impressions based on sourced traffic. While it remains to be seen whether advertisers will take this to heart, publishers need to proactively clean up their third-party traffic sources, working to eliminate any potential for fraud.

By retaining traffic sources with higher overall engagement metrics and terminating those with below-average performance, publishers can deliver real audiences that meet the metrics that matter to advertisers.

Follow Manny Puentes (@epuentes), Altitude Digital (@AltitudeDP) and AdExchanger (@adexchanger) on Twitter.
