“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Jay Friedman, COO at Goodway Group.
Measuring the viewability of ads is now a reality that’s not going away.
The intent is good, and the rationale for measuring impressions is valid. The current mechanisms and standards, though, are untested and unreliable.
We didn’t think this through, and now we’re in a real mess.
If all websites had the quality and reputation of a cnn.com or yahoo.com, we might never have needed to measure viewability.
Unfortunately for our industry, there are tons of sites that display multiple ad units on each side of the page, most of which are unlikely to ever be seen. That’s a legitimate cause for advertiser concern.
To fix the problem, the industry took a two-pronged approach. First, companies used technology to determine whether an ad was in view. The technology took a few years to evolve to the point of being more reliable than not, but are we really there yet? I don’t think most MRC-certified viewability vendors are.
The recent certifications by the ABC, the UK’s counterpart to the Media Rating Council, show that only MOAT passed all of the tests it was given, yet three other vendors were certified anyway. How did this happen? This isn’t school, where a score of 90 is an A. Billions of dollars of ad spending are at stake.
Second, the IAB set a standard for defining what “in view” actually means: It chose 50% of the ad pixels in view for one second or more. From a marketing standpoint, this is great. “Fifty and one” is so easy to remember. But was this truly tested and is it the right standard?
The viewability standard doesn’t seem to have been tested much. I’ve seen results showing that an ad being viewable at 50% or more does indeed improve the likelihood of a prospect taking a meaningful action on an advertiser’s website later. Those results also suggest the length of time the ad is in view does not play a role in conversion. But frequency unsurprisingly plays a significant role in influencing the likelihood of a user converting.
I find it very hard to believe the right standard is exactly 50% and one second. What if an advertiser prefers 90% and 0.3 seconds? Now you need to get your viewability vendor to code a custom standard.
What a pain. It didn’t have to be this way.
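The arithmetic behind such a check is simple enough to sketch. Here is a minimal, hypothetical illustration in Python with the IAB’s “fifty and one” as the defaults; the function names and the rectangle convention are mine for illustration, not any vendor’s actual API or implementation:

```python
def visible_fraction(ad, viewport):
    """Fraction of the ad's pixels inside the viewport.

    Both arguments are (left, top, right, bottom) tuples in page coordinates.
    """
    left = max(ad[0], viewport[0])
    top = max(ad[1], viewport[1])
    right = min(ad[2], viewport[2])
    bottom = min(ad[3], viewport[3])
    if right <= left or bottom <= top:
        return 0.0  # no overlap at all
    overlap = (right - left) * (bottom - top)
    ad_area = (ad[2] - ad[0]) * (ad[3] - ad[1])
    return overlap / ad_area


def is_viewable(fraction, seconds_in_view, min_fraction=0.5, min_seconds=1.0):
    """True if an impression meets the standard.

    Defaults encode the IAB's 50%-and-one-second rule; pass other values
    for a custom standard such as 90% and 0.3 seconds.
    """
    return fraction >= min_fraction and seconds_in_view >= min_seconds


# A 300x250 unit half scrolled off the top of a 1280x800 viewport:
frac = visible_fraction(ad=(100, -125, 400, 125), viewport=(0, 0, 1280, 800))
print(frac)                                                        # 0.5
print(is_viewable(frac, seconds_in_view=1.2))                      # True: meets 50%/1s
print(is_viewable(frac, 1.2, min_fraction=0.9, min_seconds=0.3))   # False: fails 90%
```

The same impression passes one standard and fails another, which is exactly why a single hard-coded threshold is an awkward fit for every advertiser.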
I wrote a blog post years ago titled something like “Why aren’t publishers accepting my eighth-party ad call?” I was flippantly commenting on the state of ad servers and how poorly they kept up with advertisers’ needs to measure and collect information, such as verification, viewability and audience data. It seems that every time ad servers catch up and integrate a feature, another, newer “must-have” measurement arises and we’re back to tag wrapping, discrepancies and wasted money.
Some agencies try to impress their clients by reassuring them they’ve added a viewability and fraud-monitoring vendor and forced their “partners” to pay for it. Some demand 100% viewability at a time when that simply isn’t realistic. Looking beyond the fact that “forcing a partner” is an oxymoron, this situation creates even more friction and lost spending in the digital ecosystem, with major discrepancies between certain viewability and fraud-monitoring vendors and ad servers.
We keep hearing how little friction there is in traditional media buying and that there’s too much in digital. Yet clients and agencies both keep approving the addition of more friction.
Where We’ll End Up
When you peel the duct tape off of something, you’re left with that sticky residue. This seems to be where we’re headed.
In the next few years, publishers will move to infinite-scroll and ad-rotation designs. Ad units will always be in view and countable well beyond the one-second mark before being rotated so publishers can make more money. It will be Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure” – at work yet again.
In this scenario, we will indeed reach very high viewability levels if we’re still using the current standard of 50% and one second. Ninety percent viewability or higher isn’t unreasonable. Yet the viewability vendors that have introduced significant friction into the marketplace will leave their technological footprint, like the plastic in our landfills that will remain long after humans are gone.
In the coming years, do your own testing. Every product is different and will require different standards. Identify sites and environments in which you’re comfortable placing your ads and use viewability technology as needed. But when it’s no longer needed, remember to unwrap that tag and remove friction where it’s not required.