MRC Aims To Bridge The Viewability Reconciliation Gap – But Mobile Remains An Open Question

Viewability vendors mostly speak the same language – but they don’t always provide brands and publishers with the same results.

It’s a major pain point, and one the Media Rating Council (MRC) has been attempting to tackle in a three-part series of reconciliation tests that began back in 2013. The goal: to understand why there are marked discrepancies between the viewable impression numbers reported by various MRC-accredited vendors looking at the same traffic.

The final results were released late Wednesday.

The MRC acknowledged in its reconciliation study that more than one-third of the campaigns it looked at showed an unacceptably high level of discrepancy between vendors – higher than in both of its previous phases of reconciliation testing, a matter it plans to investigate.

For the majority of the campaigns under the MRC’s microscope (63%), the variance observed between vendors fell within what the MRC regarded as “an acceptable range” – a less than 10% difference, with an average weighted difference of 4.1% per campaign. The remaining 37% of campaigns, however, clocked in with differentials of more than 10% and an average weighted difference of 34% per campaign.

Mobile appears to be a big part of the issue.

In drilling down into the differences, the need for more robust guidance on mobile viewability was conspicuous. The MRC found that 54% of reconciliation issues were due to vendors treating mobile viewable impressions differently than desktop impressions in their reporting.

No vendor has yet been accredited for a mobile-specific viewability product.

From the MRC’s perspective, impressions served to mobile environments need to be clearly segregated from desktop impressions for the purposes of viewability reporting, a point the council made in the interim mobile guidelines it released in early May. The MRC also recommended separate reporting for mobile web impressions and for in-app impressions.

“We support the MRC in their efforts – but at the same time, there are still a lot of questions that need further attention, especially around how mobile impressions are treated,” said Michael Krauss, Integral Ad Science’s director of product management, who was present on a joint call hosted by the Interactive Advertising Bureau (IAB) and the MRC as they discussed the results on Wednesday.

Mobile aside, several other factors were also found to undermine measurement consistency, including the fact that some vendors handle pixel tags served by other vendors differently (responsible for 13% of the discrepancy) and that vendors diverge in how they account for multi-ad campaigns (28%).

In the case of multi-ad units – say a sponsorship package that includes a large banner ad and several other smaller ads placed around a page – it’s particularly difficult to determine which ad or ads should “count” toward delivery of a campaign, Integral’s Krauss said.

Rounding out the MRC’s conclusions, 2% of discrepancies were found to be the result of variability between how different vendors filter unverified traffic, while 3% were attributed to non-human traffic, human error and other related factors.

The MRC said it plans to address most of those issues in a forthcoming update to its viewability guidelines “in the near future.” No specific date was provided.

According to research released by digital consultancy 614 Group in April, vendor-to-vendor disparity was named one of the top challenges facing both the buy side and the sell side with regard to measuring viewability.

“Everyone in the industry must ask how we can reasonably expect to transact media based on a standard where no two measurements are the same,” the report stated tersely.

For its part, although the IAB supported the MRC’s efforts, it also had this to say in its public statement on the matter: “Clearly, the MRC analysis demonstrates the need to move faster in solving for the root causes of measurement disparity and inadequacies.”

Because there’s a lot of money at stake, Krauss said.

“The market wants to be able to trust the vendors who are MRC certified to, at the very least, come up with similar measurements for campaigns,” he said.

The MRC’s third and final reconciliation report was based on campaign data from Q1 and Q2 2015. The council examined nearly 4 billion served ad impressions across both video and display. Site, placement and creative type were all taken into consideration.

