“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today’s column is written by Kathy Leake, CEO at Qualia.
There’s a race in our industry to be “right” about the state of cross-device data science. Instead, we should be smart enough to admit, publicly, that we haven’t quite arrived.
Some will say we haven’t gotten it right because we reduce the argument to one about “probabilistic vs. deterministic,” and because we are imprecise about what those data sets even mean. Some will say the science just isn’t there yet, or that blind spots remain in the post-cookie next generation of data-driven marketing. Some will say we are simply not rigorous enough in our standards for assessing our own accuracy and quality.
I say it’s a bit of all of these but, first and foremost, our imperfect state boils down to a fundamental issue endemic to our industry: We are not telling the truth about the state of our capabilities.
We lack a collective authenticity and integrity – and often are not proactive about addressing the ignorance that prevails in areas key to our business. That falseness impacts the way we pursue the science itself, and has us falling short on how we service the market and deliver value and relevant experiences to the consumer. This has to change.
Unfortunately, not everyone in the cross-device brood is aiming for a standard of data excellence. Instead, we are collectively just getting by. There’s a lot of inaccuracy that still goes unchecked, for which neither the buy side nor industry groups hold us accountable. The byproduct of such unchecked inaccuracy is a state of uncertainty and unrealized potential for all.
The only way that we will rise together to optimize the science for marketers is to universally commit to data quality through accountability to a standard of accuracy and quality – and to stay true to marketers on what constitutes this quality.
More than cracking the right equation on probabilistic vs. deterministic, more than solving the post-cookie quandary, more than better self-assessment – a philosophical shift toward quality standards is required.
Specifically, there are three areas where we struggle and where very few people are honest.
Dumbing Down And Overreliance On Definitions
If we rest on the presumed accuracy of deterministic data – or even a deterministic and probabilistic blend – our applied science will fall short.
Reliance on an oversimplified data definition and the processes to manage that data will fail us. We end up marketing with bad data, and the consumer has a terrible experience with off-target ads. Our media economy becomes fraught with near misses and consumer disenchantment.
The “truth” point here is this: You’ve got to augment deterministic data because it is not a silver bullet. There’s nothing wrong with being honest about that. Google and Facebook probably know this, but there’s no imperative for them to promote it since they already capture nearly 80% of the revenue in the ecosystem.
Post-Cookie Reality Has Not Fully Arrived
Our cross-device data picture remains flawed because mobile data still has holes. This is largely because we are not there yet on the unique identifier. Web and app environments are entirely different from an identification standpoint, as are iOS and Android. Something as simple as a marketer showing an ad in an app and measuring the conversion in a browser is nearly impossible with standard identification techniques.
Cross-device graphs can help bridge these identity gaps, but not all of our graphing solutions are equipped to solve this issue. The unspoken fact remains: Most device graphs are built on IP and cookies. It’s not that this is dishonest or disingenuous – it’s that we should be focused on more stable and reliable options.
Quality And Accuracy Assurance: A Tall Order Being Shortcut
There is a rampant pretense of self-assessment in the cross-screen space, but the assessment is faulty because it hangs its hat on an egregious shortcut. Most studies are precision-only and run on a very small slice of the cross-device graph matched against a small panel from the validating company. The result is an extremely small sample that can be cherry-picked for the test – and that is statistically insignificant to begin with.
A true self-audit is precise, continuous and significant. It does not focus on a moment-in-time snapshot of a select sample from the graph but rather gathers data across a longer timeframe and across a broader subsection of the graph.
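The flaw in precision-only validation is easy to see in miniature. The sketch below uses entirely hypothetical device IDs and a toy panel (no real graph or vendor data is implied): a graph that correctly links only the “easy” households scores perfectly when validated against a sample curated to overlap with its own coverage, while an honest audit against the full panel exposes how much of the consumer’s cross-screen reality it misses.

```python
# Illustrative sketch (hypothetical data): why a precision-only check on a
# small, favorable sample can flatter a device graph while hiding low recall.

def precision_recall(predicted_pairs, true_pairs):
    """Precision: share of graph-asserted device links that are real.
    Recall: share of real device links the graph actually found."""
    predicted, truth = set(predicted_pairs), set(true_pairs)
    true_positives = len(predicted & truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    return precision, recall

# Ground-truth device links from a validation panel (hypothetical IDs).
panel_truth = {("phone_%d" % i, "laptop_%d" % i) for i in range(1000)}

# A graph that links only the "easy" 100 households -- but links them correctly.
graph_links = {("phone_%d" % i, "laptop_%d" % i) for i in range(100)}

# Cherry-picked test: validate only against panelists the graph already covers.
small_sample = {pair for pair in panel_truth if pair in graph_links}
p_small, r_small = precision_recall(graph_links, small_sample)

# Honest test: validate against the full panel.
p_full, r_full = precision_recall(graph_links, panel_truth)

print(p_small, r_small)  # 1.0 1.0 -- looks perfect on the curated sample
print(p_full, r_full)    # 1.0 0.1 -- same graph misses 90% of real links
```

The graph never changed between the two tests; only the yardstick did, which is why a self-audit that controls its own sample tells marketers very little.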
If we collectively want to recognize the consumer’s cross-screen reality and, therefore, the opportunity it represents to deliver powerful, creative data-driven marketing that enthralls and engages consumers, we need to own up to our own shortcomings.
Let’s not pretend that deterministic data amounts to truth and overrepresent its validity to marketers. Let’s not ignore the blind spots in the cross-screen data picture. And let’s not pretend to effectively assess our own quality and accuracy.
It’s time to live the truth as an industry: Drop the cover-up and operate with integrity.