All Hail the V.A.D.A.R.

“Data-Driven Thinking” is a column written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by David Soloff, CEO, Metamarkets, a publisher analytics company.

Around the time he was busy founding a retail bank, my good friend Josh Reich offered the insight that ‘data trades inversely to liquidity.’ That insight suggests two related use cases, relevant to all electronic trading markets, including the one for online ads.

Use Case the First; in which the trade itself is the shortest distance to discovering price

In fast-moving, liquid markets, the best way to discover the value of a given asset is to trade it. This works beautifully in standardized markets like equities, options or commodities futures, reliant as they are on screen-based trading and standardized instrumentation. In these more advanced market structures, trade capture, high-speed information distribution and clearing services are so tightly internetworked that in the selfsame moment a bid is hit, price is revealed. The direct relationship holds true in all but the most broken of markets: trade these standardized assets at higher volume and lower latency and you get a more precise and reliable snapshot of valuation. None of this is to say that trading markets come even part-way toward determining a ‘perfect’ price in the philosophical sense.

In liquid markets, market data services will lag trade execution as a mechanism for price discovery. That isn’t to say market data services don’t have a critical role to play in fast-moving markets: no trading desk or electronic broker would outline an execution strategy without the data needed to build a solid analytic basis for a winning trade. In this use case, market data provides a solid signal against which to execute; any HFT (high-frequency trading) shop relies on multiple high-quality, high-frequency, neutrally aggregated market data sets to feed the models that form the basis for the day’s trade execution.

In these markets, data is thrown off by the transacting parties with such volume and velocity that the need to keep pace with the capture, analysis and display of financial markets data has driven technological innovation in the big data and anomaly detection industries for over a generation. In such environments, transaction data is plentiful and cheap: it’s easy to come by a trading signal with which to value, approximately or precisely, an asset you may hold. As a result, the financial value the market places on these data sets trades inversely to the liquidity of the market that generates them. Trading is the primary mechanism by which market actors assimilate all available information; in liquid markets, price discovery is best effected through the transaction itself. In some regard this is a reflection of capital allocation: actors in a market allocate capital either to the build-out of low-latency clearing infrastructure or to trade capture and data distribution. This is the lagging use case, in which market data validates, tests and optimizes a principal trading strategy.

Use Case the Second; in which the market itself conspires against price discovery

In more difficult, lumpy and opaque markets, great difficulty frequently attends discovering the value of an asset and finding the appropriate buyer for it. Pricing is a seemingly impossible problem, and as a result illiquidity begets illiquidity: traders sit on assets, either for fear of selling the asset short or of triggering a collapse when the revaluation is confirmed by a transaction. The only answer here is lots and lots of corroborating or triangulating data. Data trades inversely to liquidity. In the absence of a transaction flow to dip one’s toes into, data sets and valuation frameworks are the only mechanism for discovering how to value an asset. This is the case in emerging markets fixed income, distressed debt, CMBS/CDOs, rare earth metals or special situations stocks.

This need for data presents a major business opportunity. Capitalism abhors a vacuum, and all manner of public and private data sets have been created to help seller and buyer determine a fair trade. In illiquid and opaque markets, capital has been allocated very efficiently to help buyers and sellers learn how to fairly value the asset. Extremely large and profitable businesses have grown up around the creation and delivery of these information products: FactSet, Markit, RiskMetrics, Moody’s and S&P, to name a few. The system of independent information brokers works pretty well, despite some periodic cataclysmic hiccups, especially considering the amount of capital and risk flowing through these systems’ pipes. Ironically and inevitably, transactional liquidity is a natural consequence of introducing this data to illiquid markets, as actors get their sea legs and trade with increased confidence.

The best information vendors to these illiquid markets are typically contributory data networks: individual parties contribute their transaction sets to a common pool in exchange for access to the aggregated data set, stored and attended to by a consortium-owned data broker. Mike Driscoll has coined the acronym V.A.D.A.R. to describe these “value-added data aggregators and redistributors.” These are the businesses that turn straw into gold. They can be for-profit, benefitting the data contributors in more ways than one via a revenue share prorated according to the volume of data contributed. When well executed, they are in many regards perfect businesses: eminently scalable; increasing quality of product as more contributors join the pool; low marginal cost for distributing data to new customers; opportunities for multiple derivative products and business lines.
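For illustration only, here is a minimal sketch in Python of the pro-rata revenue share described above: each member of a contributory data network receives a slice of the aggregator’s revenue-share pool in proportion to the volume of data it contributed. The function name, contributor names and figures are hypothetical, not drawn from any actual V.A.D.A.R.

```python
# Hypothetical sketch: split a consortium's revenue-share pool across data
# contributors, pro rata by the volume of data each one contributed.

def prorated_revenue_share(contributed_volumes, total_revenue_share):
    """Return each contributor's payout, proportional to its data volume."""
    total_volume = sum(contributed_volumes.values())
    if total_volume == 0:
        return {name: 0.0 for name in contributed_volumes}
    return {
        name: total_revenue_share * volume / total_volume
        for name, volume in contributed_volumes.items()
    }

if __name__ == "__main__":
    # Records contributed to the pool by each (hypothetical) member firm
    volumes = {"firm_a": 5_000_000, "firm_b": 3_000_000, "firm_c": 2_000_000}
    payouts = prorated_revenue_share(volumes, total_revenue_share=1_000_000)
    for firm, payout in sorted(payouts.items()):
        print(f"{firm}: ${payout:,.0f}")
    # firm_a: $500,000, firm_b: $300,000, firm_c: $200,000
```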

Peter Borish, another friend with a financial markets pedigree, says that for a V.A.D.A.R. to create meaningful products, its mechanism for data capture and processing must display certain key traits:

  1. Capture must be repeatable
  2. Output must be functional
  3. Information products must be rational with demonstrable interrelationships
  4. Data transformation can be opaque, but not an impenetrable black box

When these businesses gain traction, they are things of operational beauty:

  1. Markit: $1B in 2010 sales
  2. CoreLogic: $2B in 2010 sales
  3. IMS Health: $2B in 2009 sales
  4. GfK Group: $1.4B in 2009 sales
  5. Information Resources (IRI, now Symphony IRI): $660m in 2009 sales
  6. Westat Group: $425m in 2009 sales

These businesses act as information aggregators for specific verticals, and by virtue of the scale and quality of the data they collect, they become the de facto gold standard for their industries. Nobody conducts business, whether buying assets or pricing product, without consulting a data set or information product delivered by these companies. These companies are duty-bound to protect and secure the data entrusted to them. By the same token, contributors recognize that their data yields value as information only when it is analyzed and statistically combined with other contributors’ data.

Postscript

Perhaps the time is now for such a business to emerge to serve the electronic media vertical. Perhaps the absence of information products is what is holding the big money on the sidelines? Maybe the introduction of triangulating data will enable buyers to participate more confidently in these opaque and illiquid markets. Perhaps this business could offer two product lines to suit both use cases: information products for low-latency auction markets on the one hand, and for more opaque and hard-to-value assets, such as contract-based inventory, on the other. Perhaps the rule of efficient allocation of capital dictates that this happen sooner rather than later?

Follow David Soloff (@davidsoloff), Metamarkets (@metamx) and AdExchanger.com (@adexchanger) on Twitter.
