
All Hail the V.A.D.A.R.


“Data-Driven Thinking” is a column written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by David Soloff, CEO, Metamarkets, a publisher analytics company.

Around the time he was busy founding a retail bank, my good friend Josh Reich offered the insight that ‘data trades inversely to liquidity.’ That insight suggests two related use cases, relevant to all electronic trading markets, including the one for online ads.

Use Case the First; in which the trade itself is the shortest distance to discovering price

In fast-moving, liquid markets, the best way to discover the value of a given asset is to trade it. This works beautifully in standardized markets like equities, options or commodities futures, reliant as they are on screen-based trading and standardized instrumentation. In these more advanced market structures, trade capture, high-speed information distribution and clearing services are so tightly inter-networked that in the selfsame moment a bid is hit, price is revealed. The direct relationship holds true in all but the most broken of markets: trade these standardized assets at higher volumes and lower latency to get a more precise and reliable snapshot of valuation. None of this is to say that trading markets come even part-way toward determining price perfectly, in any philosophical sense of ‘perfect.’

In liquid markets, market data services will lag trade execution as a mechanism for price discovery. Which isn’t to say that market data services don’t have a critical role to play in fast-moving markets: no trading desk or electronic broker would outline an execution strategy without the data necessary to provide a solid analytic basis for it. In this use case, market data provides a solid signal against which to execute: any HFT (high-frequency trading) shop relies on multiple high-quality, high-frequency, neutrally aggregated market data sets to feed the models that form the basis for the day’s trade execution.
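To put a shape on that ‘solid signal,’ here is a minimal sketch in Python, using entirely hypothetical trade prints, of one input an execution model might consume: a volume-weighted average price (VWAP) computed from an aggregated tape and used as a benchmark against which to judge fills.

```python
# Minimal sketch with hypothetical data: one example of the kind of signal
# a desk might compute from an aggregated trade tape, namely a simple
# volume-weighted average price (VWAP) used as an execution benchmark.
from dataclasses import dataclass

@dataclass
class Trade:
    price: float  # executed price
    size: int     # executed quantity

def vwap(tape: list[Trade]) -> float:
    """Volume-weighted average price over a list of trade prints."""
    notional = sum(t.price * t.size for t in tape)
    volume = sum(t.size for t in tape)
    return notional / volume

# Hypothetical prints from a market data feed:
tape = [Trade(100.10, 500), Trade(100.12, 200), Trade(100.08, 800)]
print(f"VWAP benchmark: {vwap(tape):.4f}")  # compare fills against this
```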

In these markets, data is thrown off by the transacting parties with such volume and velocity that the need to keep pace with the capture, analysis and display of financial markets data has driven technological innovation in the big data and anomaly detection industries for over a generation. In such environments, transaction data is plentiful and cheap: it’s easy to come by a trading signal to approximately or precisely value an asset you may hold. As a result, the financial value the market places on these data sets trades inversely to the liquidity of the market that generates them. Trading is the primary mechanism that market actors use to assimilate all available information, so in liquid markets price discovery is best effected through the transaction. In some regard this is a reflection of capital allocation: actors in a market allocate capital either to the build-out of low-latency clearing infrastructure or to trade capture and data distribution. This is the lagging use case, in which market data validates, tests and optimizes a principal trading strategy.

Use Case the Second; in which the market itself conspires against price discovery

In more difficult, lumpy and opaque markets, great difficulty frequently attends discovering the value of, and finding the appropriate buyer for, an asset. Pricing is a seemingly impossible problem, and as a result illiquidity begets illiquidity as traders sit on assets, either for fear of underselling them or of triggering a collapse once a transaction confirms a lower valuation. The only answer here is lots and lots of corroborating or triangulating data. Data trades inversely to liquidity. In the absence of a transaction flow to dip one’s toes into, data sets and valuation frameworks are the only mechanism for discovering how to value an asset. This is the case in emerging-market fixed income, distressed debt, CMBS/CDOs, rare earth metals or special-situations stocks.
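To make ‘triangulating data’ concrete, here is a minimal sketch, with comparables and similarity scores that are entirely hypothetical, of one simple valuation framework: a similarity-weighted average of prices observed in related assets, used when the asset itself has no recent trades.

```python
# Minimal sketch (all names and numbers hypothetical) of triangulating a
# value for an illiquid asset from comparable observations when there are
# no recent trades in the asset itself.

def triangulate(comps: list[tuple[float, float]]) -> float:
    """Similarity-weighted average of comparable prices.

    Each comp is (observed_price, similarity), where similarity in (0, 1]
    scores how closely the comparable resembles the asset being valued.
    """
    weighted = sum(price * sim for price, sim in comps)
    total_sim = sum(sim for _, sim in comps)
    return weighted / total_sim

# Hypothetical comparables: a dealer quote, an index mark, a stale trade.
comps = [(92.0, 0.9), (88.5, 0.6), (95.0, 0.4)]
print(f"Triangulated fair value: {triangulate(comps):.2f}")
```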

This need for data presents a major business opportunity. Capitalism abhors a vacuum, and all manner of public and private data sets have been created to help seller and buyer determine a fair trade. In illiquid and opaque markets, capital has been allocated very efficiently to help buyers and sellers of the asset learn how to fairly value that asset. Extremely large and profitable businesses have grown up around the creation and delivery of these information products: FactSet, Markit, RiskMetrics, Moody’s, S&P, to name a few. The system of independent information brokers works pretty well, despite some periodic cataclysmic hiccups, and especially taking into account the amount of capital and risk flowing through these various systems’ pipes. Ironically and inevitably, transactional liquidity is a very natural consequence of the introduction of this data to illiquid markets as actors get their sea legs and trade with increased confidence.

The best information vendors to these illiquid markets are typically contributory data networks: individual parties contribute their transaction sets to a common pool in exchange for access to the aggregated data set, stored and attended to by a consortium-owned data broker. Mike Driscoll has coined the acronym V.A.D.A.R. to describe these “value-added data aggregators and redistributors.” These are the businesses that turn straw into gold. They can be for-profit, benefitting the data contributors in more ways than one via a revenue share prorated according to the volume of contributed data, as sketched below. When well executed, they are in many regards perfect businesses: eminently scalable; increasing quality of product as more contributors join the pool; low marginal cost for distributing data to new customers; opportunities for multiple derivative products and business lines.
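A minimal sketch of those pool economics, with contributors and figures that are entirely hypothetical: revenue flows back to each contributor pro rata by the volume of data it supplies.

```python
# Minimal sketch (hypothetical figures) of contributory-pool economics:
# revenue shared back to contributors in proportion to the volume of
# data each one contributed.

def prorated_shares(contributions: dict[str, int], revenue: float) -> dict[str, float]:
    """Split `revenue` across contributors pro rata by contributed volume."""
    total = sum(contributions.values())
    return {name: revenue * vol / total for name, vol in contributions.items()}

# Hypothetical contributors and record counts:
pool = {"bank_a": 6_000_000, "bank_b": 3_000_000, "broker_c": 1_000_000}
for name, share in prorated_shares(pool, revenue=500_000.0).items():
    print(f"{name}: ${share:,.0f}")  # bank_a: $300,000, and so on
```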


Peter Borish, another friend with financial markets pedigree, says that for a V.A.D.A.R. to create meaningful products, its mechanism for data capture and processing must display certain key traits:

  1. Capture must be repeatable
  2. Output must be functional
  3. Information products must be rational with demonstrable interrelationships
  4. Data transformation can be opaque, but not an impenetrable black box

When these businesses gain traction, they are things of operational beauty:

  1. Markit: $1B in 2010 sales
  2. CoreLogic: $2B in 2010 sales
  3. IMS Health: $2B in 2009 sales
  4. GfK Group: $1.4B in 2009 sales
  5. Information Resources (IRI, now Symphony IRI): $660m in 2009 sales
  6. Westat Group: $425m in 2009 sales

These businesses act as information aggregators for specific verticals, and by virtue of the scale and quality of the data they collect, they become the de facto gold standard for their industries. Nobody conducts business, whether buying assets or pricing product, without consulting a data set or information product delivered by these companies. These companies are duty-bound to protect and keep secure the data entrusted to them. By the same token, contributors recognize that their data yields value as information only when it is analyzed and statistically combined with the data of others.

Postscript

Perhaps the time is now for such a business to emerge to serve the electronic media vertical. Perhaps the absence of information products is holding the big money on the sidelines? Maybe the introduction of triangulating data will enable buyers to participate more confidently in these opaque and illiquid markets. Perhaps such a business could offer two product lines to suit both use cases: information products for low-latency auction markets on the one hand, and for more opaque and hard-to-value assets, such as contract-based inventory, on the other. Perhaps the rule of efficient capital allocation dictates that this happen sooner rather than later?

Follow David Soloff (@davidsoloff), Metamarkets (@metamx) and AdExchanger.com (@adexchanger) on Twitter.
