Visualizing Big (Free) Data At Visualized

Visualized launched its inaugural data visualization event in New York City this week at The Times Center. Though not exclusively a business conference, Visualized had clear business implications – the visualization of big, bad data is critical as marketers, agencies, publishers, and data and ad tech companies sift for actionable insights.

For the Visualized audience, storytelling with data was the unifying theme.

On tap in the eighth and final session of the two-day conference:

WNYC’s Data News Team led off, as John Keefe and Steve Melendez took the audience through datasets and visualizations of the recent “Superstorm Sandy.” (This storm was a data storm!) Keefe said his team tried to anticipate the questions the storm would raise before it hit. He and his team ended up creating six embeddable widgets using free, U.S. government data.

Examples included a “Know Your Zone” map for users wondering whether the storm’s surge could overrun any address they entered, with storm surge flood zone maps pulling data from the city and the Army Corps of Engineers. And data from the Metropolitan Transportation Authority powered the data team’s popular Transit Tracker, which answered, “What’s going on with my subway train?” – or tunnel, bus or other favored transit vehicle.
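Under the hood, a tool like “Know Your Zone” boils down to a point-in-polygon test: geocode the address, then check whether the point falls inside a flood-zone shape. A minimal sketch of that test, using a made-up rectangular “Zone A” polygon rather than WNYC’s real flood-zone data:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside polygon (a list of (lon, lat) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge cross the horizontal ray extending left from the point?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical surge-zone polygon over lower Manhattan (NOT real NYC flood data)
ZONE_A = [(-74.02, 40.69), (-73.99, 40.69), (-73.99, 40.72), (-74.02, 40.72)]

print(point_in_polygon(-74.00, 40.70, ZONE_A))  # inside the zone
print(point_in_polygon(-74.05, 40.70, ZONE_A))  # well outside it
```

A production widget would run this against the city’s published zone shapefiles (typically via a GIS library), but the core lookup is this simple.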

From here, the underlying data could easily be of use to clever, location-based or geo-targeted ad campaigns.

Next, Burt’s Gustav von Sydow discussed “information design for doers,” stressing the need for simplicity in the design of analytics applications – and for making them fun and nice to look at. The “doers” in von Sydow’s presentation were media salespeople, which meant creating information design that drives more customers and more money. Supporting the workflow was key: develop the offer, persuade the customer and deliver “happiness.” A screenshot of the Burt interface showed how data visualizations can “sell.”

Von Sydow added that he never uses the term “analytics” with salespeople because their eyes glaze over. He closed by noting that Burt’s design-centric app sees the kind of engagement from its B2B audience that one might expect from a consumer app – and that it’s important to remember the design is for “doers,” not designers, and to respond accordingly.

Next, Alexis Lloyd of The New York Times’ R&D unit described how her team has started to innovate beyond the limited timeline of the hard news cycle – the NYT’s core business. She shared examples of internal prototypes inspired by the “Quantified Self” movement, which became known internally as the “Quantified News Organization” movement.

One prototype showed the intersection of traffic and content in an effort to find “rhythm and patterns.”  Over time, the shape (content) of the traffic changed as visualizations of the recent storm, the election and other recent events were displayed.

Article visualization used word count, content section and traffic measurement to make circular visualizations that looked like dandelions. Presumably, a quick glance across many of these graphics showed spikes of interest that would prompt the data viz viewer to look deeper at the consumption of news content. (It would seem this would all eventually get looped into the proverbial “holistic sales/yield management” machine at the NYT.) Lloyd suggested that answers could be derived for “When is it best to publish content?” or “What is the impact of our audience?”

Lloyd said The New York Times uses a tag system for articles that has been in place for over 100 years (take that, tag managers!) – a five-digit code that might indicate topics such as terrorism, airlines and airplanes, health topics and so on. A visualization of the Soviet Union tag versus the Russia tag showed a steep drop for the Soviet Union once the country broke apart. But, as the data viz showed, Russia only partially replaced the tag count, as editorial coverage was split among the different former Soviet republics. Another insight: use of the “decor” tag dropped off after 2001… who knew?
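Strip away the visuals and the Soviet Union–versus–Russia chart is a count of tag occurrences grouped by year. A toy sketch of that aggregation, using invented (year, tag) records standing in for the Times’ actual tagged archive (its real five-digit codes are not shown):

```python
from collections import Counter

# Invented (year, tag) records -- not real New York Times archive data
articles = [
    (1989, "Soviet Union"), (1990, "Soviet Union"), (1991, "Soviet Union"),
    (1992, "Russia"), (1992, "Ukraine"), (1993, "Russia"), (1993, "Russia"),
]

def tag_counts_by_year(records, tag):
    """Count how many articles carry `tag` in each year."""
    return Counter(year for year, t in records if t == tag)

print(tag_counts_by_year(articles, "Soviet Union"))
print(tag_counts_by_year(articles, "Russia"))
```

Plotting those per-tag yearly counts as overlaid lines reproduces the handoff Lloyd described: one series falls off a cliff after 1991 while the other only partially picks up the slack.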

Looping back to the intent of the original project – the Quantified News Organization – she recommended best practices to the audience: use as much data as possible, start small and iterate, and talk to people. The last one means don’t assume you know the data; ask those who have been creating it what it means.

Finally, The Guardian’s Simon Rogers took the stage, making clear he’s a “data journalist,” not a designer or scientist. He began with a news publication’s data visualization from 1821: a simple table of the attributes of boys’ and girls’ schools in Manchester, England. Today, his “Data Store” blog is decidedly more enhanced. Rogers emphasized the need to keep up with the news cycle in near real-time, as well as appending stories to data points. Once again, a slide of data viz from Superstorm Sandy was on display. (Was Sandy the biggest public data creator ever?)

As he showed infographics, Rogers said of the popular document format, “A PDF is where data goes to die.” There was definitely a sense that data visualization needs to be interactive, and reactive, to the latest data.

Rogers breezed through an example of how he and his team dealt with the recent London Olympics. He emphasized that anyone can do something interesting with data, which suggested the importance of an open, social platform for data visualization. Google appeared to offer the public toolkit of choice for Rogers. (Hello, Microsoft Office product team?)

Overall, there was an emphasis on the availability of useful free data and data tools. Does this mean delivering ads or targeting ads over time gets cheaper, more efficient? It would seem so. It also points to the increased importance of proprietary, first-party data.
