Can Redlining Offer Clues To Balancing Big Data And Privacy?


"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Ray Kingman, CEO at Semcasting.

On May 1, the President’s Council of Advisors on Science and Technology (PCAST) delivered its analysis of the current and future impact of big data on consumer privacy. Unlike some other recent government reviews on big data, data brokers and privacy, this 57-page report actually provides a realistic assessment of how data is collected, analyzed and used today. The PCAST accomplishes a very important task by distinguishing between the key components of the big data enterprise: collection, analytics and usage.

The PCAST report explores the complexity of data collection and analytics – and where it could undermine the White House's proposed 2012 Privacy Bill of Rights. The council concludes that the rapid evolution of data collection and increasingly sophisticated analytics is likely to end-run current privacy safeguards such as opt-out and de-identification.

Data usage represents the inflection point where the potential for harm and regulation meet. With no intent to diminish or dilute the standing legislation, it seems to me that the concept of redlining can serve as a useful construct for understanding how a balanced approach to big data can be achieved without compromising privacy.

The Fair Housing Act of 1968 prohibited the practice of redlining based on race, religion, sex, familial status, disability or ethnic origin. The act prevents the use of sensitive individual or geodemographic attributes for the express purpose of withholding goods or services from an individual or group.

Three curtailments on the “use” of data come to mind as candidates for similar “redlining” prohibitions in digital advertising.

The first category would be first-party customer data that organizations collect every day. The economy depends on this information being available to maintain robust customer relationships. To protect against inappropriate usage of consumer data, however, online links, facts or personal first-party relationship data should not be brokered for redistribution without a consumer's explicit consent each time the information is used. Global opt-in/opt-out methods are designed for the legal protection of the data brokers and are increasingly easy to manipulate.

The second curtailment covers third-party prospecting data from data brokers, compilers, ad networks, DMPs and others. This data should adhere to a stronger "de-identification" standard. Organizations and data collectors should follow the Census Bureau's lead on how to aggregate personal data online and offline so that it protects the identity of, and sensitive information about, the individual. This would give marketers the benefit of adequate consumer segmentation and insight without putting consumers at risk of being tracked.

Finally, the third curtailment covers private facts. The PCAST report makes a compelling case that consumers have the right to "the reasonable expectation of privacy." A prohibition on collecting private facts for public exposure, or on tracking an individual online, is an obvious candidate for regulation. But as our digital lives expand to include search histories, locations, social networks and shared content, the risk of inadvertently exposing sensitive information increases. In addition to regulating obviously sensitive private facts, such as credit scores and children's ages, a redlining case could be made that Do Not Track and cookie collection beyond the first-party relationship should be opt-out by default.

Big data offers a wealth of benefits, efficiencies and opportunities for consumers and businesses alike. In order to maintain a balance between those benefits and the reasonable expectation of privacy, clear rules of engagement must exist:

• Any information about an individual that is collected and shared should be for the benefit of that individual.

• Private communications should remain private until shared by the individual.

• Online activities and physical locations should not be monitored, tracked or redistributed.

• First-party information should not be used or redistributed by any organization without the knowledge and control of the individual.

Redlining is fundamentally designed around the principle of “do no harm.” To establish reasonable standards of engagement around one’s online identity, tracking and the misappropriation of personal information, “do no harm” seems like a good place to draw a line.

Follow Semcasting (@Semcasting) and AdExchanger (@adexchanger) on Twitter.


