
Why The Digital Advertising Industry Needs An Ethics-By-Design Framework

Alessandro De Zanche

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Alessandro De Zanche, an audience and data strategy consultant.

After AdExchanger reported how Rocket Fuel questionably used client data for its user profiling and segmentation tools, I was struck by the cynical reaction from some execs and influential industry people, who seemed to shrug and suggest those types of shenanigans happen all the time.

And while it was encouraging that Google, Adobe, The Trade Desk and other companies contributed to the recent FBI operation that charged eight people for a multimillion-dollar ad fraud operation, how morally proactive is the industry in preventing, detecting and denouncing illegal behavior?

Craig Silverman, BuzzFeed’s media editor, recently declared in an interview: “I have never been lied to by more people in my career than since I’ve been dealing with people in digital advertising. It’s unbelievable.”

The silence of the industry as a whole on the topic of ethics is deafening. Technical solutions for subjects like a common ID, header bidding, viewability and even fraud are often discussed, but the underlying moral element is rarely, if ever, addressed.

There are individual companies that are committed to true value and transparency, but it is evidently not enough. It is time to develop an ethics-driven framework to improve the industry’s foundational principles, beyond single companies’ initiatives.

The subject takes on a whole new dimension when looking at how advertising technology and business models have incentivized and bred fake news and the proliferation of all sorts of content that dangerously undermines our societies and democracies.

I anticipate the immediate reaction to hearing this suggestion will go something like this: fraud is hard to spot, it’s quickly evolving and mutating, tricky to prove, etc. Focusing exclusively on these technical considerations makes it easy to gloss over the moral element — and avoid addressing the broader intersection between technology and ethics.

Moreover, a technology-focused debate doesn’t justify the lack of a coordinated and consistent ethical approach at all levels of seniority, trickling down (or up) from boards and CEOs to junior hires and back. We need a framework based on principles that everybody signs on to.

Maybe we could improve our moral approach by … stealing?

There are huge moral and philosophical questions surrounding artificial intelligence (AI). As in digital advertising, these questions are tightly intertwined with its technological and data dimensions. We can learn and take inspiration from a lot of work being done with AI.

Mariarosaria Taddeo and Luciano Floridi of the Digital Ethics Lab at the Oxford Internet Institute write: “The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software and hardware. This is known as distributed agency. With distributed agency comes distributed responsibility. Existing ethical frameworks address individual, human responsibility, with the goal of allocating punishment or reward based on the actions and intentions of an individual. They were not developed to deal with distributed responsibility.”

That’s one of the big obstacles – and sadly, also an excuse – when tackling illegal behavior within digital advertising: the complexity of the system and the many different “actors” involved.

That’s why a new concept for the industry – a “distributed moral responsibility” model – would be beneficial.

“This model separates responsibility of an agent from their intentions to perform a given action or their ability to control its outcomes and holds all agents of a distributed system, such as a company, responsible,” Floridi writes. “This is key when considering the case of AI, because it distributes moral responsibility among designers, regulators and users. In doing so, the model plays a central role in preventing evil and fostering good, because it nudges all involved agents to adopt responsible behaviors.”

We need to shift our view. Digital advertising is the result of distributed actions, which call for distributed responsibility. This does not mean that since the burden is shared among different people, everybody is a little less responsible.

Rather, distributed responsibility brings everyone under the spotlight and makes everyone an equally responsible agent in the system. The alternative is omertà, the code of silence of the Italian mafia’s underworld, in which people who fail to speak up about what they know or have witnessed impede the discovery and prosecution of crimes, even crimes in which they played no direct role.

These are just some elements to start an industrywide conversation – and an example of how the AI field is debating ethics in a way that also aligns with how digital advertising works.

It would be naïve to think that this approach would singlehandedly stop fraud or magically solve the industry’s transparency issues. But it would be a bold first step that would send a strong message externally and, much more importantly, internally. Once defined, companies would have to publicly pledge to abide by the framework and make it a core element of their policies, agreements and training programs.

Just as newly enforced privacy regulations require a privacy-by-design approach, it is time for a “morality-by-design” framework. Hopefully we have learned our lesson; it would be embarrassing if it took another FBI investigation or other government response to force the industry to formally and practically accept its own (shared and distributed) responsibilities.

Follow Alessandro De Zanche (@fastbreakdgtl) and AdExchanger (@adexchanger) on Twitter.
