As AI Use Expands, We Need Standards To Identify Content Not Created By Humans

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Krish Sailam, global senior vice president of programmatic solutions at DWA, a Merkle company.

Some may think artificial intelligence (AI) and machine learning (ML) personalization is all about altering banners and landing pages. However, AI can also create high-quality video, audio and images – and tune them to the emotions of individuals.

Some major newspapers are already using AI to generate content; The Washington Post’s Heliograf, for example, has produced hundreds of short automated reports on elections and sports. AI is powerful, fast and has already begun to alter the very definition of a fact.

AI-generated content will mark a massive step forward. But it may also do great damage. For that reason, Facebook recently implemented a “deepfakes” policy outlining how the platform would remove media that is manipulated by AI or machine learning to appear authentic.

Now is the time for the publishing and advertising industries to develop standards like this for AI-generated content. If we do not, the government will.

I could see a few basic classifications to start (see the sketch after this list):

  • Fully machine-generated
  • Machine-assisted, human-generated + human-edited
  • Machine-generated + human-edited
  • Human-generated + human-edited
  • Human-generated + machine-edited
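
A rough sketch of how that taxonomy might be encoded as a machine-readable vocabulary – the names below are my own invention, not part of any published standard:

```typescript
// Hypothetical content-origin vocabulary; the value names are
// illustrative only and have not been standardized anywhere.
type ContentOrigin =
  | "machine_generated"                 // fully machine-generated
  | "machine_assisted_human_edited"     // machine-assisted, human-generated + human-edited
  | "machine_generated_human_edited"    // machine-generated + human-edited
  | "human_generated_human_edited"      // human-generated + human-edited
  | "human_generated_machine_edited";   // human-generated + machine-edited
```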

These standards are rather rudimentary to begin with; even a simple declaration identifying a machine as the “author” or “creator” of a piece of content would help provide transparency. A field within the OpenRTB spec could also declare whether content is machine-generated or human-generated, which may help programmatic platforms decide whether they should bid on that inventory.
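To make the OpenRTB idea concrete, here is a hypothetical sketch of how such a signal could ride along in a bid request. To be clear, no “contentorigin” field exists in the OpenRTB spec today; the example below invents one inside the spec’s “ext” extension object and reuses the ContentOrigin type sketched above:

```typescript
// Hypothetical: a content-origin declaration inside the "ext" object of
// OpenRTB's Content object. OpenRTB reserves "ext" for fields beyond the
// core spec, so this could only work exchange-by-exchange today; a true
// standard field would need IAB Tech Lab adoption.
interface ContentExt {
  contentorigin?: ContentOrigin; // invented field name, not in the spec
}

const bidRequest = {
  id: "bid-request-1234",
  site: {
    content: {
      title: "Example machine-written article",
      ext: { contentorigin: "machine_generated" } as ContentExt,
    },
  },
};

// A buy-side platform could then filter on the signal when deciding
// whether to bid on the impression:
const origin = bidRequest.site.content.ext?.contentorigin;
const shouldBid = origin !== "machine_generated";
```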

Creating these standards would also spur a discussion about what the official definition of an author should be going forward, as well as who – or what – can be given a copyright. If something is created by a machine – and to the best of my knowledge, a machine cannot own a copyright – is that content considered part of the public domain?

If we dig further, uploading machine-generated content to platforms such as YouTube might test their current copyright ownership policies and therefore require Google to redefine what type of content can be monetized.

Even so, the benefits of customized content are so great that it would be foolish not to invest in it.

Congress has never been able to move at the speed of new technology, and we can’t expect that pattern to improve, although there have been some developments on this front.

Sen. John Thune, R-S.D., has proposed the Filter Bubble Transparency Act, which would enable users to learn about, and opt out of, algorithm-driven personalization.

The bill is simple, but it moves in the right direction. We do want companies to open their algorithms to ensure that they’re not being used for immoral purposes. And we want some kind of standards in place to protect citizens. The challenge is to define “immoral” while honoring free speech.

The issue of advanced AI and personalization is not confined to the web. It will manifest across smart home devices, work, finances, healthcare, IoT, insurance and so on.

We’ve already seen evidence of machine-generated political mayhem, with voters inundated by ads targeted by machines. Stories written by machines but passed off as human-authored could do even more to convince voters that false stories are true.

The industry needs to develop standards for AI-generated content soon, preferably within the next six to 12 months. The IAB should start by calling on all publishers to label AI-generated content. The cost of waiting could very well exceed the cost of these modest initial efforts.

Follow DWA (@dwaTechMedia) and AdExchanger (@adexchanger) on Twitter.
