
IAB’s New AI Regulations Give Advertisers A Starting Point – But Plenty Of Questions Remain


It’s not exactly a secret that many advertisers are using generative AI in their marketing, from producing copy to editing images.

What isn’t always as obvious is exactly when those tools are being used, and when advertisers ought to make AI usage clear to their audiences.

Last month, the IAB launched a new framework to standardize when AI in ads should be disclosed. The standards aim to establish consumer trust without creating “label fatigue,” said Caroline Giegerich, VP of AI at the IAB.

In many ways, the framework is great news for an industry with no unified standard for when to mandate AI disclosure. But it also brings up a lot of new questions that are proving difficult to answer.

Setting the standard

One of the thorniest of those questions is just how visible AI usage should be to consumers.

The IAB cautions against “universal labeling,” said Giegerich, because it could mislead consumers about how much human prompting and editing was involved in an ad labeled as AI. Using some degree of AI-generated creative in marketing, she said, is “almost … a necessity” at this point.

Giegerich described the current US regulatory landscape for AI disclosure in advertising as “a free-for-all of ‘maybe we label it, maybe we don’t.’” Massachusetts, for example, has very conservative disclosure guidelines, she said, whereas other states have none at all.

To stay on the safe side, she said, advertisers often default to “the most restrictive” regulation. But an overarching view on AI, like the one laid out in the framework, would eliminate the challenge brands currently face of keeping up with dozens of different standards, she noted.

Of course, the IAB isn't a regulatory body and can't enforce the adoption of its standards. Its rulebook is designed to complement, rather than replace, legal compliance.

But the framework urges advertisers to adopt its standards to “avoid fragmentation and establish trust.” It also suggests that publishers and platforms flag ads for review and remediation if they don’t contain the recommended disclosures (including embedded metadata), giving advertisers another, less altruistic, reason to comply.

Agree to disagree

At its core, the IAB’s AI disclosure rule is straightforward.

Any ad that could plausibly “deceive” someone – like, for example, using an AI-generated person in fabricated before-and-after shots for wrinkle cream – must be labeled.

But what if someone uses AI to generate an image of a celebrity drinking a cup of Starbucks coffee, when the actual human despises Starbucks? Is that considered deception?

That depends on the opinion of a human reviewer, and could vary. Three different people could have three different perspectives, said Giegerich.

Some of the guidelines are pretty clear, like the requirement to flag any potentially misleading content. For instance, AI depictions of deceased individuals must be labeled, as must AI-generated voices of living people making demonstrably false statements unrelated to standard product endorsements.

If, say, a clothing brand used Taylor Swift’s authorized likeness in an ad to state how much she loves their products, the brand wouldn’t need to disclose that it was AI-generated. However, if she made a categorically false statement (like saying she wore that brand’s clothing during her last concert, and that didn’t happen), then the ad would need to flag the use of AI.

Advertisers also need to disclose any image that was fully generated by AI in which human input was limited to refinement, editing or compositing. That is, unless the final product is “obviously non-realistic,” as per the framework, like a cartoonish image or a fictional creature.

On the other hand, advertisers don’t need to divulge AI usage if it’s only for what the framework refers to as “standard production techniques” that don’t alter “authenticity,” like audio enhancement or AI-generated voice-overs where the speaker’s identity isn’t relevant or stated.
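Taken together, the rules described above amount to a rough decision tree: fabricated statements and depictions of the deceased always require a label, standard production techniques never do, and fully AI-generated imagery requires one unless it's obviously non-realistic. A minimal sketch of that logic, with invented function and parameter names (this is an illustration of the article's summary, not the IAB framework's actual specification):

```python
# Hypothetical sketch of the disclosure rules summarized above.
# Not an official IAB implementation; all names are invented for illustration.

def needs_ai_disclosure(
    fully_ai_generated: bool,       # produced via text-to-image generation
    obviously_non_realistic: bool,  # e.g., cartoonish image or fictional creature
    fabricated_statement: bool,     # real person "saying" something demonstrably false
    depicts_deceased: bool,         # AI depiction of a deceased individual
    standard_production_only: bool, # e.g., audio enhancement, generic voice-over
) -> bool:
    """Return True if the ad would need an AI-usage label
    under the rules as described in this article."""
    if depicts_deceased or fabricated_statement:
        return True
    if standard_production_only:
        return False
    if fully_ai_generated and not obviously_non_realistic:
        return True
    return False

# Example: a photorealistic, fully AI-generated product image
print(needs_ai_disclosure(True, False, False, False, False))  # True
```

The hard part, as the article notes, is that inputs like "fabricated statement" are themselves judgment calls that three reviewers could answer three different ways.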

All semantics

But there’s a limit to how much transparency is useful before it starts to lose meaning.

If every AI-generated creative were labeled as such, as both South Korea and the EU have advocated, consumers would get overwhelmed by the number of labels they’re seeing, Giegerich said, and the labels would carry less weight.

Advertisers don’t label every Photoshop filter they use, Giegerich pointed out, nor every time a cereal ad uses glue for the milk. (“They do that!” she said.) But some use cases aren’t so clear cut. “In one document, we’re not going to be able to list every potential possibility of use,” she added, referring to the framework.

Determining the line between acceptable enhancement and actual deception is the “core tension” in developing disclosure standards, said Giegerich.

Currently, the framework notes that human reviewers need to “evaluate AI involvement against IAB materiality thresholds” and record the determination within metadata, but doesn’t address how difficult the evaluation process can be in practice.
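The metadata record the framework describes, where a human reviewer logs an AI-involvement determination against a materiality threshold, might look something like the following. Every field name here is hypothetical, invented for illustration; the article does not quote the IAB's actual schema.

```python
import json

# Hypothetical example of the embedded disclosure metadata a reviewer
# might attach to a creative. The schema is invented for illustration;
# the IAB framework's actual metadata format is not reproduced here.
disclosure_record = {
    "creative_id": "example-creative-001",          # invented identifier
    "ai_involvement": "text-to-image generation",
    "materiality_determination": "disclosure required",
    "reviewed_by": "human",                         # human judgment, per the framework
    "reviewed_at": "2025-01-15T12:00:00Z",
}

# Serialize for embedding alongside the creative
print(json.dumps(disclosure_record, indent=2))
```

Embedding a record like this is what would let publishers and platforms programmatically flag ads that lack the recommended disclosures.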

Plus, she added, “consumer perceptions of AI” are constantly changing, which means transparency guidelines will have to keep up. According to recent IAB data, 34% of Gen Zers see generative AI ads as creative, while 30% see them as inherently “inauthentic.”

Unfortunately for advertisers, there isn’t a monolithic way to earn universal consumer trust.

A fine line

And even seemingly straightforward examples start to blur once you see them in context.

Determining whether an image was fully AI-generated (i.e., “text-to-image generation,” as the framework puts it) sounds clear. But, apparently, background images don’t count, like a brand using an AI-generated nature background for a product shot, since that falls under “background alteration.”

One of the most nuanced aspects of the framework is the requirement to disclose “digital twins of living individuals depicted in specific events, scenarios or locations that never occurred, as distinct from standard product endorsements or brand representation.” (The IAB defines a digital twin as an AI-generated replica of a real person.)

Basically, that means advertisers need to disclose the use of AI if a real person is being depicted in a “fabricated event,” said Giegerich, like someone running a marathon at a sub-four-minute pace.

Sounds simple enough, right?

But consider a political ad where a candidate’s avatar walks around a park picking up trash, even though the actual human being has never done that in reality.

It might pass as believable, but that doesn’t make it true or an accurate representation of this person and their policies.

No matter how precise disclosure standards try to be, there will still be a “necessity of human judgment,” said Giegerich.
