
IAB’s New AI Regulations Give Advertisers A Starting Point – But Plenty Of Questions Remain


It’s not exactly a secret that many advertisers are using generative AI in their marketing, from producing copy to editing images.

What isn’t always as obvious is exactly when those tools are being used, and when advertisers ought to make AI usage clear to their audiences.

Last month, the IAB launched a new framework to standardize when AI in ads should be disclosed. The standards aim to establish consumer trust without creating “label fatigue,” said Caroline Giegerich, VP of AI at the IAB.

In many ways, the framework is great news for an industry with no unified standard for when to mandate AI disclosure. But it also brings up a lot of new questions that are proving difficult to answer.

Setting the standard

One of the thorniest of those questions is just how visible AI usage should be to consumers.

The IAB cautions against “universal labeling,” said Giegerich, because it could mislead consumers about how much human prompting and editing was involved in an ad labeled as AI. Using some degree of AI-generated creative in marketing, she said, is “almost … a necessity” at this point.

Giegerich described the current US regulatory landscape for AI disclosure in advertising as “a free-for-all of ‘maybe we label it, maybe we don’t.’” Massachusetts, for example, has very conservative disclosure guidelines, she said, whereas other states have none at all.

To stay on the safe side, she said, advertisers often default to “the most restrictive” regulation. But an overarching view on AI, like the one laid out in the framework, would eliminate the challenge brands currently face of keeping up with dozens of different standards, she noted.

Of course, the IAB isn’t a regulatory body, and can’t enforce the adoption of standards. Its rulebook is designed to complement, rather than replace, legal compliance.

But the framework urges advertisers to adopt its standards to “avoid fragmentation and establish trust.” It also suggests that publishers and platforms flag ads for review and remediation if they don’t contain the recommended disclosures (including embedded metadata), giving advertisers another, less altruistic, reason to comply.


Agree to disagree

At its core, the IAB’s AI disclosure rule is straightforward.

Any ad that could plausibly “deceive” someone – like, for example, using an AI-generated person in fabricated before-and-after shots for wrinkle cream – must be labeled.

But what if someone uses AI to generate an image of a celebrity drinking a cup of Starbucks coffee, when the actual human despises Starbucks? Is that considered deception?

That depends on the opinion of a human reviewer, and the answer could vary: Three different people could have three different perspectives, said Giegerich.

Some of the guidelines are pretty clear, like the requirement to flag any potentially misleading content. For instance, AI depictions of deceased individuals must be labeled, as must AI-generated voices of living people making demonstrably false statements unrelated to standard product endorsements.

If, say, a clothing brand used Taylor Swift’s authorized likeness in an ad to state how much she loves their products, the brand wouldn’t need to disclose that it was AI-generated. However, if she made a categorically false statement (like saying she wore that brand’s clothing during her last concert, and that didn’t happen), then the ad would need to flag the use of AI.

Advertisers also need to disclose any image that was fully generated by AI in which human input was limited to refinement, editing or compositing. That is, unless the final product is “obviously non-realistic,” as per the framework, like a cartoonish image or a fictional creature.

On the other hand, advertisers don’t need to divulge AI usage if it’s only for what the framework refers to as “standard production techniques” that don’t alter “authenticity,” like audio enhancement or AI-generated voice-overs where the speaker’s identity isn’t relevant or stated.

All semantics

But there’s a limit to how much transparency is useful before it starts to lose meaning.

If every AI-generated creative were labeled as such, as both South Korea and the EU have advocated, consumers would be overwhelmed by the number of labels they see, Giegerich said, and the labels would carry less weight.

Advertisers don’t label every Photoshop filter they use, Giegerich pointed out, nor every time a cereal ad uses glue for the milk. (“They do that!” she said.) But some use cases aren’t so clear cut. “In one document, we’re not going to be able to list every potential possibility of use,” she added, referring to the framework.

Determining the line between acceptable enhancement and actual deception is the “core tension” in developing disclosure standards, said Giegerich.

Currently, the framework notes that human reviewers need to “evaluate AI involvement against IAB materiality thresholds” and record the determination within metadata, but doesn’t address how difficult the evaluation process can be in practice.
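That reviewer-then-record workflow can be pictured with a small sketch. To be clear, this is purely illustrative: the field names and the decision rule below are hypothetical stand-ins, not the IAB’s actual schema or materiality thresholds, which the framework leaves to human judgment.

```python
# Illustrative sketch only: field names and rules are hypothetical,
# not the IAB's actual metadata schema or materiality thresholds.
from dataclasses import dataclass, asdict


@dataclass
class AIDisclosureRecord:
    ad_id: str
    ai_involvement: str   # e.g., "fully_generated", "background_alteration"
    material: bool        # reviewer's judgment against materiality thresholds
    label_required: bool  # whether a consumer-facing label is needed
    reviewer: str         # who made the call, since judgments can vary


def review(ad_id: str, ai_involvement: str, reviewer: str) -> AIDisclosureRecord:
    # Hypothetical rule of thumb distilled from the framework's examples:
    # fully generated realistic imagery and digital twins in fabricated
    # events are material; "standard production techniques" such as
    # background alteration are not.
    material = ai_involvement in {"fully_generated", "digital_twin_fabricated_event"}
    return AIDisclosureRecord(
        ad_id, ai_involvement, material, label_required=material, reviewer=reviewer
    )


# The record would then travel with the ad as embedded metadata.
record = review("ad-123", "background_alteration", "reviewer-a")
print(asdict(record))
```

Even in a toy model like this, the hard part is the one line computing `material` — which is exactly the evaluation step the framework leaves to fallible human reviewers.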

Plus, she added, “consumer perceptions of AI” are constantly changing, which means transparency guidelines will have to keep up. According to recent IAB data, 34% of Gen Zers see generative AI ads as creative, while 30% see them as inherently “inauthentic.”

Unfortunately for advertisers, there isn’t a monolithic way to earn universal consumer trust.

A fine line

And even seemingly straightforward examples start to blur once you see them in context.

Determining whether an image was fully AI-generated (i.e., “text-to-image generation,” as the framework puts it) sounds clear. But, apparently, background images don’t count, like a brand using an AI-generated nature background for a product shot, since that falls under “background alteration.”

One of the most nuanced aspects of the framework is the requirement to disclose “digital twins of living individuals depicted in specific events, scenarios or locations that never occurred, as distinct from standard product endorsements or brand representation.” (The IAB defines a digital twin as an AI-generated replica of a real person.)

Basically, that means advertisers need to disclose the use of AI if a real person is being depicted in a “fabricated event,” said Giegerich, like someone running a marathon at a sub-four-minute pace.

Sounds simple enough, right?

But consider a political ad where a candidate’s avatar walks around a park picking up trash, even though the actual human being has never done that in reality.

It might pass as believable, but that doesn’t make it true or an accurate representation of this person and their policies.

No matter how precise disclosure standards try to be, there will still be a “necessity of human judgment,” said Giegerich.
