The AI Chat Ad Frontier: What LLMs Change About Brand Safety And Control

Kevin Gentzel, Global President, Channel Factory

ChatGPT ads are here, and brands are testing the waters. This marks the true start of LLMs as a new advertising channel, reshaping the customer journey and introducing a new surface for brand safety and suitability. It’s a real inflection point for advertising. 

Impressions, searches and even transactions that once traveled through websites will increasingly start and end inside LLMs’ context windows (i.e., the conversational surface between a prompt and an answer). As that behavior grows, brand suitability inside LLMs will become as important for advertisers as it was when social platforms emerged as dominant media channels.

But LLM environments differ from social platforms or any channel that preceded them. They introduce new dimensions to safety and suitability that the industry has not previously encountered. Understanding these differences is the starting point for brands and agencies testing ChatGPT ads.

1. The advertising context is conversational and dynamic

In most digital environments, the content surrounding an ad exists before the impression is served. A social video, a short clip or a publisher article can be analyzed and categorized in advance.

LLM responses are generated in real time. The surrounding context changes dynamically based on a user’s prompt and the model’s interpretation of available information. The conversation itself can evolve significantly as the user continues interacting with the system, creating an exchange that can, in principle, extend indefinitely, even though any given model’s context window is finite. That matters for advertising; where there is sustained, expanding attention at scale, new formats, smarter targeting and dedicated ad slots will follow.

Suitability decisions, therefore, require continuous evaluation of conversational context rather than a single pre-impression classification. Real-time semantic analysis becomes essential for understanding the environment in which an ad may appear.

2. The answer carries perceived authority

Another important difference is how users perceive the LLM interface itself. The system functions more like a trusted advisor presenting a coherent response to a question. That’s not really how LLM technology works. But the user’s perception that they’re speaking to an all-knowing AI imbues the answers with a level of perceived authority. 

A response from ChatGPT carries persuasive power that no passively consumed media environment can match. It also raises the stakes on the model’s inevitable errors and hallucinations.

When a model produces an inaccurate or hallucinated answer, the risk for advertisers is not simply adjacency to questionable content. The misinformation is generated within the same interface that hosts the advertisement. In those cases, a nearby brand may seem to endorse or validate the response. In an expansive and ongoing conversation, that exposure compounds.

This means suitability systems will need to incorporate signals that do not exist in traditional media environments.

Topic sensitivity is one. Conversations involving health conditions, financial advice or legal guidance carry higher reputational exposure and may require stricter suitability thresholds.

Model confidence is another. When responses are generated with lower certainty or rely on probabilistic synthesis across multiple sources, advertisers may choose to suppress advertising altogether rather than risk adjacency to inaccurate information.

Advertisers may also need to establish category-level exclusions for domains where hallucination risk carries regulatory or reputational consequences. Pharmaceutical, financial services and insurance brands are obvious examples.

Context, confidence and sensitivity together determine whether an advertising placement is appropriate.
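As a thought experiment, the interplay of those three signals can be sketched as a simple policy gate. Everything here is illustrative: the thresholds, topic labels, class names and the assumption that a platform would expose a per-response confidence score are hypothetical, not any platform's actual API.

```python
# Hypothetical sketch of a suitability gate combining the three signals
# described above: topic sensitivity, model confidence and category-level
# exclusions. All names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field

# Topics the article flags as carrying higher reputational exposure.
SENSITIVE_TOPICS = {"health", "finance", "legal"}

@dataclass
class SuitabilityPolicy:
    min_confidence: float = 0.75           # baseline confidence floor
    sensitive_min_confidence: float = 0.9  # stricter floor for sensitive topics
    excluded_topics: set = field(default_factory=set)

    def allow_ad(self, topic: str, model_confidence: float) -> bool:
        """Return True if an ad may appear alongside this response."""
        if topic in self.excluded_topics:
            return False  # category-level exclusion wins outright
        floor = (self.sensitive_min_confidence
                 if topic in SENSITIVE_TOPICS
                 else self.min_confidence)
        # Suppress advertising when the response was generated with
        # lower certainty than the applicable floor.
        return model_confidence >= floor

# Example: a pharmaceutical brand excluding health conversations entirely.
pharma_policy = SuitabilityPolicy(excluded_topics={"health"})
print(pharma_policy.allow_ad("travel", 0.8))   # neutral topic, above floor -> True
print(pharma_policy.allow_ad("finance", 0.8))  # sensitive, below stricter floor -> False
print(pharma_policy.allow_ad("health", 0.99))  # excluded category -> False
```

In practice, real systems would evaluate this gate continuously as the conversation evolves, rather than once per impression, per the earlier point about dynamic context.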

3. The provenance of the answer is not always clear

A third difference lies in how LLM responses are constructed.

Traditional content has identifiable authorship and clear provenance. A news article, video or podcast originates from a known creator or publisher, and advertisers can evaluate that environment accordingly.

LLM responses are different. They are synthesized probabilistically from multiple sources, not all of which are transparent. A model may draw from publisher content, structured data sets and other inputs before generating a rewritten answer. 

Even when reputable sources are involved, the final output remains the model’s interpretation of those materials. That synthesis can subtly change the context, and important nuance can be lost. Inaccuracies can appear even when the underlying material is credible. Therefore, provenance is another signal to consider when evaluating suitability.

Operationally, advertisers will look for greater transparency into how responses are constructed and where information originates. That includes visibility into citation practices, the sources used to generate responses and the categories of content feeding the model’s outputs.

Platforms will also need to provide clearer safeguards around how advertising is separated from AI responses and how advertisers can control adjacency to sensitive or uncertain information environments. Without that visibility, it becomes difficult for brands to determine whether the context surrounding their advertising meets their suitability standards.

Shared responsibility

Over time, the complexity of this new suitability surface is likely to push LLM platforms toward the same pattern seen across other digital channels: walled gardens opening selectively to independent partners to provide an extra measure of third-party accountability around safety and suitability. Advertisers have come to expect that model, thanks to the precedent set by the major platforms.

If conversational AI is to become a durable and endemic media channel, platforms will need safeguards that extend beyond the model itself, including hallucination mitigation, clear separation between AI responses and paid messages and transparency into the contexts where ads appear. The context window does not just create new advertising opportunities; it creates a new surface area that existing verification infrastructure was never built to cover.

Until those controls exist, whether natively within the platforms or through partnerships with independent verification providers, the responsibility rests with advertisers themselves. That starts with recognizing that the suitability frameworks built for static media environments will not automatically translate to LLM environments, where attention and dollars are likely to flow.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

