
AI Disclosure Requirements: Navigating State Laws And Platform Rules

Richard Eisert, partner and co-chair of the advertising + marketing and privacy, technology + data security practice groups, Davis+Gilbert
Amy Marcus, associate, Davis+Gilbert

As AI quickly advances and legislators struggle to keep up, it can feel like the Wild West when choosing whether and how to disclose the use of AI in advertising and other communications. 

However, even though formal mandates under existing U.S. law requiring advertisers to disclose the use of AI are limited in number and scope, there are restrictions and guidelines that advertisers must follow to remain compliant.

Most notably, some states have passed—and others are actively considering—legislation governing AI disclosures in certain specific circumstances. Social media platforms, including Meta (Facebook and Instagram), TikTok, YouTube and even Reddit, have also set their own rules surrounding the type of AI advertising content that requires disclosures and how those disclosures must be structured.

Limited regulations by the states

While there is currently no AI-specific federal regulatory requirement for disclosure in advertising or media, a handful of states have enacted or are attempting to pass limited legislation that is relevant to the AI discussion. 

California, New Jersey and Utah all require customer service chatbots or similar communication tools powered by generative AI to clearly disclose to users that they are communicating with a bot rather than a real human, particularly when the interaction could incentivize a purchase or influence a vote.

Moreover, many states have enacted legislation pertaining specifically to the use of AI-generated or manipulated content in political ads. California, Florida, Hawaii, Idaho, Indiana, Michigan, Nevada, New York, North Dakota, Oregon, Utah, Washington and Wisconsin have all passed laws in the last two years mandating disclosures when political advertisements or other political communications include AI-generated content, including but not limited to deepfakes.

Other states also require these types of disclosures, but only within a certain amount of time before an election. Several more states now have proposed and pending legislation that pertains to the use of AI in political ads.

Broader proposed state legislation

In June 2025, the New York State Legislature became the first to take AI legislation a step further by passing the Synthetic Performer Disclosure Bill, which requires that advertisers make clear and conspicuous disclosures whenever an advertisement includes AI-generated talent, or a “synthetic performer.” The bill is still awaiting Governor Kathy Hochul’s signature. However, the action taken by the New York State Legislature represents a major additional step in the regulatory landscape of AI labeling and disclosure in advertising.

Other states, including Georgia, have introduced bills requiring disclosures whenever AI-generated content is used in advertising or commerce. Massachusetts has gone even further with its proposed legislation. The Massachusetts Artificial Intelligence Disclosure Act, introduced in February 2025, would require any generative AI system used to create or modify content within the state to automatically include a clear, conspicuous and permanent disclosure identifying the content as AI-generated.


Social media guidelines

While state and federal governments slowly move towards proposing and advancing more comprehensive AI labeling and disclosure requirements, social media platforms are filling the gaps with their own rules. 

Many of these rules are directed specifically at deceptively realistic AI-generated video, audio and imagery. Failing to disclose AI’s role in creating this type of content can result in penalties, including removal of the content from the platform.

Penalties on many of these platforms also increase for repeated offenses. For example, YouTube states that creators who consistently choose not to make the requisite AI disclosures may be suspended from YouTube’s Partner Program.

Advertisers that routinely rely on social media platforms to distribute advertisements must be familiar with the following guidance from the major platforms regarding labeling and disclosures for AI-generated or altered content, or risk incurring penalties:

  • Meta (e.g., Facebook, Instagram) regulates the labeling of digitally generated or altered photorealistic video or realistic-sounding audio. Users must label the content (whether in post, story, or Reel form) through their settings before posting. Meta is currently also rolling out a feature in which any ads generated using Meta’s generative AI creative features will automatically be labeled “AI info.” 
  • TikTok requires posters to disclose the use of AI relating to realistic images, video and audio. The platform further strongly encourages that disclosures be added to fully AI-generated or significantly AI-edited content, beyond the use of filters or minor retouching.
  • Reddit does not have its own set of comprehensive AI disclosure requirements. However, many individual subreddits have strict rules regarding the use of AI-generated content, enforced by active moderators. Moderators can regulate and remove content as needed if it violates the subreddit’s rules.
  • YouTube mandates that creators include a disclosure for any content that is meaningfully altered or synthetically generated and appears realistic. Creators must add this disclosure during the upload process. YouTube reserves the right to apply the disclosure retroactively if the creator fails to include it.

The bottom line

Just because U.S. federal and state regulations have been slow to catch up to the AI boom does not mean that advertisers always have carte blanche to use AI systems or AI-generated content without notifying consumers. 

While legislators debate the correct path forward to ensure that consumers have appropriate transparency regarding the use of AI, some relevant laws have already been enacted and others are on the way. At the same time, advertisers must keep in mind the multitude of still-developing labeling and disclosure requirements imposed on AI-generated content by social media platforms.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Follow Davis+Gilbert and AdExchanger on LinkedIn.

