
Marketers Beware: AI Will Accelerate Online Misinformation This Election Cycle

Richard Raddon, Co-Founder & Co-CEO, Zefr

With the 2024 presidential campaign already underway, the year ahead is shaping up to be unlike any previous election cycle. 

The rapid growth, nearly ubiquitous use, and public interest around generative AI – alongside a shifting social media landscape and divisive political issues – will present new challenges for voters, platforms, media and marketers. In particular, the use of generative AI to create and disseminate mis- and disinformation is likely to have profound impacts. 

For brands, advertising amidst these challenges comes with renewed concern around platform trust and brand safety. 

How and where will AI-generated misinformation thrive? What, if anything, are the tech platforms and governments doing to help curb AI-powered misinformation? And what are the political topics likely to drive misinformation? 

Generative AI and politics

The release of OpenAI’s ChatGPT has opened the floodgates for new AI tools and solutions. With options available for text, image, audio and even video, the results from AI tools are increasingly sophisticated, the use cases endless and the access limitless. 

Already, Zefr has identified over 1.32 billion views of political AI content across social platforms, with politically focused misinformation content increasing 129.6% from Q1 2023 to Q2 2023.
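To put that growth figure in concrete terms, the percentage is a simple quarter-over-quarter change. The short Python sketch below illustrates the calculation; the view counts are hypothetical placeholders, not Zefr's actual data.

```python
# Minimal sketch of a quarter-over-quarter percent change.
# The view counts below are hypothetical placeholders, not Zefr's data.

def pct_change(previous: float, current: float) -> float:
    """Percent change from `previous` to `current`."""
    return (current - previous) / previous * 100

q1_views = 1_000_000   # hypothetical Q1 2023 political misinformation views
q2_views = 2_296_000   # hypothetical Q2 2023 political misinformation views

print(f"Quarter-over-quarter change: {pct_change(q1_views, q2_views):.1f}%")  # 129.6%
```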

Considering how quickly and widely content can be amplified across popular UGC-dominant social platforms, anyone in a political capacity has the ability to create and spread AI-powered content to sow confusion or distraction, build uncertainty, or persuade and manipulate. 

While AI has upside potential for automating processes like voter outreach and fundraising, making them faster and cheaper to scale, the potential for campaigns, political organizations or bad actors to create convincing deepfake images, audio, video or websites as competitive tactics is deeply troubling. Left unchecked, it will only become harder for people to discern fact from fiction.

We’ve already seen instances of these information wars powered by AI around the world – from deepfake videos of Ukrainian President Volodymyr Zelensky surrendering to Russia to pro-China bots sharing AI-generated videos of fake Chinese news anchors and outlets promoting falsehoods about the government. 

Stateside, Florida Gov. Ron DeSantis’s presidential campaign earlier this year released three realistic but false AI-generated images of former President Trump with Dr. Anthony Fauci in a campaign video. In addition, fringe content increased substantially from June to July 2023, growing to represent 66% of overall misinformation content. This content type delves into more obscure and ominous subjects, such as Illuminati conspiracies and other pseudoscientific beliefs. All of it can also serve as training input for models that generate new fake content. 


The rollback of regulations

There is also growing consensus across the industry that platforms and governments aren’t doing enough to safeguard against AI-generated misinformation. 

Claims of fraud in the 2020 presidential election – despite broad, substantiated debunking – remain online for voter consumption. Political accounts known to disseminate misinformation have been allowed back onto social media as well.

Government regulations have also not yet kept pace with the speed of AI development and its implications for misinformation in politics. There are currently no federal requirements for disclosing when political campaign content is AI-generated, though there is proposed legislation in Congress and by some states.

Without the guidance of disclaimers, viewers are left to discern for themselves what is real and what is fake – and that is only getting harder. A recent University of Cambridge study found that Gen Z and millennial Americans were more likely to misjudge whether news content was real or fake, and that people who spend more time on social media were less likely to correctly distinguish AI-generated fake content from authentic news stories.

2024 news narratives to watch

The most recent YouGov poll of top voter issues ranked inflation first, followed by health care, jobs and the economy, climate and the environment, and abortion and reproductive rights as the five most important categories voters are following. 

An analysis of viewership trends for these political topics across social media found that they reflect growing voter engagement, particularly among younger generations, who increasingly rely on social platforms as their primary sources of news and information. 

For brands and advertisers, coverage of these topics will continue to show up across media outlets and social platforms in both authentic and false news contexts. There is little debate that the current acceleration in AI will impact the 2024 elections in both anticipated and unexpected ways. 

Foremost for advertisers, protecting audiences through verified content adjacencies and ensuring responsible digital practices against the harms of misleading election-related news content will be more important than ever. 
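To make "verified content adjacencies" concrete, here is a minimal, hypothetical sketch of a pre-bid check that screens candidate placements against misinformation labels. The label taxonomy, data structures and confidence threshold are illustrative assumptions, not any verification provider's actual API.

```python
# Hypothetical pre-bid brand-safety check against misinformation labels.
# Labels, fields and threshold are illustrative assumptions only.

from dataclasses import dataclass

UNSAFE_LABELS = {"election_misinformation", "ai_generated_deepfake", "fringe_conspiracy"}

@dataclass
class Placement:
    placement_id: str
    content_labels: set       # labels from a content-classification provider
    label_confidence: float   # classifier confidence for the riskiest label

def is_brand_safe(placement: Placement, threshold: float = 0.7) -> bool:
    """Skip placements confidently labeled with unsafe election-related content."""
    flagged = placement.content_labels & UNSAFE_LABELS
    return not (flagged and placement.label_confidence >= threshold)

candidates = [
    Placement("vid-001", {"politics", "news"}, 0.90),
    Placement("vid-002", {"election_misinformation"}, 0.85),
]
print([p.placement_id for p in candidates if is_brand_safe(p)])  # ['vid-001']
```

In practice, this kind of screening would rely on a verification partner's classifications rather than a hand-maintained label list, but the principle is the same: check the content context before the impression runs.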

With so much at stake, there must be a coordinated, cross-industry effort, alongside government regulation, on how AI-generated content is moderated. This is vital to safeguarding our democratic process and election integrity.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Follow Zefr and AdExchanger on LinkedIn.
