Brands don’t want their ads turning up next to content spouting conspiracy theories, propaganda and misinformation, but programmatic advertising makes it easy for them to land there anyway.
An onslaught of AI-generated misinformation is only worsening the situation ahead of this year’s elections in the US and abroad.
To equip advertisers with accurate information about where they should direct their ad spend, NewsGuard, an organization that rates the trustworthiness of news outlets and licenses an inclusion list of trusted publishers, launched its 2024 election misinformation tracking center on Thursday.
NewsGuard’s team of journalists monitors misinformation online and scores news sources, including websites, social media accounts, podcasts, and CTV and linear TV channels, on their truthfulness and reliability. With the help of AI tools, the team is monitoring 963 websites and 793 social media accounts and video channels that have published false or misleading election statements.
The organization also catalogs local news impersonator sites, a list that currently includes 1,162 URLs. These sites publish “partisan, misleading content,” NewsGuard General Manager Matt Skibinski said. Some post on behalf of a political campaign or group, while others post “just for ad dollars and clicks,” he said. And 725 of these sites make AI-generated false claims.
Expose the impostors
Programmatic ad tech struggles to tell real and fake news publishers apart. Due to the confusion, the impersonator sites “often have programmatic ads,” Skibinski said. Conversely, a true local news outlet might be “choked of ad revenue entirely.”
Local news sites with political ties tend to have legitimate-sounding names and the look and feel of a news site. Take the Pennsylvania Independent and the Philly Leader, Skibinski said. The Independent is affiliated with a Democratic political action committee, while the Leader is funded by Metric Media, a conservative publisher with various undisclosed political and corporate ties. Sites like these have stepped into the void left by gutted local newsrooms and are on track to surpass the number of actual daily newspapers this year.
There are also AI-generated news sites produced with minimal or no human oversight. Since generative AI tools can churn out error-ridden or entirely fabricated content in seconds, they can be super spreaders of misinformation.
To illustrate the problem, Skibinski entered a prompt into ChatGPT asking the chatbot to write a news article about a local election for a made-up local newspaper, the Madison Times Index. ChatGPT churned out a story about how election certification was on pause due to the discovery of major irregularities. It included invented quotes from county election officials, specific ballot counts and an editor’s contact information at the bottom of the piece.
Scores of these sorts of articles are already out there, waiting for advertisers.
NewsGuard has seen “false narratives” in the primaries about voter counts, who’s eligible to be on state ballots, election fraud and candidates who are discouraging people from voting, Skibinski said. The organization focuses on “election integrity claims,” he said, such as voter eligibility, when and where voting can take place and how ballots are counted.
Block the blocklist
Advertisers craving brand safety might be tempted to throw up their hands and give up on advertising against news altogether. Skibinski doesn’t think blocking all news is the answer. If anything, blocking keywords like Biden, Trump, Congress and Senate can inadvertently lead brands to violate their own brand safety standards, since spend shifts away from reputable reporting and toward lower-quality inventory that evades those filters. High-quality news outlets exist across the political spectrum, and brands would be better off advertising against these publishers’ inventory.
One Fortune 500 client that previously blocked news content used NewsGuard’s data to add more than 1,200 highly trusted news sites to its inclusion list via a PMP. Because the brand was bidding on a “broader set of inventory,” it increased its campaign reach, lowered its CPMs and saw higher click-through rates, Skibinski said.
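To make that tradeoff concrete, here is a minimal sketch, assuming hypothetical keywords, domains and page text rather than NewsGuard’s actual lists or scoring methodology, of how a naive keyword blocklist and a domain inclusion list treat the same impressions.

```python
# Hypothetical sketch: keyword blocklist vs. domain inclusion list.
# The keywords, domains and page text below are illustrative only,
# not NewsGuard's actual lists or scoring methodology.

BLOCKED_KEYWORDS = {"biden", "trump", "congress", "senate"}

# An inclusion-list approach keys off the publisher's domain instead,
# e.g. a licensed list of vetted news sites bought through a PMP.
TRUSTED_DOMAINS = {"example-city-tribune.com", "example-national-daily.com"}


def blocked_by_keywords(page_text: str) -> bool:
    """Naive keyword blocking: drop any page mentioning a blocked term."""
    words = set(page_text.lower().split())
    return bool(words & BLOCKED_KEYWORDS)


def allowed_by_inclusion_list(domain: str) -> bool:
    """Inclusion-list approach: bid only on vetted publisher domains."""
    return domain in TRUSTED_DOMAINS


impressions = [
    {"domain": "example-city-tribune.com",
     "text": "Senate race coverage from the statehouse bureau"},
    {"domain": "example-impersonator-news.com",
     "text": "Shocking miracle cure doctors won't tell you about"},
]

for imp in impressions:
    keyword_verdict = "blocked" if blocked_by_keywords(imp["text"]) else "eligible"
    allowlist_verdict = "eligible" if allowed_by_inclusion_list(imp["domain"]) else "skipped"
    print(imp["domain"], "| keyword blocklist:", keyword_verdict,
          "| inclusion list:", allowlist_verdict)
```

Run against these toy impressions, the keyword filter drops the legitimate Senate coverage but waves through the keyword-free clickbait, while the inclusion list does the reverse, which is the mechanism behind Skibinski’s point.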
Besides, excluding the news doesn’t protect advertisers from brand safety and suitability risks because of the sheer scale of programmatic advertising. There are too many impressions to account for, and some of them end up in disreputable places, as last year’s push to rid the industry of made-for-advertising content made clear.
A Comscore study that drew from NewsGuard’s data found that $2.6 billion in ad revenue is routed to misinformation publishers annually. And new fake news sources pop up all the time.
For example, NewsGuard has seen 742 big brands advertising alongside misinformation-filled content about the Israel-Hamas war. The company found hundreds of brand ads – including ads from an insurance brand, a streaming service, an online shopping platform, two retail store brands and a hotel chain – that appeared adjacent to content speculating that both 9/11 and the October 7 Hamas attack on Israel were inside jobs, Skibinski said.
This type of failure does not bode well for the current election season.
But through “proactive monitoring and detection of emerging risks,” Skibinski said, brands can do their best to protect themselves this season – without pulling out of news entirely.