Russia’s Invasion Of Ukraine Highlights Big Tech’s Struggle To Moderate Content At Scale

Bachevsk, Ukraine, October 2021: Control sign at the entrance to the Ukrainian checkpoint from Russia. Text translation: “Ukraine”

All of the large social platforms have content moderation policies.

No belly fat ads, no ads that discriminate based on race, color or sexual orientation, no ads that include claims debunked by third-party fact-checkers – no ads that exploit crises or controversial political issues.

No graphic content or glorification of violence, no doxxing, no threats, no child sexual exploitation, nothing that promotes terrorism or violent extremism. And on and on.

The policies sound good on paper. But policies are tested in practice.

The ongoing Russian invasion of Ukraine is yet another example that content moderation will never be perfect.

Then again, that’s not a reason to let perfect get in the way of good.

For now, the platforms are mainly being reactive – and, one could argue, moving more slowly and cautiously than the evolving situation on the ground calls for.

For example, Meta and Twitter (on Friday) and YouTube (on Saturday) made moves to prohibit Russian state media outlets, like RT and Sputnik, from running ads or monetizing accounts. But it took the better part of a week for Meta and TikTok to block access to those outlets’ channels in Europe, and only after pressure from European officials. Those blocks don’t apply globally.

As The New York Times put it: “Platforms have turned into major battlefields for a parallel information war” at the same time “their data and services have become vital links in the conflict.”

When it comes to content moderation, the crisis in Ukraine is a decisive flashpoint, but the challenge isn’t new.

We asked media buyers, academics and ad industry executives: Is it possible for the big ad platforms to have all-encompassing content and ad policies that handle the bulk of situations, or are they destined to be roiled by every major news event?


  • Joshua Lowcock, chief digital & global brand safety officer, UM
  • Ruben Schreurs, global chief product officer, Ebiquity
  • Kieley Taylor, global head of partnerships & managing partner, GroupM
  • Chris Vargo, CEO & founder, Socialcontext

Joshua Lowcock, chief digital & global brand safety officer, UM

The major platforms are frequently caught flat-footed because it appears they spend insufficient time planning for worst-case outcomes and are ill-equipped to act rapidly when the moment arrives. Whether this is a leadership failure, groupthink or a lack of diversity in leadership is up for debate.

At the heart of the challenge is that most platforms misappropriate the concept of “free speech.”

Leaders at the major platforms should read Austrian philosopher Karl Popper and his work, “The Open Society and Its Enemies,” to understand the paradox of tolerance: To preserve a tolerant society, we must be intolerant of intolerance. The Russian invasion of Ukraine is a case in point.

Russian leadership has frequently shown it won’t tolerate a free press, open elections or protests – yet platforms still give Russian state-owned propaganda free rein. If platforms took the time to understand Popper, took off their rose-colored glasses and did scenario planning, maybe they’d be better prepared for future challenges.

Ruben Schreurs, global chief product officer, Ebiquity

In moments like these, it’s painfully clear just how much power and impact the big platforms have in this world. While I appreciate the need for nuance, I can’t understand why disinformation-fueled propaganda networks like RT and Sputnik are still allowed to distribute their content through large US platforms.

Sure, “demonetizing” the content by blocking ads is a good step (and one wonders why this happens only now), but such blatantly dishonest and harmful content should be blocked altogether – globally, not just in the EU.

We will continue supporting and collaborating with organizations like the Global Disinformation Index, the Check My Ads Institute and others to make sure that we, together with our clients and partners, can help deliver structural change. The goal is not just to support Ukraine during the current Russian invasion, but to ensure ad-funded media and platforms are structurally unavailable to reprehensible regimes and organizations.

Kieley Taylor, global head of partnerships & managing partner, GroupM

Given the access these platforms provide for user-generated and user-uploaded content, there will always be a need to actively monitor and moderate content with an “all-hands-on-deck” approach in moments of acute crisis. That said, progress has been made by the platforms, both individually and in aggregate.

Individually, platforms have taken action to remove coordinated inauthentic activity as well as forums, groups and users that don’t meet their community standards.

In aggregate, the Global Internet Forum to Counter Terrorism is one example of an entity that shares intelligence and hashes terror-related content to expedite removal. The Global Alliance for Responsible Media (GARM), created by the World Federation of Advertisers, is another example.
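To make the hash-sharing mechanism concrete, here is a minimal sketch of the general idea. It is illustrative only, not GIFCT’s actual system: production databases use perceptual hashes (such as Meta’s open-source PDQ) so that near-duplicate images and videos still match, whereas this sketch uses exact SHA-256 matching, and the function names and shared set are hypothetical.

    import hashlib

    # Hypothetical shared database of digests contributed by member platforms.
    shared_hash_db: set[str] = set()

    def fingerprint(content: bytes) -> str:
        # In production this would be a perceptual hash; SHA-256 keeps the
        # example self-contained but only catches exact re-uploads.
        return hashlib.sha256(content).hexdigest()

    def report_content(content: bytes) -> None:
        # One member platform flags content; its hash joins the shared set.
        shared_hash_db.add(fingerprint(content))

    def should_block(upload: bytes) -> bool:
        # Any member platform can screen new uploads against the shared set
        # without ever exchanging the underlying content itself.
        return fingerprint(upload) in shared_hash_db

    # Platform A flags a known violating video ...
    report_content(b"<bytes of flagged video>")
    # ... and Platform B can now catch identical re-uploads.
    assert should_block(b"<bytes of flagged video>")
    assert not should_block(b"<bytes of some unrelated video>")

The design choice that matters here is that only fingerprints are exchanged: platforms can act on one another’s removals without redistributing the violating content itself.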

GARM has helped the industry create and adhere to consistent definitions – and a methodology to measure harm – across respective platforms. You can’t manage what you do not measure. With deeper focus through ongoing community standard enforcement reports, playbooks have been developed to lessen the spread of egregious content, including removing it from proactive recommendations and searches, bolstering native language interpretations and relying on external fact-checkers.

There will be more lessons to learn from each crisis, but the infrastructure to take swifter, more decisive action is in place and being refined. How much work remains depends on the scale of each platform and the community of users it hosts.

Chris Vargo, CEO & founder, Socialcontext

Content moderation, whether it’s social media posts, news or ads, has always been a whack-a-mole problem. However, the difference between social media platforms and ad platforms is in codifying, operationalizing and contextualizing definitions for what is allowed on their platforms.

Twitter, for instance, has bolstered its health and safety teams, and, as a result, we have an expanded and clearer set of definitions for the behaviors that are not allowed on the platform. Twitter and Facebook both regularly report on the infractions they find, which further builds an understanding of what those platforms will not tolerate. Today, it was Facebook saying it would not enable astroturfing and misinformation in Ukraine by Russia and its allies.

But ad tech vendors themselves haven’t been pushed enough to come up with their own definitions, so they fall back on GARM’s framework, a set of broad content categories with little to no definition. GARM does not act as a watchdog. It does not report on newsworthy infractions. And ad tech vendors feel no obligation to highlight the GARM-related infractions they find.

It’s possible to build an ad tech ecosystem with universal content policies, but this would require ad tech platforms to communicate with the public, to define concretely what content is allowed on their platforms – and to report real examples of the infractions they find.

Answers have been lightly edited and condensed.
