"Data-Driven Thinking" is written by members of the media community and contains fresh ideas on the digital revolution in media.
Today's column is written by Allison Schiff, senior editor at AdExchanger. It's part of a series of perspectives from AdExchanger's editorial team.
Mark Zuckerberg sees himself as a defender of free speech. His detractors consider him feckless at best, unscrupulous at worst.
Both can be true. By standing his ground, Zuckerberg is shirking his larger responsibility to ban or at least label speech that promotes violence.
As protests roil America in the wake of George Floyd’s death, Facebook and Twitter have taken very different approaches to handling moderation on their respective platforms.
Twitter decided to do something it’s never done before. It placed warning labels on tweets from public figures, including President Trump, that violate its policies.
On Friday, as the crisis deepened in Minnesota, Trump posted an inflammatory tweet declaring that “when the looting starts, the shooting starts.” Twitter flagged it almost immediately for glorifying violence.
An identical message that was posted to Facebook on the same day remains untouched.
During an internal meeting with employees on Tuesday to explain the company’s decision, Zuckerberg claimed to have separated his personal opinion on the matter from Facebook’s policies, according to audio of the videoconference obtained by several media outlets.
“This decision has incurred a massive practical cost for the company to do what we think is the right step,” he reportedly told employees on the call.
But Facebook’s policies clearly forbid hate speech and content that could lead to imminent violence or physical harm.
Civil rights activists are flummoxed by Facebook’s inflexible stance. So are many Facebook employees. Several hundred staged a virtual walkout earlier this week, and a handful have even resigned.
There’s an irony in Zuckerberg’s stalwart attitude toward what he considers free speech: much of Facebook’s own workforce doesn’t feel free to express its opinions.
Fifty-six percent of Facebook employees say they aren’t comfortable talking about George Floyd or the Black Lives Matter movement with their colleagues, according to Blind, a company that conducts anonymous employee surveys.
At Facebook’s annual shareholders meeting last week – which was two days after Floyd was killed and one day after unrest first began to stir in Minneapolis – Zuckerberg was handy with the usual talking points on free speech.
He talked about the nuance of where to draw the line and how giving people a voice is good for society in the long term, even if you don’t like what they have to say. He trotted out the oft-cited “you still can’t yell ‘fire’ in a crowded room” principle.
But the line that Facebook draws isn’t consistent.
Facebook was quick to tamp down on misinformation related to COVID-19. In April alone, the company says, it put warning labels on 50 million pieces of misleading coronavirus content.
The fact is, if Mark Zuckerberg wanted to crack down on, or at least curb, incendiary speech on Facebook, he could.
Twitter didn’t delete Trump’s posts; it added context. Nothing but obstinacy stops Facebook from changing its policy and putting warning labels in front of problematic posts.
Facebook’s users and its advertisers are looking to Zuckerberg for moral leadership. They want to believe in his commitment not only to “connection” and “community,” but also to the well-being of the people on his platform.
Follow Allison Schiff (@OSchiffey) and AdExchanger (@adexchanger) on Twitter.