
Free Reach, Fact-Checking And Platform Responsibility


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Joshua Lowcock, chief digital and global brand safety officer at UM Worldwide.

Twitter’s decision to flag one of President Trump’s tweets with a fact-check has set a precedent for how social media companies can and should respond to false information published by accounts with significant audience reach. Twitter CEO Jack Dorsey, to his credit, took responsibility for the decision.

It’s important to call out that Twitter’s decision should not be framed as a freedom-of-speech issue. The rules are, and always have been, different for private companies, particularly those with terms of service (ToS), as XKCD so eloquently points out. To some extent, the issues now playing out for the platforms are self-inflicted, given their historically slow and inconsistent enforcement of ToS across all users, including public figures.

Mark Zuckerberg has taken a different perspective than Twitter. In a blog post and CNBC interview, he argued that “Political speech is one of the most sensitive parts in a democracy, and people should be able to see what politicians say.”

Zuckerberg misses a key point: Twitter didn’t delete the tweet – people can still see what Trump said on Twitter – and, in the words of Twitter’s CEO, it is necessary to “connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.”

There’s an old and difficult-to-attribute saying: “A lie can travel halfway around the world while the truth is putting on its shoes.” It matters more than ever in the era of social media. All platforms need to consider the implications of the unfettered free reach they provide to those in the public eye, regardless of political affiliation or status. It is difficult for the truth to get out when someone shares falsehoods and benefits from free reach and automated algorithmic amplification. The facts must be given the chance to mount a proportional response.

There was a time when journalism served a key and primary role as the vigorous defender and protector of truth and could offer a response to falsehoods. However, the news industry is under siege with both structural and revenue declines, the latter accelerated by the dominance of large tech platforms. The pandemic has made things worse, with newsroom layoffs occurring daily and local newsrooms closing. With an increasing percentage of Americans getting their news from social media, it’s a fair question to ask what platforms are doing to surface news to counter misinformation.

But misinformation has allies that do not value facts and truth or share democratic ideals. There is demonstrable evidence that bad actors are using social platforms for coordinated inauthentic behavior. These information warfare campaigns have the sole intent of promoting disinformation to undermine democracy. In some cases, the bad actors use bots to inflame tensions and stoke division, conditions in which the platforms themselves are alleged to thrive.

All of this means we live in an era where the truth and facts need help.

In response to Twitter’s decision to fact-check a tweet, Trump issued an executive order that specifically targets Section 230 of the 1996 Communications Decency Act. To be fair, Section 230 has been criticized by both sides of the US political spectrum for different reasons. The big risk right now is that the debate on Section 230 will be about political bias rather than the challenges of free reach and misinformation. It’s also likely to lead to a debate about political filter bubbles when the research on this topic is inconclusive.


In a world facing a pandemic and a country facing deep social issues, a debate about Section 230 may seem like both a distraction and the last thing we need. I am horrified as I watch, once again, injustice and acts of violence against people of color. Platforms can be used to shine a light on this and other social issues and to help hold those responsible to account. Now is not the time for platforms to be silent, especially as we simultaneously watch those in authority and with reach use these same platforms to spread disinformation, cast doubt on the democratic process, issue veiled or direct threats of harm and intimidate citizens, all of which can perpetuate injustice.

What’s at stake right now is important and will have far-reaching repercussions for the future of democracy, society, free enterprise and the internet. In a country (and world) that needs unity and healing, a shared understanding of facts and truth is needed.

It’s my hope that, in this pivotal moment in history, the CEOs of major US platforms use this as an opportunity to come together, show leadership and act responsibly by developing a common industry standard and policy for fact-checking and content moderation that can be applied consistently across all platforms. A standard could help ensure that free reach does not become a license to spread falsehoods and cause harm, and that social platforms instead serve as a force for good.

If the leaders of the major platforms don’t come together to develop a transparent cross-platform policy and instead decide to sit this out because it’s “difficult” and “complicated,” we face a real risk of regulation that could threaten the First Amendment. That would undermine a key American value that has helped define how much of the world thinks about the internet.

Opening the door to governments introducing legislation that could turn social platforms into weapons of propaganda, repression and control is something no US CEO or company should ever want.

Follow UM Worldwide (@UMWorldwide) and AdExchanger (@adexchanger) on Twitter.
