Zuckerberg Gets Real About Fake News At Facebook’s Annual Shareholders Meeting

Investors are putting Facebook’s feet to the fire about fake news.

At the company’s annual shareholder meeting on Thursday, investors questioned CEO Mark Zuckerberg about how Facebook is combating the spread of false news on its platform.

Advertisers have become increasingly sensitive to brand safety issues since the YouTube/Google Display Network debacle kicked off in earnest in March, and Facebook’s role in the dissemination of fake news is a closely related issue.

As sites replete with malicious content and crappy ads proliferate, their creators turn to Facebook to generate clicks. Facebook responded to the problem in early May with an algorithm tweak that de-prioritized links to low-quality sites in the newsfeed.

Facebook has a responsibility to keep its platform safe for users and brands, and it’s been called out, most recently by Hillary Clinton on Wednesday at the Code conference, for unduly influencing the last presidential election and the public discourse in general by spreading false news reports.

But most of the people who spread hoaxes and false news aren’t doing it for ideological reasons, Zuckerberg told investors. They’re just spammers trying to make money with clickbait.

“They know that what they’re saying isn’t true. They’re just trying to come up with the most outrageous thing they can … [to] get you to click on it because it sounds crazy … and [then] they show you ads on the landing page,” he said, noting that Facebook is focusing on “disrupting the economics for these folks.”

By applying measures that reduce the sharing of fake news, “we can make sure that these kinds of spammers aren’t using our ad systems and monetization tools to make money,” he said.

Facebook has also said that it’s planning to add 3,000 new people to its community ops and content safety teams this year, in the ongoing fight against harmful and violent content.

While that will help in the short term, the sheer volume of content on Facebook means no number of human reviewers could realistically stem the flow and root out the bad stuff.

The solution, Zuckerberg declared, is technology.

“There are tens of billions of messages, comments and pieces of content that get shared through our service every day,” Zuckerberg said. “Long-term, the only way that we will get to the quality level we all want is not by adding thousands or tens of thousands more people to help review – but by building AI and technical systems that can look at this stuff more proactively.”

That’s fine, but the technology isn’t there yet, and with 1.9 billion MAUs – about one-quarter of all humans – moderation is a monster.

As The Guardian’s recent report on Facebook’s leaked content moderation policies revealed, the platform is awash with questionable content and creating rules to police it all is an ethical quagmire.

For example, consider the gradations of violent speech. Under the leaked guidelines, “Kick a person with red hair” or “Let’s beat up fat kids” is permissible because the hostile language is not directed at a specific person, whereas Facebook’s policies would flag “Someone shoot Trump” or “#stab and become the fear of the Zionist” as credible threats of violence.

“We acknowledge that we have more to do,” said Elliot Schrage, VP of global communications, marketing and public policy at Facebook. “The product we deliver and the services we provide have become increasingly popular, and as a result of that, they get more and more use – and frankly speaking, we have not been able to keep pace as much as we thought we would be able to do.”

But Schrage doggedly maintained Facebook’s status as a neutral provider of technology.

Although Facebook does render decisions about which content is and isn’t appropriate – decisions most people would call editorial – Facebook simply calls it adhering to community standards and guidelines.

“Our focus is not to be in the business of being an editor in the sense of determining what people should see,” Schrage said. “It’s to help people share what they want to share in an environment that is safe, an environment that is secure, but that also lets people express their opinions.”
