Zuckerberg Gets Real About Fake News At Facebook’s Annual Shareholders Meeting

Investors are putting Facebook’s feet to the fire about fake news.

At the company’s annual shareholders’ meeting on Thursday, investors questioned CEO Mark Zuckerberg about how Facebook is combating the spread of false news on its platform.

Advertisers have become increasingly sensitive to brand safety issues since the YouTube/Google Display Network debacle kicked off in earnest in March, and Facebook’s role in the dissemination of fake news is a closely related issue.

As sites replete with malicious content and crappy ads proliferate, their creators turn to Facebook to generate clicks. Facebook responded to the problem in early May with an algorithm tweak that de-prioritized links to low-quality sites in the newsfeed.

Facebook has a responsibility to keep its platform safe for users and brands, and it’s been called out, most recently by Hillary Clinton on Wednesday at the Code conference, for unduly influencing the last presidential election and the public discourse in general by spreading false news reports.

But most of the people who spread hoaxes and false news aren’t doing it for ideological reasons, Zuckerberg told investors. They’re just spammers trying to make money with clickbait.

“They know that what they’re saying isn’t true. They’re just trying to come up with the most outrageous thing they can … [to] get you to click on it because it sounds crazy … and [then] they show you ads on the landing page,” he said, noting that Facebook is focusing on “disrupting the economics for these folks.”

By applying measures that reduce the sharing of fake news, “we can make sure that these kinds of spammers aren’t using our ad systems and monetization tools to make money,” he said.

Facebook has also said that it’s planning to add 3,000 new people to its community ops and content safety teams this year, in the ongoing fight against harmful and violent content.

While that will help in the short term, the volume of content on Facebook is such that no number of human reviewers will realistically be able to stem the flow and root out the bad stuff.

The solution, Zuckerberg declared, is technology.

“There are tens of billions of messages, comments and pieces of content that get shared through our service every day,” Zuckerberg said. “Long-term, the only way that we will get to the quality level we all want is not by adding thousands or tens of thousands more people to help review – but by building AI and technical systems that can look at this stuff more proactively.”

That’s fine, but the technology isn’t there yet, and with 1.9 billion MAUs – about one-quarter of all humans – moderation is a monster.

As The Guardian’s recent report on Facebook’s leaked content moderation policies revealed, the platform is awash with questionable content and creating rules to police it all is an ethical quagmire.

For example, consider the gradations of violent speech. “Kick a person with red hair” or “Let’s beat up fat kids” is OK because the hateful language is not directed at a specific person, whereas Facebook’s policies would call out “Someone shoot Trump” or “#stab and become the fear of the Zionist” as credible threats of violence.

“We acknowledge that we have more to do,” said Elliot Schrage, VP of global communications, marketing and public policy at Facebook. “The product we deliver and the services we provide have become increasingly popular, and as a result of that, they get more and more use – and frankly speaking, we have not been able to keep pace as much as we thought we would be able to do.”

But Schrage doggedly maintained Facebook’s status as a neutral provider of technology.

Although Facebook does render decisions on which content is and isn’t appropriate – most people would call those editorial decisions – Facebook just calls it adhering to community standards and guidelines.

“Our focus is not to be in the business of being an editor in the sense of determining what people should see,” Schrage said. “It’s to help people share what they want to share in an environment that is safe, an environment that is secure, but that also lets people express their opinions.”
