Facebook Used AI To Identify 95% Of The Hate Speech It Removed In Q3


Facebook rejected millions of ad submissions in the months leading up to Election Day from advertisers that had not completed its required authorization process.

In a call with reporters on Thursday, Facebook outlined its recent efforts to protect the integrity of the 2020 election and shared a status update on its ongoing battle to stop the spread of hate speech and false information related to COVID-19.

Both Facebook and Twitter have been criticized for enabling the dissemination of hate speech on their platforms while simultaneously being accused of censorship by conservatives.

Bring in the robots

According to Facebook’s latest community standards report for Q3 – which was released two days after Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey appeared before the Senate Judiciary Committee to get grilled about their content moderation practices – Facebook is increasingly relying on automated systems to deal with hate speech.

Between July and September, Facebook – which first began reporting hate speech metrics in 2017 – used its automated systems to detect 95% of the hate speech it ultimately removed from Facebook and Instagram before users reported it, up from 81% a year earlier.

“This means that all of the content that we removed for violating our hate speech policies, 95% of it was first detected by our systems,” said Guy Rosen, Facebook’s VP of integrity.
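To make that metric concrete, here's a minimal sketch of the arithmetic Rosen describes, in Python with hypothetical figures (the function name and numbers are illustrative, not Facebook's actual data or code):

```python
# Sketch of the "proactive rate" Rosen defines above: of everything removed
# for hate speech, what share did automated systems flag before any user
# report? Numbers are hypothetical, for illustration only.

def proactive_rate(total_removed: int, flagged_by_ai_first: int) -> float:
    """Share of removed violations first detected by automated systems."""
    return flagged_by_ai_first / total_removed if total_removed else 0.0

# e.g., 950,000 of 1,000,000 removals flagged by AI before a user report:
print(f"{proactive_rate(1_000_000, 950_000):.0%}")  # -> 95%
```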

But the real question, he acknowledged, is what the company has missed.

“That’s where prevalence comes in, and it’s why we consider it to be the most important metric,” he said. “We periodically sample content that’s viewed on Facebook to calculate what percent violates our policies, and we focus on how much content is seen, not how much sheer content is out there that violates our rules. That’s important because a small amount of content can go viral and get a lot of distribution in a very short span of time.”

In Q3 2020, hate speech prevalence was between 0.10% and 0.11%, meaning 10 to 11 of every 10,000 content views were views of hate speech.
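Rosen's description of prevalence amounts to a view-weighted sample estimate: sample views of content, label each sampled view as violating or not, and report the violating share. Here's a minimal sketch under those assumptions; the data and sampling design are entirely hypothetical, not Facebook's published methodology:

```python
import random

# Sketch of a view-weighted prevalence estimate, per Rosen's description:
# sample *views* (not distinct posts), label each sampled view as violating
# or not, and report the violating share. All numbers are hypothetical.

def estimate_prevalence(view_log: list[bool], sample_size: int) -> float:
    """view_log holds one boolean per content view (True = the viewed
    content violates policy). Returns the violating share of a random
    sample of views."""
    sample = random.sample(view_log, sample_size)
    return sum(sample) / sample_size

# Toy data: 1,000,000 views, ~0.105% of which land on violating content.
views = [random.random() < 0.00105 for _ in range(1_000_000)]
print(f"{estimate_prevalence(views, 50_000):.2%}")  # typically ~0.10%-0.11%
```

Sampling views rather than distinct posts is what lets a single viral post weigh heavily in the estimate, which is exactly the point Rosen makes about a small amount of content getting wide distribution.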

“Our metrics around enforcement, such as how much content we act on and how proactively we find it, are indications of the progress we have made on catching harmful content,” Rosen said.

Dealing with the election

From March 1 through Election Day, Facebook removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies. In that same period, it displayed warnings on more than 180 million pieces of content viewed by people in the United States that third-party fact-checkers had debunked.

“We also rejected ad submissions before they could be run about 3.3 million times for targeting the US with ads about social issues, elections and politics without having completed the required authorization process,” Rosen said. “All of these efforts were part of our goals of, first, protecting the integrity of the election by fighting foreign interference, disinformation and voter suppression and second, helping more Americans register to vote.”

In June, Zuckerberg announced that the company was implementing higher standards for hateful content that appeared in ads. Specifically, Facebook expanded its ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others.

At the time, Facebook also expanded its policies to crack down on ads suggesting that immigrants, migrants, refugees and asylum seekers are inferior and ads that express contempt, dismissal or disgust directed at these groups.

In September, Facebook said it would block new political ads in the week leading up to the election in an effort to curb misinformation, and it formally prohibited political advertising once the polls closed on Nov. 3. Just last week, Facebook announced it was extending that post-election ad ban, which is expected to remain in place for another month.

Keeping up with COVID

The election was a hotbed for misinformation sharing – but don’t forget about the pandemic.

With COVID cases surging, Rosen said Facebook removed more than 12 million pieces of content on its core platform and Instagram between March and October for containing misinformation that “may have led to imminent physical harm, such as content relating to fake preventative measures or exaggerated cures.”

Facebook also displayed warnings on about 167 million pieces of COVID-related content, pointing to articles from its fact-checking partners that debunked false claims about the virus.
