How I Would Change Facebook’s Algorithm

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Kunal Gupta, CEO at Polar.

I believe Facebook’s course ahead will have to start with Mark Zuckerberg and Sheryl Sandberg showing some humility: accepting that they were wrong in their handling of hate speech on the platform and that they are ready to make changes. So far, they have seemed more concerned with being seen as on the right side of history and with defending their past decisions.

Hate speech exists in many places on the internet today – it always has and, unfortunately, it likely always will. The issue with hate speech on Facebook is that the platform is designed to make content spread in unimaginable ways. It does not discriminate among posts about cat videos, what my uncle ate for breakfast, my friend who had a baby or hate speech. The platform is designed to boost content efficiently, effectively and in real time.

In Facebook’s defense, it is difficult, if not unreasonable, to expect 100% of hate speech to be removed from the platform given the scale at which it operates. Many of Facebook’s recent public statements have focused on its efforts to improve its ability to remove hate speech from the platform. From Sheryl Sandberg: “We know what a big responsibility Facebook has to get better at finding and removing hateful content.”

I would like Facebook to focus instead on how it will stop the spread of hate speech, misinformation and divisive content, knowing that it cannot remove all of it.

Facebook’s algorithms need to be reined in, at least for the balance of 2020, to help the United States get through its presidential election while Facebook continues to design the future of its civil rights policies. Facebook simply cannot afford another disastrous election. This time, “effort” or “progress” won’t be sufficient. The world, rightfully, holds Facebook to a higher standard than the standard to which it holds itself. If Facebook is unsure how to stop the spread of specific harmful content, it needs to reduce the spread of all content, even if only temporarily.

Facebook’s custom audience targeting capabilities, which make the platform valuable to brands advertising products, also unfortunately make the platform valuable to bad actors who can create hate speech, misinformation and divisive content that is tailored for niche audiences.

Billions of people have been connected to the internet for decades, and thousands of elections have been held without major concern. The reason this is now an issue is Facebook’s custom audience targeting capabilities. Perhaps those capabilities should not exist, or should at least be suspended for the moment.

Facebook is the king of testing changes in smaller countries first to gauge their impact before global rollouts. Now would be the time to direct those algorithm experiments toward reducing the risks of hate speech, misinformation and divisive content.

Follow Kunal Gupta (@kunalfrompolar) and AdExchanger (@adexchanger) on Twitter.

 
