Facebook updated its targeting policies on Wednesday in the wake of the racist and anti-Semitic ad targeting scandal, first reported by ProPublica.
COO Sheryl Sandberg clarified Facebook’s position in a post and laid out its three-pronged approach to running a tighter ship.
Following the exposé and attendant public backlash, Facebook disabled the self-reported targeting fields in its ads system until further notice, but now it’s making moves to accommodate the advertisers who rely on employer and educational information to target audiences.
First, Facebook said it will do more to ensure that content violating its community standards cannot be used to target ads, including anything derogatory related to race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender identity, disability or disease.
“Such targeting has always been in violation of our policies and we are taking more steps to enforce that now,” Sandberg wrote.
Next, Facebook will add more human review to its automated processes. After a manual review of existing targeting options – an examination of job titles, employer names and education fields – the company has reinstated 5,000 of the most innocuous and commonly used targeting terms, think “nurse,” “teacher,” “dentistry.”
Sandberg said Facebook is also working on a program to help users report potential abuses in its ads system.
Facebook’s mea culpa comes amid another brewing scandal: Russian trolls using fake accounts and pages to place incendiary political ad buys on Facebook.
Both the anti-Semitic ad categories issue and the Russian ads affair call attention to the fact that Facebook doesn’t always know what the heck is going on inside its own walled garden.
“We never intended or anticipated this functionality being used this way – and that is on us,” Sandberg wrote, referring to the ad categories controversy. “And we did not find it ourselves – and that is also on us.”