Facebook revamps efforts to combat hate, but experts say 'questions remain' on impact

Facebook revealed new updates to fight hate speech and extremism online.

September 17, 2019, 4:39 PM

Facebook on Tuesday revealed changes to policies for combating hate and extremism on its namesake platform and on Instagram, including updating its definition of terrorist organizations and revamping detection efforts.

Some online privacy experts, however, said questions remain as to whether these steps will have a major impact in stopping the spread of hate speech.

The changes come a day before executives from Facebook, Twitter and Google are set to testify before lawmakers about "digital responsibility" related to mass violence and extremism spread online.

People visit a memorial site for victims of Friday's shooting, in front of Christchurch Botanic Gardens in Christchurch, New Zealand, March 19, 2019.
Jorge Silva/Reuters

Facebook's updated guidelines reference the mass shooting at mosques in Christchurch, New Zealand, that killed 51 people earlier this year and was livestreamed on Facebook, leading to calls for dramatic policy changes.

Facebook said the automatic detection systems in place missed spotting that video because they "did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology."

As a result, Facebook said it is working with law enforcement agencies in the U.S. and the U.K. to obtain footage from firearms training programs to train its systems to spot these sorts of attacks.

Facebook app's splash screen is seen on a mobile phone, May 10, 2012, in Washington.
Getty Images, FILE

Facebook also said the new guidelines include an updated definition of terrorist organizations. The change, according to the guidelines, "more clearly delineates that attempts at violence, particularly when directed toward civilians with the intent to coerce and intimidate, also qualify."

The company also said it's adding more staff to combat hate and extremism online and giving users the resources to "leave behind hate groups."

When people search for hate terms in the U.S., they're directed to the group Life After Hate; abroad, users will be directed to other organizations and partners "where local experts are working to disengage vulnerable audiences from hate-based organizations."

'Serious questions remain'

Dipayan Ghosh, a former privacy and public policy adviser at Facebook, told ABC News in a statement that "serious questions remain over whether these steps will have any impact on blunting the spread of online hate speech."

"There is very little accountability over what Facebook is doing to contain the spread of bad content. How much are they spending on developing machine learning classifiers? How does this compare to what the company spends to develop new innovations in advertising?" added Ghosh, co-director of the digital platforms and democracy project and Shorenstein Fellow at the Harvard Kennedy School.

Ghosh said it's in Facebook's "commercial interest" to show progress in addressing terrorism and hate on its platforms, saying, "These steps will go some way in staving off governmental regulation."

Facebook must address the root of the problem, according to Ghosh.

"When you have a business model that promotes radical behavior, regulation must be pursued to treat the business model itself," he added. "The deepest concern is that it is Facebook's internal business model that promotes the hate speech problem in the first place -- and that business model will not change anytime soon absent the development of meaningful privacy, transparency and competition regulations."
