After New Zealand mosque attacks, Facebook changes its livestream policy
It's adopting a new "one strike" rule for violent or extremist content.
Facebook officials, who have admitted their systems failed to prevent the broadcast of the New Zealand mosque massacre on their platform, have announced a new policy for livestreaming.
"We will now apply a ‘one strike’ policy to [Facebook] Live, in connection with a broader range of offenses," Facebook's vice president of integrity, Guy Rosen, wrote in a post on the company's site late Tuesday. "From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense."
Previously, the company took down posts that violated its community standards. If a user kept posting violating content, Facebook temporarily blocked the account, removing the ability to broadcast live. Posters of the most extreme content, such as terror propaganda or child exploitation, would be banned altogether, Rosen wrote.
But now, violators are penalized starting with their first offense.
"For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time," Rosen said, adding that the company will also work to ban those users from placing ads in the coming weeks.
The move was praised by cybersecurity experts, who often criticize social media platforms for inaction on hate speech.
"It’s a positive step toward curbing abuse of live streaming, and Facebook has been taking real steps on curbing hate content over the last few months," Chad Loder, CEO and founder of cybersecurity firm Habitu8, told ABC News.
The video of the Christchurch mass shooting, in which 51 people at two mosques were killed, was viewed fewer than 200 times during the live broadcast, Facebook said shortly after the attacks.
"This particular video did not trigger our automatic detection systems," Rosen wrote in the days following the attacks. The video was then viewed about 4,000 times before being taken down. The video and images of the attack were disseminated across all major social media platforms, including Twitter and YouTube.
In the 24 hours after the attacks, Facebook blocked more than 1.2 million uploads of the massacre video before they could be viewed, according to Rosen. "Approximately 300,000 additional copies were removed after they were posted," Rosen wrote.
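Blocking copies at upload on that scale generally relies on media matching: fingerprinting each new upload and comparing it against fingerprints of known violating videos. The sketch below illustrates only that general idea; the exact-hash approach and helper names are placeholders, and production systems rely on perceptual hashes that tolerate re-encoding rather than a cryptographic hash.

```python
import hashlib

# Simplified illustration of upload matching against known violating videos.
# Real systems use perceptual hashes that survive re-encoding and edits;
# a cryptographic hash such as SHA-256 only catches byte-identical copies.
known_violating_hashes: set[str] = set()


def fingerprint(video_bytes: bytes) -> str:
    return hashlib.sha256(video_bytes).hexdigest()


def register_violating_video(video_bytes: bytes) -> None:
    known_violating_hashes.add(fingerprint(video_bytes))


def should_block_upload(video_bytes: bytes) -> bool:
    # Checked at upload time, before the copy can be viewed.
    return fingerprint(video_bytes) in known_violating_hashes


# Example: an exact re-upload is blocked, but a slightly edited copy is not.
original = b"...original video bytes..."
register_violating_video(original)
print(should_block_upload(original))            # True
print(should_block_upload(original + b"\x00"))  # False: edits defeat exact matching
```

The last line shows why exact matching falls short: even a trivial edit changes the fingerprint, which is the problem the research effort described next is meant to address.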
Part of the difficulty in detecting violent content is that users edit and re-upload videos, making copies harder for automated systems to spot. The company said it would devote $7.5 million to partnerships with the University of Maryland, Cornell University and the University of California, Berkeley, to research better ways to "detect manipulated media across images, video and audio" and to distinguish unwitting posters from those deliberately manipulating content.
"This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack," Rosen said.
Facebook officials announced the change as New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron met Wednesday on the sidelines of a G-7 gathering in Paris. The two leaders presided over the signing of the "Christchurch Call," a pledge calling on governments and the world's tech giants to take action to stop extremist content on their platforms.
The U.S. declined to sign the international accord.
"While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call," Trump Administration officials said in a statement. "The best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging."