Facebook admits its AI failed to flag the New Zealand terror attack livestream

The company also said a time delay would not help to stop such broadcasts.

March 21, 2019, 7:17 PM

Facebook has admitted that the company's artificial intelligence failed to block the livestream video in which the alleged shooter filmed himself opening fire on praying Muslims at two mosques in New Zealand last week.

The company also blamed users for not flagging the video more quickly, as the social media giant continued to fend off criticism for allowing the worst terror attack in New Zealand's history to be broadcast live and in full on its platform.

"People are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack, and we wanted to provide additional information from our review into how our products were used and how we can improve going forward," Facebook's vice president of product management Guy Rosen wrote in a blog post Wednesday night.

PHOTO: Ambulance staff take a man from outside a mosque in central Christchurch, New Zealand, March 15, 2019. A witness says many people have been killed in a mass shooting at a mosque in the New Zealand city of Christchurch.
Mark Baker/AP

Facebook was not the only platform on which the video was uploaded or shared. Users blanketed other platforms including YouTube, Twitter, Reddit, 4chan and 8chan with the video, which made it harder for the companies to react as the content ricocheted throughout a porous digital ecosystem. Rosen's post provides the clearest timeline to date of how the shooter's video went viral.

After the shooter streamed the attack, "individuals around the world then re-shared copies they got through many different apps and services, for example filming the broadcasts on TV, capturing videos from websites, filming computer screens with their phones, or just re-sharing a clip they received," Rosen wrote. "In total, we found and blocked over 800 visually-distinct variants of the video that were circulating."

However, the video originated on Facebook, and the company's AI did not flag the livestream of the attacks, which claimed the lives of at least 50 people. That original footage was viewed almost 200 times while it was live, Facebook said.

"This particular video did not trigger our automatic detection systems," Rosen wrote. The video was then viewed about 4,000 times before being taken down, he added.

Over the next 24 hours, Facebook removed at least 1.2 million copies of the video at upload, before they could be viewed, according to Rosen. "Approximately 300,000 additional copies were removed after they were posted," Rosen wrote.
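Rosen did not spell out how those copies and variants are matched. As a rough illustration of the general technique, platforms typically fingerprint known-bad frames with a perceptual hash and compare uploads against a blocklist; the Python sketch below, using the Pillow imaging library and a synthetic test frame, shows the idea, not Facebook's actual system.

    # Minimal sketch of perceptual "average hashing," a common way to match
    # re-uploaded copies of a known video frame. Illustrative only: Facebook
    # has not published the details of its matching system.
    from PIL import Image  # pip install Pillow

    def average_hash(img, size=8):
        """64-bit fingerprint: shrink to 8x8 grayscale, threshold at the mean."""
        pixels = list(img.convert("L").resize((size, size)).getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        """Number of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    # A re-encoded copy barely moves the hash, so it is caught at upload;
    # cropping, filters or filming a screen shift it further, which is why
    # each "visually-distinct variant" needs its own entry in a blocklist.
    frame = Image.linear_gradient("L")   # synthetic stand-in for a video frame
    variant = frame.rotate(2)            # slightly altered copy of that frame
    print(hamming(average_hash(frame), average_hash(variant)) <= 10)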

AI technology requires "training data," meaning "thousands of examples of content," to learn how to detect problematic speech, text, images or videos, Rosen wrote. "This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems."
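To illustrate the supervised-learning pattern Rosen describes, here is a minimal Python sketch using scikit-learn; the two-example training set is invented, and this shows the generic approach, not Facebook's classifiers.

    # Minimal sketch of supervised learning: a classifier only learns
    # categories for which it has many labeled examples. The toy data and
    # labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["join our violent cause today",      # labeled violating example
             "photos from our beach holiday"]     # labeled benign example
    labels = [1, 0]                               # 1 = violating, 0 = benign

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

    # With only a handful of examples, scores on new content are unreliable:
    # rare or novel categories produce both misses and false positives.
    score = model.predict_proba(vectorizer.transform(["new post"]))[0][1]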

Social media platforms, including Facebook, have been effective in recent years at curbing terrorist content, most notably ISIS content. That success has led users to question why it was harder to crack down on the Christchurch shooting video.

Facebook's former chief security officer, Alex Stamos, told ABC News after last week's attacks that ISIS communications were disrupted in part because the group relied on Telegram, an instant messaging app.

"The ISIS problem was partially cracked because the [tech] companies infiltrated all their Telegram channels. So you could grab a video and block it before the first upload attempt. No equivalent chokepoint here," Stamos said, referring to the New Zealand attack.

There are additional problems with relying on automated responses to violent content: AI can produce false positives, and it can also wipe out the work of activists who document human rights abuses, Sam Gregory, program director of Witness.org, told ABC News in an interview from Facebook headquarters. Witness.org works with advocates and dissidents to document human rights abuses, and is advising Facebook on AI and content moderation.

"To train AI effectively you do need significant training sets, and you're still going to get a significant number of false positives," Gregory said. "The more nuanced the harder it gets to do that."

Facebook has about 15,000 people who review content, according to the company.

The company also appeared to shift blame to Facebook users for not reporting the live broadcast of the attack sooner.

"The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended," Rosen wrote.

It's unclear if the initial report of the shooting came from New Zealand police. The police declined to comment on how they reported the video to Facebook, but did say they contacted Facebook shortly after the attack to ask that the video be taken down.

"We contacted Facebook at 2:29 p.m. Friday March 15 about the livestream," a spokeswoman for the New Zealand Police told ABC News. "The first call we received was at 1:41 p.m. on Friday, and the first armed police unit arrived at the scene at 1:47 p.m., six minutes later."

After the attacks, Rosen wrote, the company used "audio-based technology which we had been building" to try to identify the video and take it down.
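Facebook did not describe that audio technology in detail. As a generic illustration of why audio can work where image matching fails, audio fingerprinting commonly reduces short windows of sound to coarse spectral signatures that survive re-recording; the NumPy sketch below is a toy version of that idea, not the company's method.

    # Minimal sketch of coarse audio fingerprinting with NumPy: each short
    # window of sound is reduced to the index of its loudest frequency band.
    # Illustrative only; Facebook has not described its actual method.
    import numpy as np

    def fingerprint(samples, window=4096, bands=32):
        """One dominant-band index per window of audio samples."""
        fp = []
        for start in range(0, len(samples) - window + 1, window):
            spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
            # Coarse bands keep the signature stable when the picture changes
            # (screen-filmed copies, crops, overlays) but the sound does not.
            band_energy = [band.sum() for band in np.array_split(spectrum, bands)]
            fp.append(int(np.argmax(band_energy)))
        return fp

    def similarity(fp_a, fp_b):
        """Fraction of windows whose dominant band agrees."""
        n = min(len(fp_a), len(fp_b))
        return sum(a == b for a, b in zip(fp_a, fp_b)) / max(n, 1)

    # Synthetic demo: a 440 Hz tone and a noisy re-recording still match.
    t = np.arange(16000 * 2) / 16000.0
    original = np.sin(2 * np.pi * 440 * t)
    rerecorded = original + 0.1 * np.random.randn(len(t))
    print(similarity(fingerprint(original), fingerprint(rerecorded)) > 0.9)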

He added that the company tries to combat both terrorist propaganda and hate speech on its platform.

One practice Facebook will not implement is a time delay similar to what broadcast television networks use, Rosen said.

"There are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos. More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground," he wrote.
