Twitter not equipped to deal with violent threats: Expert
Chad Loder funneled violent threats against Congresswoman Omar into a "Moment."
Granted, he's a cybersecurity expert, but it took Chad Loder only a couple of hours to compile hundreds of death threats against freshman Congresswoman Ilhan Omar into a Twitter "Moment."
"Please help report this collection of threats to Ilhan Omar's safety. It is a felony under 18 U.S.C. § 871-875 to threaten US government officials. Any tweets which illegally call for violence, murder, or lynching of Ilhan Omar should be reported," Loder wrote on the curation of menacing posts.
He did it to prove a point: Twitter is ill-equipped to deal with the hate speech and threats of violence on its platform, Loder told ABC News on Wednesday.
Last month, Omar's remarks to the Council on American-Islamic Relations (CAIR) immediately following the mosque attacks in New Zealand were taken out of context by conservative outlets to make it seem as if she were dismissive of the Sept. 11 attacks. In the speech, she had said, “CAIR was founded after 9/11 because they recognized that some people did something, and all of us were starting to lose access to our civil liberties."
The coverage has resulted in an uptick in threats of violence aimed at Omar online.
"I just decided I wanted to highlight everything that's been going on and put it all into one place for people to see because I don't think people realize quite how bad it gets," Loder, CEO and founder of cybersecurity firm Habitu8, said.
Because recommendation algorithms are trained to show users content based on what they already engage with, those likely to be offended by such threats might never see them.
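The dynamic can be illustrated with a toy sketch. The ranking scheme, function names and data below are purely hypothetical assumptions for illustration, not Twitter's actual system:

```python
from collections import Counter

def rank_for_user(candidate_posts, engagement_history):
    """Toy engagement-based ranking: posts about topics the user already
    interacts with float to the top; everything else sinks.
    (Hypothetical sketch -- not Twitter's actual ranking system.)"""
    # Count how often each topic appears in the user's past engagements.
    topic_weights = Counter(topic for post in engagement_history
                            for topic in post["topics"])

    # Score each candidate by overlap with those engagement topics.
    def score(post):
        return sum(topic_weights.get(topic, 0) for topic in post["topics"])

    return sorted(candidate_posts, key=score, reverse=True)

# A user who never engages with political content sees the threatening
# post ranked last -- effectively invisible in their feed.
history = [{"topics": ["sports"]}, {"topics": ["sports", "music"]}]
candidates = [
    {"id": 1, "topics": ["sports"]},
    {"id": 2, "topics": ["politics", "threat"]},
]
print([p["id"] for p in rank_for_user(candidates, history)])  # [1, 2]
```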
Over the weekend, Loder combined search terms like "to the head," "string up," "string her up," "highest tree" and "Muslim pic" with "Ilhan" to find death threats against the freshman representative from Minnesota, who is Somali-American and Muslim, and collected them in one "Moment" so that users could see them and Twitter could take action.
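In practice, that amounts to systematically pairing threat phrases with the congresswoman's name in Twitter's search box. The sketch below shows one way such queries could be assembled; the phrase list comes from Loder's description, but the pairing logic and search-URL format are assumptions, not his actual tooling:

```python
from itertools import product
from urllib.parse import quote_plus

# Threat phrases Loder described pairing with the congresswoman's name.
THREAT_PHRASES = ["to the head", "string up", "string her up",
                  "highest tree", "Muslim pic"]
NAME_TERMS = ["Ilhan"]

def build_queries():
    """Pair each threat phrase with each name term as an exact-phrase
    search query (assumed pairing logic, not Loder's own script)."""
    for phrase, name in product(THREAT_PHRASES, NAME_TERMS):
        yield f'"{phrase}" "{name}"'

for query in build_queries():
    # Print a Twitter web-search URL for each query, sorted by latest.
    print(f"https://twitter.com/search?q={quote_plus(query)}&f=live")
```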
"Watching the discourse around how Twitter will admit, 'Hey we need to do more, we realize what a problem harassment is,' it gets really frustrating because when they say it's a hard problem they mostly mean expensive, not hard," Loder said.
"A lot of this stuff we need to do to police hate speech and death threats and online radicalization, is fairly simple to do if you have the will to spend the money on it. It's not really about AI (artificial intelligence) and it' s not really about machine learning. It's really about making decision on policies," Loder said. "For example, if someone threatens to lynch a black Muslim woman politician then why would you ban them and then let them back on the platform 24 hours later? Those sorts of policies don't seem to make sense."
On Tuesday, Twitter released an update to its online safety policies, stating that the company now has a system to flag hate speech or offensive comments before they are reported by victims, something it did not have as recently as this time last year. The company also said 100,000 accounts were suspended in the first three months of this year for creating new accounts after a suspension, a 45% increase from the same period last year.
Co-founder and CEO Jack Dorsey also gave a TED Talk in which he addressed abuse on the platform.
In an emailed statement to ABC News, a Twitter spokesperson wrote: "Death threats, incitement to violence, and hateful conduct are absolutely unacceptable on Twitter. Accounts spreading this type of material will be removed and coupled with our proactive engagement, we continue to encourage people to report this content to us. This behavior undermines freedom of expression and the values our service is based on."
Still, many of the threats Loder pointed out stayed up all weekend.
"Due to the nature of concrete threats that we're seeing, some of this content — which would have otherwise been immediately removed — was temporarily maintained to enable potential law enforcement coordination. Capitol Hill police are working on this issue," a source familiar with the situation told ABC News.
Rep. Omar retweeted one of Loder's tweets about the threats to her. Her office did not respond to multiple requests for comment from ABC News.
Loder said he understands that the humans who police content for Twitter and other platforms are "overwhelmed" by exposure to child pornography, actual violence and online radicalization.
But, he added, "the trust and safety work human beings have to do is very, very expensive. When Twitter says, 'we need more AI machine learning' they're saying, 'we want to invest as little as possible in these expensive human beings.' In the meantime there's a huge gap between what the machines can do and what humans can do and in that gap is where the abuse happens. It's very expensive to invest in that."