Are Google, Twitter and Facebook doing enough to protect the 2020 election in the age of 'information disorder'?

With misinformation everywhere online, are we any better off than in 2016?

In less than a year, Americans will head to the polls in the 2020 presidential election. Last time around, the FBI says, Russian trolls, in coordination with the Russian government, engaged in a sophisticated attempt to influence the American electorate to vote for Donald Trump.

While the effectiveness of the campaign is unknown, three years later, there's no indication that Russia has relented. Former FBI director and special counsel Robert Mueller, during a hearing before Congress last July, testified to the stark reality: “They’re doing it as we sit here,” he said. “And they expect to do it during the next campaign.”

And other countries were following suit, he added.

While the inaction of the U.S. government in 2016 has drawn considerable scrutiny, the interference was also facilitated by the social media platforms' inability, and unwillingness, to confront the issue in real time.

So what are some of the biggest social media platforms doing to help protect the U.S. in the 2020 election?

Social media platforms -- one of the main vehicles of the Russian influence operation in 2016 -- have adjusted, albeit more slowly than some critics would have liked. Facebook, Google and Twitter have all made changes that make it more difficult for bad actors to spread misinformation, but experts say they could be doing more.

'Lack of imagination'

Mark Zuckerberg has claimed that Facebook was "caught on its heels" in 2016. If anything, that appears to be a gross understatement. Just days after the November 2016 election, Zuckerberg famously said: “Personally I think the idea that fake news on Facebook, which is a very small amount of content, influenced the election in any way, I think is a pretty crazy idea."

But the platform wasn't alone: a recent Senate Intelligence Committee report found that the FBI outsourced all the work of tracking misinformation and didn't even alert the platforms when misinformation was found.

“What went wrong was a lack of imagination towards understanding the various threats that their platforms would be taking,” Ben Decker, lead analyst with the Global Disinformation Index, a UK non-profit organization carrying out research on information disorder, told ABC News.

Joan Donovan, director of the Technology and Social Change Research Project at the Shorenstein Center at Harvard’s Kennedy School, noted the power of social media to change the political environment in 2016, whether by pushing a single piece of misinformation about an issue like immigration or by pushing a false narrative about a political candidate. Social media platforms also allowed the proliferation of several conspiracy theories that thrived online but did not get mainstream coverage. “It just goes to erode the trust and self-assurance,” she told ABC News. “This is the main purpose."

While the term "fake news" -- initially used to describe intentionally false stories -- gained currency during the 2016 election cycle, it has somewhat fallen out of use among experts because it has been co-opted by President Trump, some far-right and conservative media outlets, and strongman leaders around the globe to dismiss stories they don’t like from mainstream media organizations.

In the coming year, you’re more likely to hear about "information disorder" -- an umbrella term encompassing disinformation (intentionally false), misinformation (false but not intentionally misleading) and malinformation (true content removed from its context with the intent to cause harm). One example of malinformation would be how Russians hacked the Democratic National Committee emails in 2016 and leaked certain material. The content was genuine but it was spread with the intent to cause harm.

What the social networks have done...and failed to do

It appears that protections are better now than they were a couple of years ago. Most platforms do "red teaming," a term borrowed from the national security space, in which a team of workers tries to find ways to disrupt the system and internal security teams try to figure out how to defend against them. In 2016, by contrast, politicians, the media and the platforms didn’t have a good sense of what was going on, which made things a lot easier for the perpetrators.

In addition, nearly all the platforms have vastly bolstered their security teams and improved their content moderation filters and algorithms.

Facebook has automated systems that remove content every day and it has made political ads more transparent. To post a political ad -- which Facebook defines as dealing with social issues, elections or politics -- the advertiser needs to provide identifying documentation and declare who is paying for the spot.

Facebook also has teams that work with law enforcement to expose bad actors on the platform. “We have made changes to our platform that make the techniques we saw in 2016 just much, much harder to do,” Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, told ABC News. “We share information regularly with the FBI. They share information back with industry partners." Over the last year, he said, the company has taken down more than 50 networks for coordinated inauthentic behavior.

The platforms also collaborate more now than they did in 2016. Facebook has alerted Twitter to threats, allowing Twitter to expose malicious networks, and vice versa. Twitter, unlike Facebook, has also made huge swaths of data available to researchers and law enforcement. Facebook has refused to share this kind of data, citing privacy concerns.

“Social media platforms have provided really valuable data that's allowed researchers to understand much more of the epidemiology around this big picture: state sponsored disinformation,” said Decker.

Carlos Monje, Director of Public Policy at Twitter, and Yoel Roth, Head of Site Integrity, both emphasized the importance of the data they make available to researchers such as Donovan and Decker. That access has allowed researchers to identify networks and the methods they used, which in turn helps Twitter and others prevent these bad actors, and others like them, from operating in the future. “Most of the data was available publicly via the Twitter API at some stage, but we’ve had multiple researchers say that it was only when they had access to the raw data that they were able to connect the dots on how the disinformation networks worked,” said Roth.

Rather than create a specific team to counter information disorder, Roth said that it has become a part of virtually every Twitter employee’s job duties. Roth also emphasized that much of the detail on specific bad actors, such as their country of origin, only comes to light through collaboration with other platforms and law enforcement agencies around the globe.

YouTube, which is owned by Google, hasn't gotten the same attention as Twitter and Facebook, but it is one of the most popular social networks, and its autoplay feature has made it particularly attractive to radical groups. “There was heightened awareness in 2016, especially around the rise of white supremacist movements," said Donovan.

Google also now tries to counteract disinformation in its search engine results by featuring a pop-up knowledge panel drawn from Wikipedia. Google also has a team that identifies inauthentic accounts, much like Gleicher’s team at Facebook. Earlier this year, for example, 210 YouTube accounts were disabled after they were found to be part of a coordinated network spreading misinformation related to the Hong Kong protests.

Google declined to make anyone available to speak about its efforts, instead offering a statement saying it was committed to addressing the challenge.

'Perception hacking' and other new threats

Despite all this innovation, information disorder appears to be a massive and growing problem. According to a recent report from Oxford University researchers, the number of countries with political disinformation campaigns has more than doubled to 70 in the last two years. “The more sophisticated actors will continually find ways around any automated system we build," Gleicher admitted.

Changes to the platforms have forced the bad actors to adapt. “When you work really hard to conceal who you are, not get noticed, it turns out that makes it a little difficult to run an influence operation,” says Gleicher. “Because the point of an influence operation is to get noticed.”

Another result is that malicious groups are using more real people to help spread misinformation. Instead of using bots to amplify a message, bad actors feed misinformation to real people they have identified as likely to share right-wing or left-wing content, depending on the message being spread. “Some of the telltale signs of coaxing are now being changed,” said Donovan. One of her greatest concerns is an attempt at voter suppression, whether it's disinformation about the mechanics of voting on Election Day or simply attempts to create apathy among voters.

Gleicher calls this trend "perception hacking," where smaller, less-effective networks plant the idea they are larger than they really are, so that the public loses trust in institutions.

Twitter’s Roth says that another new threat his team is tackling is bad actors targeting journalists and activists in an effort to amplify their message and make people believe the disinformation networks are larger and more powerful than they actually are. By getting news organizations to publish stories about small disinformation campaigns, the actors can make the public think the campaigns are more sophisticated and widespread than they really are, thereby reducing public trust in the news.

The other threat he highlighted was the use of manipulated video. While he is somewhat concerned about "deepfakes" (manipulated video in which the speaker is made to look as if he or she is saying words they never uttered), he is also worried about "shallow fakes," such as the video of House Speaker Nancy Pelosi that was slowed down to make it appear as if her speech were impaired. Just this week, Twitter announced new policy rules that would clearly label manipulated media.

And finally, there is political advertising.

According to Facebook, the Russians spent just $100,000 on advertising in 2016, but the restrictions the platforms have since imposed have made advertising more important. “One of the things that advertising allows you to do, even on a small scale, is to just break out of your own echo chambers,” said Donovan.

This raises the specter of fact-checking the ads, which Zuckerberg has refused to do, saying he does not want Facebook to be the arbiter of truth and that the public should be free to make up its own mind about what candidates are saying.

Donovan thinks this is beside the point, arguing that Facebook doesn’t have the capability to fact-check these ads even if it wanted to. "Facebook cannot moderate political ads. It's not that they don't want to,” she says. “There is no system in place that would mark a political ad ahead of time and there's no review process in place where they would make a determination. So they have no prescreening.”

Last month, Facebook employees wrote an open letter to Mark Zuckerberg asking him to take a number of dramatic steps, including restricting targeted ads. Facebook’s unique ability to deliver ads to a precise target audience, based on the vast cache of data it holds on each user, is what makes the company such a cash cow in the first place, so the request seems unlikely to be granted any time soon.

Of course, the problem isn't limited to these three giant Internet companies. Ben Decker noted that more fringe sites like Reddit, 4chan or Discord "have a significant impact on conversations in YouTube, Twitter and sometimes even Facebook."

Decker is also concerned about closed networks like Snapchat and new platforms such as TikTok which are more popular with young voters. "I think it's really concerning because there is a lot that we don't know," said Decker. "The more mass migration to new platforms that happens, particularly with younger audiences, it means that the rest of us have less of an insight into exactly what's happening and how it's happening," he added.

So what more can be done?

One thing all the researchers and platforms can agree on is that more collaboration between these tech firms and with law enforcement agencies around the world is essential.

“The problem actually expands beyond just information," says Ben Decker, “but also includes things like violent extremism and coordinated harassment campaigns. So by having a much more macro view of the Internet, rather than their specific platform, it could actually help them clean up a number of different problems.”

ABC News partners with Facebook on a news show and is a breaking news provider for Facebook Watch.