The news came as the company released its annual "Community Standards Enforcement Report" on Thursday, an accounting of its efforts to police problematic content, including violence, hate speech and child pornography.
"The amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time," Facebook's vice president of integrity Guy Rosen wrote in a blog post. "We disabled 1.2 billion accounts in Q4 2018 and 2.19 billion in Q1 2019."
"We estimated that 5% of monthly active accounts are fake," Rosen wrote. That's one in every 20 accounts.
The report follows several recent measures to fight problems that have continued to plague the platform, including fake accounts meant to influence elections in Europe, Africa and Asia. The company has also recently announced an increased effort to fight white supremacy, responding to criticism that intensified after the terror attack in New Zealand that killed 51 people and was livestreamed on Facebook.
Facebook also broke down the community standards violations into nine categories: adult nudity and sexual activity; bullying and harassment; child nudity and sexual exploitation of children; fake accounts; hate speech; regulated goods; spam; global terrorist propaganda; and violence and graphic content.
Violent content appeared roughly twice as frequently as sexually inappropriate content, according to the company's estimates.
"We estimated for every 10,000 times people viewed content on Facebook, 11 to 14 views contained content that violated our adult nudity and sexual activity policy," Rosen wrote. "We estimated for every 10,000 times people viewed content on Facebook, 25 views contained content that violated our violence and graphic content policy."
Facebook did not yet provide prevalence metrics for the remaining categories.