Facebook revealed Tuesday that it removed more than half a billion fake accounts and millions of pieces of violent or obscene content during the first three months of 2018, pledging more transparency while shielding its chief executive from new public questioning about the company's business practices.
To distinguish the many shades of offensive content, Facebook sorts it into six categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
While the removal of 583 million fake Facebook accounts is perhaps the report's biggest takeaway, the company also noted that its flagging and removal metrics had improved compared with previous quarters, thanks in part to better photo-detection technology that can identify both old and newly posted content.
Facebook has released detailed figures showing how the social media company is acting against people who produce inappropriate content or create fake accounts. For instance, the company estimates that of every 10,000 content views, only seven to nine were of content containing nudity.
Hate speech is checked by review teams rather than technology.
Meanwhile, Facebook said Monday it had suspended around 200 apps as part of its investigation into whether companies misused personal user data gathered from the social network.
In addition to the full report, Facebook published a more concise summary on its Newsroom blog.
Small doses of nudity and graphic violence still make their way onto Facebook, even as the company is getting better at detecting some objectionable content, according to a new report.
Facebook acknowledged it has work to do when it comes to properly removing hate speech. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important", said Guy Rosen, Facebook's vice president of product management.
"Artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue", Rosen said. The report provided a quarter-to-quarter comparison of the company's content-filtering efforts.
"(And) technology needs large amounts of training data to recognise meaningful patterns of behaviour, which we often lack in less widely used languages", Mr Rosen said.
But Facebook's progress in policing what users see isn't likely to temper fresh criticism from regulators in Europe over privacy protections for its billions of users worldwide.