Facebook just removed 200 white supremacist groups; what else is new?

Facebook logo | Reuters

Facebook has removed more than 200 white supremacist organizations from its platform for violating its community standards on terrorism and hate speech. In all, the company said it removed more than 22.8 million pieces of content from Facebook and around 4.4 million posts from Instagram in the second and third quarters of 2019, a six-month period.

In its fourth Community Standards Enforcement Report, released Wednesday, the social media giant said some of the white supremacist organizations were banned under its Dangerous Individuals and Organizations policy, the same policy used to ban terrorist organizations and organized hate groups.

Facebook had already banned outfits such as ISIS, al-Qaeda and their affiliates, and that ban has now been extended to some white supremacist groups, largely those linked to mass shooters who espouse such ideologies. This time, Instagram was included in the report for the first time, and data on suicide and self-injury content was added. Facebook said it proactively detected 98.5% of the content it removed for "terrorist organizations" beyond al-Qaeda and ISIS on Facebook, and 92.2% of similar posts on Instagram.

The metrics Facebook uses to measure enforcement include prevalence, content actioned, the proactive rate (the share of actioned content discovered before someone reported it), and appealed content, along with content restored after action was taken.
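
As an illustration of how the proactive rate cited throughout this report is calculated, here is a minimal sketch; the function, field names and example figures are assumptions for illustration, not Facebook's actual reporting pipeline.

```python
# Minimal sketch of the "proactive rate" metric: the share of actioned content
# that automated systems found before any user reported it.
# Illustrative only; not Facebook's actual data pipeline.

def proactive_rate(actioned_total: int, found_before_report: int) -> float:
    """Return the percentage of actioned content found before a user report."""
    if actioned_total == 0:
        return 0.0
    return 100.0 * found_before_report / actioned_total

# Hypothetical counts roughly mirroring the Q3 2019 suicide/self-injury
# figures cited below: ~2.5 million pieces actioned, 97.3% found proactively.
actioned = 2_500_000
found_proactively = 2_432_500
print(f"Proactive rate: {proactive_rate(actioned, found_proactively):.1f}%")  # 97.3%
```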

For Instagram, Facebook focused on data for four policy areas: child nudity and child sexual exploitation; regulated goods, specifically illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda.

What Else Is New in the Fourth Edition of the Report?

Data on suicide and self-injury:

Facebook removes content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts say may lead others to engage in similar behavior. "We place a sensitivity screen over content that doesn't violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery," said Facebook.

On Facebook, it removed about 2 million pieces of content in Q2 2019, of which 96.1% was detected proactively; in Q3, it removed 2.5 million pieces of content, of which 97.3% was detected proactively.

On Instagram, it removed about 835,000 pieces of content in Q2 2019, of which 77.8% was detected proactively. In Q3 2019, it removed about 845,000 pieces of content, of which 79.1% was detected proactively.

Data on child nudity and sexual exploitation of children:

On Facebook:

In Q3 2019, Facebook removed about 11.6 million pieces of content, up from Q1 2019 when it removed about 5.8 million. Over the last four quarters, it has proactively detected over 99% of the content that was removed for violating this policy.

On Instagram:

In Q2 2019, it removed about 512,000 pieces of content, of which 92.5% was detected proactively.
In Q3, it removed 754,000 pieces of content, of which 94.6% was detected proactively.

Data on illicit firearm and drug sales:

On Facebook:

In Q3 2019, it removed about 4.4 million pieces of drug sale content, of which 97.6% was detected proactively — an increase from Q1 2019 when it removed about 841,000 pieces of drug sale content, of which 84.4% was detected proactively.

Also in Q3 2019, Facebook removed about 2.3 million pieces of firearm sales content, of which 93.8% was detected proactively — an increase from Q1 2019 when it removed about 609,000 pieces of firearm sale content, of which 69.9% was detected proactively.

On Instagram:

In Q3 2019, it removed about 1.5 million pieces of drug sale content, of which 95.3% was detected proactively.
In Q3 2019, it removed about 58,600 pieces of firearm sales content, of which 91.3% was detected proactively.

Other recent measures:

On its latest measures to proactively find content that violates its policies, Facebook said an issue it found in its accounting processes did not affect how it enforced those policies or how it informed people about the actions taken. The company said it has refined how it identifies and counts a piece of content that contains both text and visuals.

For example, "if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content — not two separate actions for removing the photo and the post."

However, in July 2019, Facebook said its team found that the systems logging and counting these actions did not correctly log the actions taken. "This was largely due to needing to count multiple actions that take place within a few milliseconds and not miss, or overstate, any of the individual actions taken. We'll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate," said the social media giant.
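
To illustrate the counting principle Facebook describes, where a violating post and its photo are counted as one piece of content even though several enforcement actions may be logged milliseconds apart, here is a minimal sketch; the data model and names are hypothetical and not Facebook's actual logging system.

```python
# Illustrative sketch: count one piece of content per violating post, even when
# multiple enforcement actions (remove photo, remove post) are logged within a
# few milliseconds of each other. Hypothetical example, not Facebook's system.
from dataclasses import dataclass

@dataclass(frozen=True)
class EnforcementAction:
    content_id: str    # the post the action relates to
    action_type: str   # e.g. "remove_photo", "remove_post"
    timestamp_ms: int

def count_content_actioned(actions: list[EnforcementAction]) -> int:
    """Count unique pieces of content actioned, collapsing multiple
    actions on the same post into a single metric entry."""
    return len({a.content_id for a in actions})

log = [
    EnforcementAction("post_42", "remove_photo", 1_000),
    EnforcementAction("post_42", "remove_post", 1_003),  # 3 ms later, same post
    EnforcementAction("post_77", "remove_post", 2_500),
]
print(count_content_actioned(log))  # 2 pieces of content, not 3 actions
```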
