Facebook CEO Mark Zuckerberg drew heavy criticism after stating in 2018 that Facebook would not ban Holocaust deniers from its platform. He later explained that such content would not be removed or blocked, but that its distribution across the platform would be limited in order to curb the spread of misinformation.
Now, in 2020, Zuckerberg appears to have changed his stance: he has announced a revised policy under which Facebook will remove Holocaust-denial content from its platform entirely.
Zuckerberg explained that his thinking had evolved after observing a rise in anti-Semitic violence, as well as growing ignorance about the Holocaust, especially among young adults.
In a recent U.S. survey, about a quarter of adults aged 18 to 39 said the Holocaust was a myth, that it had been exaggerated, or that they had no knowledge of the event.
The policy change was overdue: Facebook has repeatedly been criticized as a platform that amplifies false, misleading, and violent content.
The German Marshall Fund Digital has reported that Facebook engagement with news outlets that regularly publish false and misleading content tripled between the third quarter of 2016 and the third quarter of 2020.
A report by Press Gazette found that, among social platforms, Facebook is the biggest source of false claims about the coronavirus.
Moreover, of the 69 million child sexual abuse images reported by U.S. tech companies last year, 94% originated on Facebook, according to the National Center for Missing and Exploited Children.
The U.K. government has even published a paper arguing that Facebook should be prevented from making end-to-end encryption the default across its messaging apps. Various authorities claim that the measure would facilitate illegal activity, such as the sharing of inappropriate and disturbing content.
These concerns carry significant weight given Facebook's massive global reach as a social media giant. Merely limiting the distribution of problematic content is clearly not a sufficient measure.
While Facebook's moderation teams face real limitations in blocking all violent content, the company's improving automated detection systems should allow it to tackle violations across the platform more effectively.