YouTube announced in March that it would lean more heavily on machine learning to tackle content that violated its policies, such as hate speech and misinformation. However, the company found that relying on AI led to over-censoring and incorrect takedowns of content on its platform. Now YouTube is bringing back more of its human moderators to do the job.
Data revealed that about 11 million videos were taken down from YouTube between April and June, roughly double the usual rate. This suggests that the AI systems' attempts to detect harmful content were far from accurate.
YouTube's decision to replace human moderators with machine learning was a response to the pandemic, but the approach clearly didn't prove very effective.
While social media platforms including Facebook and Twitter have repeatedly assured users that their algorithmic filters can be of great help in tackling misleading and hateful content, AI moderation experts disagree. They argue that, unlike humans, computers lack an understanding of cultural nuance and the capacity for fine judgement.
Earlier, comments on YouTube containing language critical of the Chinese Communist Party were being deleted; YouTube attributed this to an error in its enforcement systems.