Despite the integration of Community Notes on X (formerly Twitter), which were expected to help reduce the spread of false information, plenty of harmful content still spreads easily across the platform.
This has been especially evident from a recent incident involving singer Taylor Swift: fake, controversial AI-generated images of her gained more than 27 million views and 260,000 likes on X before the account responsible was taken down. Unable to contain the spread of the images, X resorted to the blunt measure of blocking all searches for ‘Taylor Swift’ in the app.
Following outrage from Swift’s fans and Swift’s own decision to take legal action over the images, X has announced the development of a content moderation center in Texas. The center, staffed by 100 content moderators, will focus primarily on child sexual abuse material, alongside other types of harmful content.
In conclusion, X has not been particularly successful in combating false information and bots, the two biggest concerns on the platform. Perhaps finally employing human moderators, something X did not deem necessary in the past, could make things better.