Content moderation and its problems have long been a topic of discussion. Platforms rely on a variety of tools to moderate posted and uploaded content, and the need for those efforts only seems to grow over time. Recently, Meta announced that its independent oversight board will now be able to apply warning screens to content.
Depending on the board's decision, content may receive warning labels such as 'sensitive' or 'disturbing.' Previously, the board could only review appeals submitted by users of the platforms. Now, when reviewing a case, it can also decide on its own to add these warning screens, and this applies to any type of content, including photos and videos.
According to its quarterly transparency report, the oversight board received 347,000 appeals from Facebook and Instagram users. The board also noted that it has received almost 2 million appeals in the roughly two years since it began its work. Clearly, the demand for moderation is as high as ever.
As an independent entity, the board is made up of people from different backgrounds, including lawyers, academics, human rights experts, and more. It will be interesting to see how this project develops when it comes to balancing freedom of speech with content moderation.
In other news from Meta, the company announced an artificial intelligence system that can translate Hokkien, a language widely spoken in Taiwan, into English, even though the language has no standard written form.
What do you think about content moderation on social media? Please share your thoughts in the comments below.