Until now, machine learning systems at Facebook have been used successfully to moderate controversial content, but the tasks they performed were delegated to them by human moderators. Now the roles will be reversed: Facebook wants moderators to rely on the recommendations served up by artificial intelligence.
Mark Zuckerberg and his team believe that the technologies Facebook has developed are now advanced enough to coordinate the moderators' work. Until now, posts reported as violating the rules have been reviewed by human moderators in the order in which they were reported.
Now they will first be picked up and analyzed by machine learning systems, which will decide which posts human moderators should deal with first. Facebook boasts that this is a historic change. It is supposed to significantly speed up the verification of dangerous content, such as fake news, and thus limit the spread of material that can wreak havoc on society.
The topic is controversial: in the first quarter of 2020 alone, moderators had to verify as many as 9.6 million posts, a huge increase compared to the corresponding period of the previous year, which saw 5.7 million submissions. Facebook's leadership does not hide that it wants to use artificial intelligence to relieve the moderators, who, because of constant exposure to so-called fake news, are increasingly struggling with serious mental health problems. This year, 11,000 of them sued Facebook over dire working conditions that had a huge negative impact on their mental health. After many court battles, they finally prevailed against the authorities of the world's largest social networking site, who had to pay them a total of 52 million dollars in compensation.