Facebook recently announced that it is improving its content moderation tools with enhanced artificial intelligence and machine learning. According to the company, this will help it combat misinformation and hate speech more effectively.
In the announcement, Ryan Barness, a product manager at Facebook, revealed that the new system will prioritize content based on a number of parameters, such as severity, virality, and impact.
Until now, posts thought to violate the company's rules have been reviewed by human moderators in roughly chronological order.
Now, the company intends to ensure that the most important posts are reviewed by human moderators first. To achieve this, it will reportedly use a combination of machine learning algorithms to sort posts.
Until now, these potentially harmful posts have been reported by users or detected automatically by Facebook's AI based on predefined parameters.
The new system is intended to improve the moderation of posts that can cause real-world harm, such as false propaganda with serious implications. These posts will be escalated from the AI level to human moderators first, while the same system deals with spam and other less harmful posts later or automatically.
Posts receiving the highest level of priority will include those involving child exploitation, terrorism, and self-harm. For this, Facebook is leveraging the AI expertise it already has.
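The prioritization described above can be pictured as a weighted scoring queue. Facebook has not published its actual formula, category list, or weights, so the names and numbers below are purely illustrative assumptions; the sketch only shows the general idea of ranking flagged posts by severity, virality, and impact instead of by age:

```python
import heapq

# Hypothetical severity weights -- not Facebook's real values.
# Higher numbers push a category closer to the front of the queue.
SEVERITY = {"child_exploitation": 10, "terrorism": 10, "self_harm": 9,
            "misinformation": 6, "spam": 1}

def priority_score(category: str, virality: float, impact: float) -> float:
    """Combine the announced signals (severity, virality, impact)
    into a single ranking score. Unknown categories get a default weight."""
    return SEVERITY.get(category, 3) * 2 + virality + impact

def review_queue(flagged_posts):
    """Yield posts so that human moderators see the worst ones first."""
    # heapq is a min-heap, so negate the score to pop the highest first.
    # The index i breaks ties without comparing post dicts.
    heap = [(-priority_score(p["category"], p["virality"], p["impact"]), i, p)
            for i, p in enumerate(flagged_posts)]
    heapq.heapify(heap)
    while heap:
        _, _, post = heapq.heappop(heap)
        yield post

posts = [
    {"id": 1, "category": "spam", "virality": 0.2, "impact": 0.1},
    {"id": 2, "category": "terrorism", "virality": 5.0, "impact": 8.0},
    {"id": 3, "category": "misinformation", "virality": 9.0, "impact": 4.0},
]
order = [p["id"] for p in review_queue(posts)]
# The terrorism post outranks even highly viral misinformation,
# and spam falls to the back of the human review queue.
```

Under this kind of scheme, chronological order no longer matters: a new but severe post jumps ahead of an older, milder one, which matches the behavior the company describes.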
Chris Palow, a software engineer at Facebook, notes that the system may still have flaws. However, Facebook's eventual goal is to instill a level of human-like intelligence in its computer recognition models, something that has so far been missing from such AI systems. This would help the models make the contextual decisions that are crucial in post moderation and allow Facebook to cut down on problematic content.
“The system is about marrying AI and human reviewers to make less total mistakes,” said Palow.