Facebook released a new report on Monday in which the company describes a spike in takedowns of hateful posts. The company combines artificial intelligence with human moderators and fact-checkers to enforce its community standards. This edition of the regularly released report focuses on AI. The move towards AI is understandable, since human moderators risk developing mental health issues because of their exposure to harmful content. Facebook has also introduced an oversight board to support moderators.
The Community Standards Enforcement Report covers data from October 2019 to March 2020 and shows how well Facebook's policies were enforced during that time. Because it only covers those six months, it won't give us a full look at how the company is doing during the pandemic. Now that everybody is at home, there has been a surge in misinformation online, and Facebook has been coming up with solutions to deal with it. The next report will really show how the company is doing, but for now we can get a sense of the direction it is going in.
Of course, there was already a surge in misinformation during the 2016 presidential campaign in the US, so this time around Facebook was slightly better prepared. Guy Rosen, VP of Integrity, writes in a blog post that the company has had a couple of years to build and improve tools, teams, and technology to “prevent misinformation from spreading on our apps and keep people safe from harmful content.”
Facebook has improved its technology so that more violating content is found. The proactive detection technology has been expanded to more languages, and existing systems have been improved. This resulted in a twenty percent increase in the detection rate for hate speech over the last year, and ninety percent of it can now be removed before anyone reports it. The company also doubled the amount of drug-related content it removed in the last quarter of 2019.
“Over the last six months, we’ve started to use technology more to prioritize content for our teams to review based on factors like virality and severity among others,” Rosen wrote. The company wants to continue expanding the technology so that more posts can be deleted automatically. That way, moderators can focus on the content that actually needs to be seen by a human.
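Rosen doesn't say how the prioritization works, but ranking a review queue by virality and severity can be sketched roughly as follows. The field names, weights, and scoring formula below are invented for illustration; Facebook's actual system is not public.

```python
import heapq

# Hypothetical sketch of severity/virality-based review prioritization.
# Field names, weights, and the scoring formula are invented for
# illustration; Facebook's actual system is not public.

def priority_score(post):
    """Higher score = reviewed sooner."""
    return post["severity"] * 10 + post["shares_per_hour"]

def review_order(posts):
    """Yield posts from most to least urgent using a heap."""
    # heapq is a min-heap, so negate the score; the index breaks ties.
    heap = [(-priority_score(p), i, p) for i, p in enumerate(posts)]
    heapq.heapify(heap)
    while heap:
        _, _, post = heapq.heappop(heap)
        yield post

posts = [
    {"id": "a", "severity": 1, "shares_per_hour": 5},   # score 15
    {"id": "b", "severity": 9, "shares_per_hour": 2},   # score 92
    {"id": "c", "severity": 4, "shares_per_hour": 40},  # score 80
]
ordered = [p["id"] for p in review_order(posts)]  # ["b", "c", "a"]
```

The point of any scheme like this is the same as Rosen's: the most dangerous and fastest-spreading posts reach a human reviewer first.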
Relying on Artificial Intelligence
Under usual circumstances, Facebook uses third-party moderation firms to help it go through every piece of content that could violate the community standards. But many of these firms' employees are now working from home on personal computers, where they are not allowed to access sensitive Facebook data, for obvious security reasons. “When we temporarily sent our content reviewers home due to the COVID-19 pandemic, we increased our reliance on these automated systems and prioritized high-severity content for our teams to review in order to continue to keep our apps safe during this time,” Rosen wrote.
This plan to rely more heavily on AI was already announced at the start of the coronavirus crisis. Mark Zuckerberg said back then that the company was expecting to see more false positives: posts flagged as offensive when they're actually not. In other words, they thought the AI systems might take down too many posts.
AI Fighting Coronavirus Misinformation
Artificial intelligence is very important in the fight against misinformation, which usually spreads extremely quickly – sometimes faster than any person can keep up with. Once a moderator has flagged a piece of material as misinformation, AI can find and remove copies of it. The issue is that finding all the copies is not always easy, even for a computer. Images in particular are often slightly changed – even if it's just two pixels – so a computer can no longer recognize them as exact copies, and the item has to be flagged all over again. If you're interested, a group of research scientists and software engineers have written a separate blog post about the effort going into fighting the spread of misinformation with AI.
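Facebook hasn't published the details of its matching systems, but the two-pixel problem is easy to demonstrate. The toy sketch below, with images modeled as tiny grayscale grids, shows why an exact-copy check breaks while a simple perceptual “average hash” – one common near-duplicate technique, not necessarily the one Facebook uses – still matches:

```python
# Toy demonstration of the "two pixels changed" problem, with images
# modeled as 4x4 grayscale grids (values 0-255). The average hash is one
# common near-duplicate technique; it is not necessarily Facebook's.

def average_hash(pixels):
    """One bit per pixel: is the pixel brighter than the image's mean?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(int(p > mean) for p in flat)

original = [
    [200, 200,  10,  10],
    [200, 200,  10,  10],
    [ 10,  10, 200, 200],
    [ 10,  10, 200, 200],
]

# A copy with just two pixels nudged slightly...
altered = [row[:] for row in original]
altered[0][0] = 198
altered[3][3] = 202

exact_match = original == altered                              # False
hash_match = average_hash(original) == average_hash(altered)   # True
```

Because each bit only records whether a pixel is brighter than the image's average, tiny pixel tweaks leave the hash unchanged, while a byte-for-byte comparison fails immediately. Real systems are far more sophisticated, but the principle is the same.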
Facebook has also decided to temporarily ban ads and commercial listings for medical face masks and other products related to the pandemic. The AI systems are trying to find these and take them offline. In April the company “put warning labels on about 50 million pieces of content related to COVID-19 on Facebook, based on around 7,500 articles by our independent fact-checking partners,” the researchers and engineers wrote. It is clear that AI is helping where human effort falls short.
AI Against Hate Speech
Now that the AI seems to be working, although nowhere near perfectly yet, another issue arises. Facebook is seeing that a large amount of the misinformation and hate speech on the platform now shows up in images and videos, which makes violating content more difficult to find. The problem with hate speech is that it is often the combination of an image and the words on it that makes a post hateful; taken separately, neither would be regarded as such, and you need a human being to see the two together. But the company is working to improve its AI to detect as much of this content as possible.
Facebook states that “dealing with hate speech is one of the most complex and important components of this work.” AI now detects 88.8 percent of the hate speech content that is removed. According to the company, this rise of almost eight percentage points is largely due to the development of “a deeper semantic understanding of language, so our systems detect more subtle and complex meanings [and] [b]roadening how our tools understand content, so that our systems look at the image, text, comments, and other elements holistically”.
Facebook Moderators and Mental Health
Rosen said that the company has “started to use technology more to prioritize content for our teams to review based on factors like virality and severity among others”. Content deemed severe is content that puts people in danger.
Last Friday, Facebook agreed to pay damages to moderators who developed mental health issues while working for the company, and to provide more counseling for moderators while they are employed. The lawsuit covers people who have worked as moderators for Facebook in the last five years. Each of the 11,250 moderators involved will receive a minimum of $1,000, with additional compensation if they are diagnosed with PTSD or related conditions; lawyers believe about half of them will be eligible. A similar lawsuit is pending in Ireland.
Extreme Working Conditions
After the 2016 presidential election, Facebook hired many extra moderators because the platform had been criticized for not taking down harmful content. The company contracted several third-party moderation firms to help with this. Last year, The Verge spoke to moderators working in Phoenix, AZ and Tampa, FL about their working conditions. It turned out that moderators were working in awful environments, exposed daily to images of rape, murder, and suicide, with no counseling offered to employees.
These stories are extreme examples, but if there are so many of them, working as a moderator for Facebook is probably not without risk. In the settlement, Facebook has agreed to implement some changes: it will provide more counseling for moderators and offer them tools to adjust the content they're examining, such as the option to turn footage black and white or turn off the audio. Whether this will help remains to be seen, but hopefully the fact that these conditions have been brought out into the open will improve them. Moderators make sure that as few people as possible have to see this content, which is an important job. And it is clear that artificial intelligence is not yet finding all violating and harmful content.
Facebook’s Oversight Board
The big question in moderation is: Where do you draw the line? Some content is obviously violating and harmful. But sometimes it needs to be considered as free speech. And Facebook cannot be the one deciding what people are allowed to say. This would be too similar to a dictatorship. In a move towards “a new era of social media governance,” as the Guardian put it, Facebook has announced an oversight board.
Last week the company introduced the first twenty members of this new board, which will have the final say in certain content moderation decisions. The board is composed of “free expression advocates, journalists, a former prime minister, a Nobel laureate, and law professors”. This means that some of the power has been moved out of Zuckerberg's hands and into those of the board, which will review appeals of Facebook's content takedowns.
So the board seems to have been put in place to settle the controversies Facebook faces regarding its content. The board describes itself in the New York Times as “committed to freedom of expression within the framework of international norms of human rights”. It will deal with content that has journalistic, historic, or artistic value but does not comply with the platform's community standards, and it can choose to overrule decisions made by moderators and allow certain content on the platform anyway.
But experts aren’t sure this board is actually going to change anything. Siva Vaidhyanathan, a media studies professor at the University of Virginia, told the Guardian: “[i]f Facebook really wanted to take outside criticism seriously at any point in the past decade, it could have taken human rights activists seriously about problems in Myanmar; it could have taken journalists seriously about problems in the Philippines; it could have taken legal scholars seriously about the way it deals with harassment; and it could have taken social media scholars seriously about the ways that it undermines democracy in India and Brazil. But it didn’t. This is greenwashing.” He also pointed out that Facebook will still have the power to amplify certain content over other content. This is all done by algorithms, and the board does not have access to these.