Between October and December 2020, Facebook disabled 1.3 billion fake accounts. In a crackdown on deceptive behavior, the social media giant also removed 12 million pieces of Covid-19 related content that health experts had previously flagged as misinformation.
1.3 Billion Fake Accounts
The number of fake accounts on Facebook's platform hovers around 5% of the total number of monthly active users. Facebook's aim is to remove as many fake accounts as possible, while removing as few authentic accounts as possible in the process.
Alarmingly, the prevalence of fake accounts is still rising. At the same time, though, the number of fake accounts Facebook takes down is increasing as well. In the October-December period alone, Facebook disabled 1.3 billion fake accounts.
The vast majority of accounts Facebook eventually removes are blocked within minutes of their creation, before they can cause any harm. Facebook says the focus is on abusive accounts, as they're the ones most likely to cause harm, rather than on, say, pages people set up for their pets, which are technically fake accounts but usually innocent.
Users can report a fake account by:
- going to the account’s profile
- clicking on the three dots under the cover photo
- selecting “Find Support or Report Profile”, and
- following the on-screen instructions to file a report
95% of Users Don’t View Warning Screens
Of course, preventing fake accounts is just one way to stop abuse and halt the spread of misinformation. And unfortunately, even well-meaning people sometimes share misinformation.
“To address this challenge, we’ve built a global network of more than 80 independent fact-checkers, who review content in more than 60 languages. When they rate something as false, we reduce its distribution and add a warning label with more information for anyone who sees it.”
According to Facebook, when people encounter these warning screens, 95% of the time they don't click through to view the flagged content. In addition, Facebook notifies those who posted the misinformation and provides context when people share, for example, outdated or false Covid-19 related information.
12 Million Pieces of Misinformation
Facebook said that with the help of artificial intelligence (AI) systems, they also removed 12 million pieces of content containing misinformation about the coronavirus and vaccines. Moreover, over the past three years, they disabled over 100 networks of coordinated inauthentic behavior.
“Despite all of these efforts, there are some who believe that we have a financial interest in turning a blind eye to misinformation”, Guy Rosen, Vice President of Facebook, said. “The opposite is true. We have every motivation to keep misinformation off of our apps and we’ve taken many steps to do so at the expense of user growth and engagement.”
In his blog post, Guy Rosen points to Facebook's decision, in 2018, to change the news feed ranking system so that users see "more meaningful posts" from friends and family first. He said Facebook made the adjustment knowing that it would make people spend less time on Facebook, and it did.
AI to the Rescue
AI is a critical tool in helping to stop the spread of misinformation, as it significantly scales up the amount of work human experts can do, and it is also capable of taking action proactively.
The challenge lies in near-duplications and the countless variations that appear over time, as even the smallest content change can make the difference between truth and misinformation.
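To make the near-duplication problem concrete, here is a minimal, illustrative Python sketch of near-duplicate text detection using character-shingle Jaccard similarity. This is a simplified stand-in, not Facebook's actual method: production systems rely on learned embeddings and image/text matching at scale, but the sketch shows why a lightly edited copy of a flagged post can still be caught even though an exact-match filter would miss it. The sample strings are invented for illustration.

```python
# Illustrative near-duplicate detection via character-shingle Jaccard
# similarity. A toy stand-in for the embedding-based matching large
# platforms actually use; sample texts below are made up.

def shingles(text: str, n: int = 5) -> set:
    """Return the set of lowercase character n-grams in `text`."""
    t = " ".join(text.lower().split())  # normalize case and whitespace
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

flagged = "Vaccines contain microchips that track you."
variant = "VACCINES contain micro-chips that track you!!"   # small edits
unrelated = "Local bakery wins regional bread award."

# The lightly edited variant scores far higher than unrelated text,
# so a similarity threshold can flag it as a near-duplicate.
print(round(jaccard(flagged, variant), 2))
print(round(jaccard(flagged, unrelated), 2))
```

An exact-match filter would treat the variant as brand-new content; a similarity threshold (here, on the Jaccard score) catches it. The hard part in practice, as the paragraph above notes, is that some small edits change similarity scores very little while completely changing whether the claim is true.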
Facebook uses several technologies to detect misinformation and fake news – most of them developed in-house – including ObjectDNA, LASER, and deepfake detection models developed through the Deepfake Detection Challenge (DFDC).