Facebook Bans Misleading Deepfakes and Restricts Manipulated Media Content

From now on, Facebook will remove misleading videos that are heavily manipulated by artificial intelligence, also called deepfakes, from its social media platform. While true deepfakes are still relatively rare, they are becoming more common and pose a significant threat.

Different Types of “Deepfakes”

There are various types of deepfakes.

  • So-called “true deepfakes” are AI-manipulated videos that are likely to mislead viewers.
  • “Shallow fakes” can be just as misleading, but they are produced with conventional editing tools and techniques.
  • Finally, there are parodies and satirical manipulations. These videos may look more or less real but are clearly identifiable as “fake”: even someone unfamiliar with the person or subject would recognize that the message is not intended to mislead.

Facebook’s Ban, a Baby Step

Facebook’s ban on deepfakes is, at the moment, just a baby step. In its new policy, Facebook uses the following two criteria to determine whether a video should be removed from Facebook and Instagram.

Firstly, the video needs to have been edited or synthesized in “ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”

Secondly, the policy says it needs to be a product of “artificial intelligence or machine learning that merges, replaces or superimposes content on a video, making it appear to be authentic.”
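Because both criteria must be met before a video comes down, the policy can be read as a simple logical AND. The sketch below is purely illustrative: the function name and parameters are my own shorthand, not anything from Facebook's actual review pipeline, which relies on human reviewers and machine learning rather than two neat booleans.

```python
# Illustrative sketch only: encodes the policy's two published removal
# criteria as a boolean AND. Names are hypothetical, not Facebook's API.

def should_remove(misleads_average_person: bool, ai_synthesized: bool) -> bool:
    """Return True only if BOTH criteria from the policy hold:
    1. edited or synthesized in ways not apparent to an average person,
       likely misleading viewers about what the subject said; and
    2. produced by AI/ML that merges, replaces, or superimposes content,
       making the video appear authentic."""
    return misleads_average_person and ai_synthesized
```

Read this way, a conventionally edited "shallow fake" (misleading, but not AI-synthesized) fails the second criterion and stays up, which is exactly the gap the next section describes.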

Traditionally Altered Content Still Available

The above-mentioned criteria mean that traditionally altered videos will continue to be available on Facebook.

Take, for example, the inauthentic video clip of Nancy Pelosi apparently “slurring her words”, which has been circulating in different versions since mid-2019. The clip was slowed down to make it seem as if Pelosi couldn’t speak clearly. It was promoted by Trump supporters and retweeted by Trump himself as well as by his personal lawyer Giuliani.

Or, more recently, the doctored video of the British politician Keir Starmer, misleadingly edited by the Conservative Party to make it appear he was “lost for words” when asked about the party’s Brexit policy. In this case, the editors simply cut out the answer he actually gave.

Both examples show that even low-tech manipulations can be highly effective. Under the new rules, these videos would still be allowed.

Third-party Fact Checking

That being said, all videos posted on Facebook are still subject to Facebook’s fact checking system. Facebook can attach a link to a third-party fact checking site, pointing out that a video is potentially misleading. Independent fact-checkers then review the content.

If the video is factually incorrect, it will appear lower in people’s news feeds and will be labelled as false. People who see the content, try to share it, or have already done so, will see warnings telling them that the content is false. Facebook will also reject a false video if it is being run as an advertisement.

Of course, not all social media platforms have the same approach. Therefore, in some cases, it is probably wiser to label videos as false rather than removing them altogether. Otherwise, the video would simply appear on another platform or elsewhere on the internet without any warning signs.

Spot and Halt Digital Manipulation

Unfortunately, Facebook’s approach is only a partial solution to part of the problem. Trying to determine whether a video is fake, malicious, or otherwise misleading could be biting off more than the company can chew.

AI technology is rapidly evolving and gaining momentum. Just earlier this week, Snapchat quietly acquired AI Factory, the Ukraine-based company behind its new animated-selfie video feature, Cameos. TikTok has developed a similar feature. Integrating this type of technology into popular apps will likely lead to even more deepfakes.

The result will be an arms race between the creators, good and bad, and those who try to detect deepfakes and find new ways to authenticate content. Back in September, Facebook partnered with Microsoft and several academics to create better open-source tools for detecting deepfakes as part of their “deepfake challenge”. No doubt the upcoming 2020 election will put all who spot and halt digital manipulations to the test.

IT communication specialist
Sandra has many years of experience in the IT and tech sector as a communication specialist. She's also been co-director of a company specializing in IT, editorial services and communications project management. For VPNoverview.com she follows relevant cybercrime and online privacy developments.