Deepfakes are a type of synthetic media, either video or audio, in which the likeness of a real person is mapped onto someone else. What’s worrying is that deepfakes are becoming increasingly easy to create. This can have serious consequences for ordinary people, since deepfakes can be used to blackmail unsuspecting victims. They’re also used to spread fake news through misleading videos of well-known people.
You can protect yourself against deepfakes. These are the steps you can take:
- Get educated about deepfakes and how to spot them.
- Avoid posting pictures of yourself on public social media accounts.
- Secure your devices so that images of you, along with any other form of data, aren’t stolen and used for nefarious purposes.
- Explain the risks of deepfakes to your family and friends if you don’t want them posting pictures of you.
If your likeness is used in a deepfake, report it to the social media platform it was shared on and contact local authorities. Depending on where you live, you can also talk to a legal expert.
Read our full article to find out more about deepfakes, and how to protect yourself against them.
In the second Terminator film, the T-1000 robot was able to morph its appearance to look like anyone it wanted to. Deepfakes come very close to resembling a T-1000 robot. They’re misleading media that can “morph” their appearance to look and sound like anyone, alive or dead. The future is now, and that’s not always a good thing.
What is a Deepfake?
A deepfake is a fake piece of media, usually video, that looks very real. It’s the application of AI technology and machine learning to manipulate video and voice. An example is this deepfake of Mark Zuckerberg of Facebook. The video, posted to Instagram, seemingly showed “fake Mark” saying the ominous words:
“Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures… I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”
But deepfakes are relatively complicated to make. Machine learning requires very large datasets to train neural networks. To create a deepfake, the training data typically contains many thousands of images of the two people involved, which are morphed and merged using specialist software. Voice is then overlaid and the lips are synced.
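The core trick behind most face-swap deepfakes is an autoencoder with one shared encoder and a separate decoder per person: encode person A’s face, then decode it with person B’s decoder. Here is a rough, untrained sketch of that data flow, using only NumPy, random weights, and flattened toy “faces” as vectors. Every name and dimension below is illustrative, not taken from any real deepfake tool:

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # a toy "image" flattened to a vector
LATENT_DIM = 128     # compressed, identity-agnostic representation

# One shared encoder learns pose/expression features common to both people.
W_enc = rng.normal(0, 0.01, (LATENT_DIM, FACE_DIM))

# One decoder per identity learns to reconstruct that person's face.
decoders = {
    "person_a": rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM)),
    "person_b": rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM)),
}

def encode(face: np.ndarray) -> np.ndarray:
    """Compress a face into the shared latent space."""
    return np.tanh(W_enc @ face)

def decode(latent: np.ndarray, identity: str) -> np.ndarray:
    """Reconstruct a face as the chosen identity."""
    return decoders[identity] @ latent

def swap(face: np.ndarray, target_identity: str) -> np.ndarray:
    """The deepfake step: encode one person, decode as the other."""
    return decode(encode(face), target_identity)

# A frame of person A, rendered with person B's decoder:
frame_a = rng.normal(size=FACE_DIM)
fake_frame = swap(frame_a, "person_b")
print(fake_frame.shape)  # (4096,)
```

In a real pipeline, these weights would be trained on the thousands of aligned face crops mentioned above, and each swapped frame would then be blended back into the original video before the audio is synced.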
Deepfakes featuring celebrities may be funny to some, but the technology behind them is truly scary. It can even be dangerous, because short clips of celebrities saying outrageous things are only the tip of the iceberg.
How Deepfakes are Used
There are a lot of deepfakes out there. In June 2020, Facebook alone gathered 100,000 of them to teach its algorithm how to spot deepfake videos. This increase in the use of deepfake technology demonstrates that it’s becoming more and more accessible. Increased accessibility equates to more novel uses, both for good and bad. Here are a few of the latest uses of deepfakes, and how they’re harming people.
In an era where “fake news” seems to contribute to much of the world’s woes, deepfakes have taken on the role of a propaganda tool.
This deepfake video of Barack Obama shows how deepfakes can be used to manipulate the truth and release information, en masse. Deepfakes offer a very powerful mechanism to those who would attempt to manipulate people’s voting habits.
People were really worried about the effect that deepfakes could have on the 2020 election. But as this analysis from last autumn explains, deepfakes weren’t that much of a problem in that particular case. The main reason is that deepfakes are still difficult to create, at least to the point where they’re hard to distinguish from reality. But in the future, their availability for mass use may increase. As such, they might truly threaten future elections.
Deepfakes are the perfect vehicle to take online criminality to the next level. Cybercriminals can use deepfakes to blackmail people into doing their bidding. This can happen in multiple ways, for example in phishing attacks and sextortion.
As deepfake technology becomes increasingly accessible, it’s likely to be used for social engineering: a tactic behind many cybercrimes that manipulates human behavior by toying with our trust, creating a sense of urgency, exploiting our sense of shame, and so on.
Sextortion is a prime example of how deepfakes can do significant real-life damage. Sextortion is a scam in which cybercriminals blackmail ordinary people into sending money under the threat of releasing compromising videos of them. The scam can take place without an actual video ever existing, but deepfakes give criminals a way to fabricate genuinely compromising material.
Even if the victim knows the video is not real, they may feel they have no choice but to pay, as the videos can be extremely realistic at times. This is not a small problem either. In January 2021, Avast researched, identified and blocked over 500,000 cases of sextortion against their clients, worldwide.
Phishing and revenge porn
And that’s only part of a bigger problem. The same technology used for sextortion can also be used for phishing. In phishing campaigns, cybercriminals pretend to be someone else in order to trick victims into taking action. They can call you and use your boss’s voice, synthesized with deepfake technology, to ask for company details.
This isn’t a hypothetical example. A couple of years ago, a British CEO was tricked into transferring $240,000 to a fraudster. The CEO believed he was talking to the head of the parent company during a phone call, who asked him to urgently transfer the money. The CEO is believed to have been tricked by a “deepfake” voice.
Deepfake technology is also used to create revenge porn. It has a particularly bad impact on women, and if you want to find out more about this use specifically, you can read this editorial by Forbes.
Creating Deepfakes with Deepfake Apps
Zao App was the first software to make deepfake creation widely accessible to people. Its use has since been restricted in most countries, but people can still access it from some parts of the world, like India.
Zao isn’t the only app that can help you create deepfake videos. A lot of other apps like that have since been released, including:
- Deepfakesweb: a paid browser app that lets you merge images and videos.
- DeepFaceLab: a more complex, open-source program. Its main purpose is to help students and researchers better understand deepfake production, but people can use it to create their own videos. It’s harder to use than Deepfakesweb, however, and it requires a much larger dataset of media to create something convincing.
- Wombo: a lip-syncing app which lets you stitch faces to music videos. If you want to see it in action, check this clip with Elon Musk singing a probably unexpected tune:
As you can see, most widely available deepfake apps produce videos that aren’t especially realistic. But that can change in the future, so it’s important to learn how to protect yourself.
How to Protect Yourself Against Deepfakes
In an attempt to counter the spread of deepfakes, Google released a repository of deepfake data. This database will be used to help build the tools needed to detect fake videos so they can be removed. Facebook is behind a similar initiative, organizing a competition to create deepfake detection software. Microsoft has developed a fully fledged deepfake detection tool.
But are these efforts enough to keep you safe from the dangers of deepfakes? They might be, in the future. For now, you still need to take precautions if you don’t want to suffer because of deepfakes.
Sharing images publicly: a bad idea?
Sharing some images online isn’t a tragedy. However, if you take a lot of pictures of yourself and make them publicly available, you’re an easier target for cybercriminals employing deepfake technology.
In the settings of most social media platforms, you can choose to make your account private. Encourage family members and close friends to respect the same guidelines. Remember that the most realistic deepfake videos can only be created with a large data set of someone’s pictures. If you can at least limit the pictures of you that are widely available, you’re less likely to fall prey to cybercrime involving deepfakes.
Enhanced device security
Besides being conscious of the images you post, you can also make sure your devices are secured, so your data (including images and videos) never gets leaked. A VPN (Virtual Private Network) helps here by encrypting your connection and masking your IP address. Anti-malware software can also help, especially on mobile devices.
Contacting the authorities
Lastly, if you are ever a victim of cybercrime involving deepfakes, report the media featuring your likeness to the social platform it was shared on. You should also contact local authorities and file a formal complaint. Appearing in deepfake content can be considered defamation, so you can also contact an attorney to explore your legal options.
A Deepfake Futurescape
While deepfakes can be used for lighthearted fun, the technology is also used by cybercriminals to do serious harm. Well-known people are the easiest targets for deepfake creation, especially when the goal is to spread fake news, but the dangers of deepfakes can affect anyone. The most important thing when it comes to online safety is to be critical of everything you see online. Now that you know about the existence of deepfakes, you might think twice before you believe a video of a government official making an outrageous claim.
Deepfakes are becoming more readily available and easier to spread. Do you still have a question about this new way of creating “fake media”? Check this FAQ section for our answers to the most frequently asked questions about deepfakes.
A deepfake video is a piece of media created with deepfake technology that incorporates video and audio elements. Deepfake technology uses AI and machine learning algorithms to modify existing videos and add new faces or voices over the original file. Most popular deepfake videos feature a celebrity saying or doing something ridiculous or outrageous. You can find out more in our article about deepfakes.
We strongly advise against using deepfakes for nefarious purposes. Creating a deepfake video and sharing it publicly could be considered libel, and even extortion if you make contact with the person featured in the video.
That being said, if you just want to have some fun with your friends – for example by deepfaking your face into a music video – you can use online apps. Deepfakesweb allows you to create deepfake videos, but isn’t free. Videos can cost anywhere from $5 to $75 to create, depending on the length of the deepfake. Other options are Wombo and DeepFaceLab.
Identifying the first deepfake video is hard. That’s because deepfakes didn’t appear all at once, so you can’t draw a clear line between manual video editing and AI-powered editing. In reality, the two forms of media manipulation gradually mingled, until AI-powered video manipulation became “autonomous,” so to speak.
However, the term “deepfake” has a clear origin. It was first used by a Reddit user with the same name back in 2017, so it’s safe to assume the first “true” deepfakes aren’t much older.