What is a deepfake, and why is it dangerous?

Anton P. | September 14, 2021

Deepfake videos and audio recordings manipulate reality in unsettling ways. The technology is nothing new, but in the past it was mainly available to highly skilled professionals. Now, even unskilled netizens can play around with deepfakes, generating deceptive content in minutes. Deepfakes are incredibly realistic but entirely fictional images, audio recordings, or videos. Have you seen Mark Zuckerberg’s frightening confession about controlling billions of people’s stolen data? If so, you have witnessed the alarming miracle of deepfakes.

What are deepfakes?

A deepfake is a simulation of reality. It is a technological sensation combining artificial intelligence and deep learning. The latter refers to the process of teaching systems to solve problems by training them on huge data sets. With deepfakes, the AI quickly learns what the source face looks like from different angles. Then, it applies that face to a target video, generating a forgery.

Like most innovations, deepfakes are a gamble. On the one hand, the technique could become a go-to tool for the film industry, enabling effects that were previously impossible. Even the experts who have mastered AI-powered deepfake technology emphasize how rapidly it has grown.

Now, apps like FakeApp, DeepFaceLab, and Zao are making deepfakes available to the general public. It takes very little to generate a deepfake and cast yourself as the lead in a blockbuster movie. And, to a degree, it is all harmless fun, a technology with genuinely outstanding possibilities.

However, the menace is that this AI-based magic can conjure synthetic media that ruins reputations. Deepfake videos of influential politicians could also cause chaos. The scale of disinformation and fake news that deepfakes could fuel is difficult to comprehend.

How does deepfake technology work?

A deepfake is a piece of deceptive, face-swapped media. The technology behind it can manipulate facial expressions and voices, and synthesize both speech and faces. Generating a fake video follows a sequence of steps:

  1. Specialists run dozens of face shots (or other footage) of two people through an AI algorithm known as an encoder, which learns a compressed representation of both faces.
  2. Training this neural network takes time. While new tools have expedited the process, producing a highly realistic composite still requires many iterations.
  3. Another vital component of deepfake technology is the Generative Adversarial Network (GAN). A GAN consists of two machine learning models: the first generates forgeries from the provided samples, while the second tries to determine whether each sample is fake.
  4. Essentially, specialists put these two AI programs to work against each other. When the detector model can no longer recognize signs of forgery, the deepfake is considered believable. This rivalry allows enthusiasts and professionals to construct deepfakes that are close to indistinguishable from reality.
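The adversarial rivalry in steps 3 and 4 can be sketched in miniature. The toy below is not a real deepfake pipeline: it is a hypothetical one-dimensional GAN where the "real data" is just a stream of numbers near 4.0, the generator learns only an offset, and the discriminator is a one-parameter logistic classifier. All names, learning rates, and distributions are illustrative choices, but the two update rules mirror the generator-versus-detector loop described above.

```python
# Toy 1-D sketch of adversarial (GAN-style) training.
# Hypothetical setup: "real data" ~ N(4.0, 0.5); the generator
# learns an offset b; the discriminator is sigmoid(a*x + c).
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5  # the "real" data distribution


def sigmoid(x: float) -> float:
    # Clamp to avoid math.exp overflow for extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))


def train_gan(steps: int = 20000, d_lr: float = 0.1, g_lr: float = 0.01):
    a, c = 0.0, 0.0  # discriminator: D(x) = sigmoid(a*x + c) = P(x is real)
    b = 0.0          # generator: x_fake = b + 0.5*z, so mean of fakes is b
    history = []
    for _ in range(steps):
        x_real = random.gauss(REAL_MEAN, REAL_STD)
        x_fake = b + 0.5 * random.gauss(0.0, 1.0)

        # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)),
        # i.e. get better at telling real samples from forgeries.
        d_real = sigmoid(a * x_real + c)
        d_fake = sigmoid(a * x_fake + c)
        a += d_lr * ((1 - d_real) * x_real - d_fake * x_fake)
        c += d_lr * ((1 - d_real) - d_fake)

        # Generator: gradient ascent on log D(fake), i.e. nudge b so the
        # forgeries look more "real" to the current discriminator.
        d_fake = sigmoid(a * x_fake + c)
        b += g_lr * (1 - d_fake) * a

        history.append(b)
    return b, history


final_b, history = train_gan()
# Over training, the generator's output mean drifts toward REAL_MEAN:
# the forger improves precisely because the detector keeps improving.
```

In a real deepfake system the generator is a deep network producing images rather than a single offset, but the dynamic is the same: training stops being productive once the discriminator can no longer beat chance at spotting the forgeries.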

Examples of deepfakes

Some deepfakes floating online are not the works of art you expect. Sometimes, it is easy to detect a forgery. Here are some of the notorious deepfake examples:

  • Tom Cruise on TikTok. In March 2021, a series of deepfaked TikTok videos featuring Tom Cruise emerged. However, the person responsible emphasized just how laborious the process was. The videos were incredibly lifelike, but each of them took weeks of work. Additionally, the creator worked with a Tom Cruise impersonator, Miles Fisher, to achieve the most convincing result.
  • A deepfaked newsreader. A Korean channel, MBN, gave its viewers a surprising treat. Instead of news anchor Kim Joo-Ha, they saw a deepfake of her presenting the day’s stories. The video is highly convincing and difficult to identify as a fake.
  • Donald Trump in Better Call Trump: Money Laundering 101. While not the most polished, this is a fun example of deepfakes. It features Donald Trump as Saul Goodman in the series Better Call Saul. Trump’s son-in-law also appears, replacing Jesse Pinkman.
  • Barack Obama’s public service announcement. In the deepfake video, Obama warns citizens about relying on unknown news sources. According to him, people should stay vigilant. By the end, comedian Jordan Peele appears in the video, revealing that he is the one impersonating Obama. Creators at BuzzFeed used After Effects CC and FakeApp to generate this little announcement.
  • Nancy Pelosi’s speeches. Politicians are common targets for deepfakes. Nancy Pelosi’s speeches were not exactly deepfaked. However, the audio was distorted to make her sound intoxicated and incoherent. The bizarre speech pattern quickly gained traction on social platforms. This situation shows how effortless it might be to change the narrative and attitude towards a specific individual.

What dangers do deepfakes pose?

Digital file manipulation is not necessarily groundbreaking. After all, professionals and enthusiasts alike have consistently shown incredible Photoshop skills. However, deepfakes are forgeries that can take traditional manipulation to the next level.

As presented in a study in Crime Science, AI-driven crime will be a significant threat to society over the next 15 years. Let’s discuss how deepfakes could threaten democracy, ruin reputations, and enable criminal behavior.

  • Deepfaked political figures. Discrediting a politician or “putting words into their mouths” could result in highly controversial videos. For instance, a fake video of a president or minister could even trigger international conflicts. Luckily, at the moment, debunking deepfaked videos is relatively easy. However, there is no telling what the future holds.
  • Ruining reputations and causing emotional distress. One of the earliest uses of deepfake technology was integrating celebrities’ faces into pornographic content. In Elle, Noelle Martin shared her traumatizing experience of discovering deepfaked pornographic videos of herself.
  • Spreading fake news. Imagine visiting a phony website featuring a video of a respectable expert. You would likely be more inclined to believe the message or promises it makes, never suspecting that the video was a deepfake. Thus, the importance of reliable media channels will only grow.
  • Anyone can make deepfakes. The technology behind fabricated content is incredibly accessible. Many apps allow users to generate deepfakes without any technical skills. While such content is not highly convincing yet, programs facilitating deepfake creation will only improve.

Signs that certain content is a deepfake

Even the most believable deepfakes can show telltale signs of their deceptive nature. AI itself can help expose them: many researchers focus on deepfake detection and are exploring ways to regulate the technology.

However, users themselves should know how to tell apart authentic content from a forgery. Here are some hints suggesting that a video you have watched is fake:

  • Bizarre eye movement.
  • Blurred facial features.
  • Fuzzy or more saturated faces.
  • Smoother skin.
  • Unnatural body movements.
  • Out-of-place colors.
  • Lack of emotions.
  • Teeth that look unnatural.

When it comes to fake audio recordings, it can be difficult to separate real ones from manufactured ones. Fortunately, while human hearing has its limits, computers are much more capable of recognizing deepfaked audio.

Overall, deepfakes can be a bit of fun, showcasing the tremendous capabilities of AI. Sadly, their potential can take a darker turn as the technology progresses. Going forward, specialists must consider the political and legal implications that deepfakes might have. Some online platforms already ban non-consensual deepfake content.

The worst-case scenario is that forgeries will create a zero-trust society, with people struggling to distinguish truth from falsehood. Thus, the chances are that we will have many unfortunate incidents that will undermine trust. Furthermore, deepfakes will force us to question the authenticity of all content online.

Anton P.

Former chef and the head of the Atlas VPN blog team. He’s an experienced cybersecurity expert with a background in technical content writing.

Tags:

fakeapp

© 2021 Atlas VPN. All rights reserved.