What Is a Deepfake And How To Spot One

Introduction

This article seeks to answer two key questions: first, what are Deepfakes, which is straightforward to answer; and second, how do you spot one, which is by far the trickier proposition. We’ll also look at the history of Deepfakes, the technology behind their creation and some of their uses.

What is a Deepfake?

Simply speaking, a Deepfake is fake media content, typically a still image, video or audio recording of one or more people, that a computer system creates using artificial intelligence (AI) techniques.

The most common use is to generate videos. For example, a computer can superimpose one person’s likeness and voice patterns onto footage of another person, performing a face swap. As a result, the computer creates a Deepfake video of a person doing and saying things they have never done or said.

So what is a Deepfake? The term is simply a combination of “deep learning” and “fake”, hence “Deepfake”. The distinguishing feature of Deepfake media is that, at first sight, it appears convincing even though it portrays fictitious events.

What is Deepfake Technology?

Deepfake technology is the use of machine learning techniques to create visual and audio content with the intention of deceiving the audience. The process uses deep learning, a machine learning technique in which an artificial neural network learns how to recreate the movements and sounds of an individual person by analysing real videos of that person.

The most typical method of creating Deepfake content uses two algorithms working in combination for a process of continuous refinement.

  • The first algorithm, known as the generator, produces fake content;
  • The second algorithm, known as the discriminator, then assesses the content and identifies all the data points that indicate the content is fake;
  • This information feeds back from the discriminator algorithm to the generator algorithm;
  • The generator algorithm’s machine learning-based processing then refines the content to resolve the tell-tale signs spotted by the discriminator algorithm.

This process continues until the discriminator algorithm can no longer determine that the content is fake. This pairing of two algorithms working in a combative partnership to achieve the desired result is known as a generative adversarial network (GAN).
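To make the generator and discriminator roles described above more concrete, the following is a minimal sketch of a generative adversarial network written in PyTorch. It trains on toy random data standing in for real face imagery; the network sizes, learning rates and training length are illustrative assumptions rather than details of any actual Deepfake tool.

# Minimal generative adversarial network (GAN) sketch in PyTorch.
# Toy 1-D data stands in for real face imagery; all sizes and
# hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32

# Generator: turns random noise into fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real (1) rather than fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, data_dim)      # placeholder for a batch of real training data
    fake = generator(torch.randn(64, latent_dim))

    # 1. The discriminator learns to tell real samples from fakes.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. The generator refines its output until the discriminator labels it "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

In practice, a real face-swap pipeline uses much larger convolutional networks and trains on many images of the target person, but the feedback loop between the two algorithms is the one outlined in the list above.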

The History of Deepfakes

Manipulating photographs has been going on for as long as photography has been around. Soon after the development of moving images, techniques to alter those pictures followed.

The movie industry has been at the forefront of manipulating images for artistic purposes, whether it’s superimposing computer-generated images using green screen technology or using a computer to automatically mask out an actor’s tattoos rather than relying on makeup to cover them over.

The first recorded use of Deepfake technology was in 1997, when a computer-generated image of a person’s face was moved in response to music to make it appear as if that person were singing.

The late 2010s saw a series of advances in which researchers created convincing videos, including one system that allowed a computer-generated face to recreate a person’s facial expressions in real time. This period also saw Deepfake techniques enter general use: anyone with a computer and access to the software could now create a Deepfake. Deepfake software is now widely available, including apps for mobile devices.

Channel 4 highlighted the capabilities of the technology on Christmas Day 2020 when they broadcast their traditional alternative Christmas speech as a Deepfake video of the Queen. The programme demonstrated the ability to produce four minutes of broadcast-quality video, and ended dramatically by revealing the actor delivering the speech.

Why are Deepfakes Created?

Movie Making

The movie industry is always looking for methods of improving filmmaking. Deepfake technology offers the ability to include the likeness of a deceased actor in a film for continuity purposes, such as the appearances of Carrie Fisher and Peter Cushing in the Star Wars franchise films produced after their passing.

The technology also has the potential to correct acting mistakes without having to reshoot an entire scene, potentially offering huge production cost savings.

In theory, it can also replace one actor with an entirely different actor if circumstances warrant a post-production change.

There is also the potential to improve dubbed films by subtly modifying the actor’s facial expressions to match the dubbed soundtrack and removing the sometimes-distracting mismatch between their mouth movements and the sounds.

Deception

Another widespread purpose is to create videos of political figures making statements that could discredit them, whether for parody or with malicious intent. Typically, the fabricated video shows the person making a controversial or offensive statement that undermines their character or gives their opponents grounds to question their fitness for office. Attempts at such political sabotage have included fake videos of Barack Obama and Donald Trump, although none has stood up to scrutiny.

The technology can also create fictitious characters using AI-generated faces that appear to be real people. The purpose is to make political statements, deliver propaganda, or spread disinformation. These non-existent people are sock puppets that anonymous individuals or organisations use to convey controversial views or make personal attacks.

Criminality

From a security perspective, Deepfakes have the potential for use in phishing campaigns where hackers attempt to persuade a potential victim to click on a dangerous link or perform an action such as transferring money to the attacker’s account.

The ability to send a video message that appears to be from someone the victim knows has the potential to increase the success rate of such an attack. For example, an employee working in an accounts department receiving a video call that appears to be from a senior director instructing them to transfer funds would be significantly more compelling than an email or text message.

Adult-Orientated Content

The most common purpose is, sadly, content of an adult nature: creating videos of a specific person engaged in explicit acts. A survey of Deepfake videos undertaken at the end of 2019 found that over 96% were created for this purpose, and almost all involved using a female celebrity’s image to generate the Deepfake.

Creating Doubt

A final observed purpose is blackmail, or rather countering blackmail. Deepfake technology is not yet at the stage where creating videos to blackmail a victim is a credible proposition, because forensic analysis will quickly identify such a video as fake. However, where an individual is being blackmailed with genuine video footage, creating multiple Deepfakes on behalf of the target can cast doubt on the believability of the real video.

Do Deepfakes have Benefits?

Away from the movie industry, Deepfake technology does have useful and practical applications.

For example, a patient who is permanently unable to speak following a medical event may require a voice generation device to communicate. While these devices were initially robotic sounding, as demonstrated by the late Stephen Hawking, modern versions sound very lifelike.

Now, Deepfake audio technology can allow these devices to replicate the user’s own voice using available recordings. The ability to retain their voice, with its unique accent and inflexions, offers significant long-term wellbeing benefits as they recover.

How Can You Spot a Deepfake?

When Deepfakes first appeared, their inferior quality made visual detection simple. Looking for lip-syncing issues, patchy skin tones, blurring around moving features or unnatural movements would spot the low-quality Deepfakes. However, the technology has now reached the point where it can generate convincing videos that look genuine to the viewer.

Looking at the believability of the content and tracing back the source of the video can help. If the video shows someone acting out of character or espousing views that run counter to their usual public persona, then you should be cautious. If the video is not from a credible and trustworthy source, then a question mark should hang over its legitimacy.

The problem is that people tend to believe anything that reinforces their personal views. Thus, even when experts expose the deception behind a Deepfake video, some people will still think it is authentic and distrust the evidence that it is fake. This is a societal problem that reaches far beyond synthetic media into the broader issue of fake news.

Technological solutions for detecting Deepfakes primarily employ the same deep learning algorithms that created them in the first place.

  • One approach looks for the subtle inconsistencies and artefacts that the generation process leaves in video and audio data, providing a means to flag the content as a Deepfake.
  • Other techniques look for inconsistencies in the fine detail of the fake images, such as reflections or blink patterns, to spot evidence of the computer-generation processes behind the imagery.

The problem is that soon after a technique for consistently and reliably spotting Deepfakes is found, updates to the generation software resolve the tell-tale signs that gave them away.
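As a rough illustration of the detection approach described above, the sketch below frames a Deepfake detector as a small convolutional classifier in PyTorch that scores a face crop as real or fake. The architecture, the 64x64 input size and the use of random data are illustrative assumptions; production detectors are far larger and are trained on labelled datasets of genuine and manipulated footage.

# Minimal sketch of a learned Deepfake detector: a small convolutional
# classifier that outputs the probability that a face crop is real.
# Architecture, input size and data are illustrative assumptions only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),   # assumes 3x64x64 input crops
)

def score_frame(face_crop: torch.Tensor) -> float:
    """Return the detector's estimated probability that a 3x64x64 face crop is real."""
    with torch.no_grad():
        return detector(face_crop.unsqueeze(0)).item()

# Example call with a random tensor standing in for a cropped video frame.
print(score_frame(torch.rand(3, 64, 64)))

Such a classifier only becomes useful after training on examples of both genuine and generated faces, which is why detection tends to lag behind each new generation technique.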

Summary

Producing fake videos that can deceive the average viewer is now relatively straightforward with the technology we all have at our disposal. But, without controls, we may soon find ourselves bombarded with fake news through social media channels that appear genuine and credible. Unfortunately, it’s only a matter of time before such phoney information sways public opinion, influences elections, or manipulates stock markets.

Producing Deepfakes is not a crime unless there is an intent to use them for malicious purposes or they depict an individual in a way that constitutes harassment.

The cyber security sector is already seeing the use of Deepfake videos to coerce individuals to perform fraudulent acts by deceiving them into believing they are dealing with a known contact. Deepfake technology can play a significant role in social engineering techniques.

Conclusion

Spotting Deepfakes is not a simple task, and it is not one we can manage without the help of the technology that creates them in the first place. Unfortunately, there’s a race between the means of creation and the means of detection, each trying to keep one step ahead of the other.

The critical advice is don’t believe everything you see and hear, especially if it’s not from a trustworthy source that you can independently check.

About the Author: Stephen Mash is a contributing writer for HP Tech Takes. Stephen is a UK-based freelance technology writer with a background in cybersecurity and risk management.
