The Dangers of Deepfakes
You may have seen deepfakes in the headlines. Or maybe you've seen the deepfakes of Nicolas Cage, Nancy Pelosi, or Mark Zuckerberg that circulated over the last year or two.
But the reality of deepfake videos, images, and audio is much more dangerous than fake Hollywood celebrity videos.
96% of deepfakes are pornographic and 99% of those mapped faces are sourced from women — and not just women celebrities.
Now let’s dive in.
What is a deepfake?
A deepfake is an image, video, or audio of a person whose face or body is digitally altered so that they look or sound like someone else.
Why is it called a deepfake?
Deepfakes are a rather new technology.
The word itself is a portmanteau of “deep learning” and “fake”.
“Deep learning” refers to a particular type of artificial intelligence (AI) — a neural network — and its ability to perform analytical and physical tasks on large data sets without human intervention. In other words, the artificial intelligence is able to learn and process large amounts of data in a short amount of time, adapt to change, and do things without much of a user’s effort.
Deep learning is a subset of machine learning, which is itself a subset of AI technology that enables computer systems to learn without users having to direct it closely.
A shallowfake, by contrast, is the product of people using non-deep learning programs to manipulate another’s face and/or body. People can use simple video-editing systems, for example, to create less convincing content.
What are deepfakes used for?
“Similar to computer viruses, good deepfakes can be incredibly difficult to detect. But unlike computer viruses, good deepfakes can be created by anyone. For example, any junior high school student can create a deepfake using a 5-year-old iPhone,” says Ben Colman from Reality Defender.
Often the people who are using deepfake technology use it to:
- Spread misinformation and inspire misunderstanding, fear or disgust
- Create false narratives of people
- Create revenge porn (aka deepfake porn or deepfake nudes - which get ~95,000 searches a month on Google)
- Generate a specific public image for the subject (and sometimes craft one of themselves that contrasts with and depends on the subject’s falsified public image)
- Censure or mock the subject
- Create pornographic content (often said to be for an individual’s sexual pleasure)
Are deepfakes always malicious?
There are some arguments for positive uses of deepfakes, including accessibility, education, activism, art, and self-expression. However, the potential and real-life positives of deepfakes do not negate the dominance of deepfakes that regularly harm women and girls.
“With a quick search of the iOS or Android App Store, non-technical users can download applications that make a person appear compromised, or involved in an inappropriate situation or video. We are seeing the growth of deepfakes in revenge porn, and nation-state-level disinformation campaigns,” says Ben.
How to make a deepfake
Previously, to make a deepfake, a person had to use a facial recognition algorithm and a variational autoencoder (a VAE: an autoencoder whose encoding distribution is regularized during training so that its latent space has properties that allow it to generate new data).
The autoencoder compresses whatever images (the training data) the creator feeds the algorithm into a compact code, and a decoder reconstructs that code as a new image. The encoder is “trained” on the original subject’s face as well as on a diversity of other faces for contrast, while a separate decoder is trained for each identity.
After training on, say, Jennifer Aniston’s face alongside a variety of non-Aniston faces, the deepfake creator runs the target footage through the shared encoder and decodes it with the Aniston decoder, producing her face on someone else’s body.
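As a toy sketch of the idea (with made-up sizes and untrained random weights; real deepfake models are deep convolutional networks trained on thousands of frames), the face-swap architecture can be illustrated in numpy:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy sizes: a 64-value "face" compressed to an 8-value latent code.
FACE_DIM, LATENT_DIM = 64, 8

# One encoder is shared across both identities, so it learns features
# (pose, expression, lighting) common to any face...
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))

# ...while each identity gets its own decoder that reconstructs
# that person's specific face from the shared latent code.
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))

def encode(face: np.ndarray) -> np.ndarray:
    """Compress a face into the shared latent representation."""
    return np.tanh(W_enc @ face)

def decode(latent: np.ndarray, w_dec: np.ndarray) -> np.ndarray:
    """Reconstruct a face from a latent code with one identity's decoder."""
    return w_dec @ latent

# The swap: encode person A's image, then decode it with person B's
# decoder. After real training, the output keeps A's pose and
# expression but renders them with B's identity.
face_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(face_a), W_dec_b)
```

The key design point is the asymmetry: because the encoder is shared, the latent code captures identity-independent properties, and swapping identities is as simple as choosing which decoder to apply.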
Audio deepfakes, meanwhile, require enough background noise to disguise the otherwise easily identified falseness of the voice.
Now, there are "deepfake makers" and "deepfake apps" and other deepfake software that make creating a deepfake as simple as uploading an image or video file.
Who is making deepfakes?
Deepfake creators may be political groups, government agencies, social media users, software tech experts, visual effects artists, or the average layman.
While the most credible deepfakes require knowledge of AI systems and design, it’s also possible for ordinary people like your friends and family to make a deepfake due to this technology being released publicly via websites anyone can find.
You may have even come across Android and Apple face-swapping apps where users can upload selfies and swap faces for a seemingly innocent moment of fun. There are quite a few of these open-source tools and apps. While it might be fun to perform a face swap, take a second to think about why someone might use one beyond a bit of fun.
What's the WHY behind the technology?
Some researchers posit that deep learning tech could enable the visually impaired to navigate the world by transferring their navigation skills, using a cane for example, to a virtual world the AI builds. AI with deep learning can also enable educators and teachers to deliver lessons that engage their students through an immersive sensory experience.
“AI-Generated synthetic media can bring historical figures back to life for a more engaging and interactive classroom,” suggests Ashish Jaiman, the Director of product management at Microsoft.
However, deep learning is a continuation and progression of artificial intelligence — which inevitably comes with negative consequences.
How do you spot a deepfake?
Spotting a deepfake can be either easy or difficult, depending on how well it was made.
Ben agrees - “Anyone can create a deepfake in under 10 seconds, but detecting a deepfake can take days. While multiple detection models exist, they are only as good as the deepfakes used to train them. For example, if a deepfake detection algorithm is not trained to track a person’s eye blinking, then it will not recognize deepfakes of people whose eyes have not closed for 10 minutes. For this reason, www.RealityDefender.ai has brought multiple deepfake detection models onto a single platform.”
Qualities of a badly made deepfake include:
- bad lip-synching
- patchy skin tone
- flickering around transposed faces’ edges
- visible strands on the fringe of hair, where the AI had trouble rendering the image
- badly rendered jewelry and teeth
- inconsistent lighting, and odd reflections in glasses or the irises
- misplaced or misshapen facial features
For more convincing deepfakes, the best way to spot one is to look out for:
- If the subject’s voice matches their appearance (is a very high, tinny voice coming from a portly person?)
- Whether or not the subject’s blinking too much or too little
- If their eyebrows really match their face’s shape (Are there noticeable shadows? Do they look like extra layers?)
- If their hair sits right on their scalp
- Whether the skin seems too airbrushed (like plastic or marble) or over-wrinkled
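Some of these cues can even be checked programmatically. As a toy illustration (not any real product's code), here is how the blink-rate cue might be checked, assuming a facial-landmark detector already provides a per-frame eye-openness score between 0 (closed) and 1 (open):

```python
def count_blinks(eye_openness, threshold=0.2):
    """Count blinks in a sequence of per-frame eye-openness scores
    (1.0 = fully open, 0.0 = fully closed)."""
    blinks, closed = 0, False
    for score in eye_openness:
        if score < threshold and not closed:
            blinks += 1      # eye just closed: a new blink starts
            closed = True
        elif score >= threshold:
            closed = False   # eye reopened: the blink is over
    return blinks

# A healthy adult blinks roughly 15-20 times per minute; a subject who
# blinks far less often than that is one (weak) deepfake signal.
frames = [1.0] * 40 + [0.1] * 3 + [1.0] * 40 + [0.05] * 2 + [1.0] * 40
rate_per_minute = count_blinks(frames) / (len(frames) / 30 / 60)  # 30 fps
```

A real detector would combine many such signals; as Ben notes, a model only catches the artifacts it was trained to look for.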
“We built Reality Defender in New York City so that we can be close to the top AI researchers across startups, academia, business, and government. This helps us support and influence emerging standards and legislation to accelerate innovations in deepfake detection,” says Ben.
Is creating a deepfake a crime?
The short answer is that it depends on the country — and if in the United States, the state — and how it was used.
On May 2, 2021, South Korean police cited the creation of deepfakes as a crime after making several arrests, mostly of teenagers. One police official stated, “Creation or distribution of deepfake content is a serious crime”.
Hogan Lovells, a global law firm, reports that many countries in Europe have not enacted prohibitions against deepfakes.
Within the United States, Texas was the first state to address the legality of deepfakes with the 2019 TX SB751, which directly prohibits and criminalizes deepfake creation, especially in the context of a political election. California also took measures against deepfakes in 2019 by “prohibit[ing] the use of deepfakes in election materials by specifically forbidding the malicious production or distribution of ‘materially deceptive’ campaign materials within sixty days of an election”.
However, these were temporary measures for particular events.
As an example at the federal level, the 2021 National Defense Authorization Act (NDAA) “ask[ed] for recommendations” that could lay the foundation for federal regulation of such media. The NDAA requires national security officials to produce annual reports on deepfakes for the next five years. These reports are meant to document the potential harm from deepfakes, which may range from foreign influence campaigns to fraud. The Pentagon currently works with many of the U.S.’s renowned research institutions to combat the political and personal threats deepfakes pose.
The implications of deepfakes
Anyone with a good enough program (or access to newly created deepfake creation websites) can transpose and edit another person’s face onto someone else’s body. They can even clone a person’s voice and make them seem to say something they never said. Given how much we depend on the internet for both news and entertainment, and how actively people distort reality to create instantly gratifying content, it’s sometimes difficult to tell whether something is real or whether it uses fantasy to fabricate its own reality.
What does that mean when the wrong hands take advantage of such technology? If it’s so easy for people with little to no knowledge of AI systems and deep learning to still use these technologies, does this mean that more people will be inclined to inflict malicious acts against others?
One implication that arises from deepfakes is the circumstance where it becomes “proof” of a person’s actions. With deepfakes, an ex-boyfriend can easily put an ex-girlfriend’s image on the face of a porn actor to then spread it around in their respective social groups. The victim would face the often negative repercussions of having a taboo action exposed without ever having actually done it.
Again, 96% of all deepfakes are pornographic.
Women already face disproportionate amounts of discrimination and harassment — often with long-lasting consequences.
Deepfakes can, and often do, ruin lives.
Revenge porn is that much easier to create and pass off as real to damage someone’s reputation.
One woman describes how her Facebook images were transposed into “violently pornographic” images and shared with others who were encouraged to also share them. She highlights how the purpose of these deepfakes is to display the creator’s power over the subject in a way saying “This is what I can do to you, and I can do it however I want. You’d better act like how I want you to. There’s no room for you otherwise”.
Let’s be clear - deepfakes are a new form of violence against women.
Finally, deepfakes could corrode the trust people have not only in the subjects of deepfakes but also in content creators (news outlets, magazines, etc.), and they could create new forms of online scams.
You could argue that the threat of such identity distortion affects not just how people interact online, but how they develop their own morals, living standards, ideas of autonomy, and identities.
If anyone can make you look like you enjoy certain activities and things without your consent, do you really have significant control over yourself or your environment?
What would be the point in engaging with the world if you can’t safely say or do things without “making” others come after you and potentially ruin your life?
Should you do the same to regain at least a modicum of control? (Down that spiral many can go: doing to others what has been, or could be, done to them, and then feeling entitled to do the same again.)
Given today’s conspiracy-laden, accusatory attitudes and how easy it is to misinform and be misinformed, deepfakes are going to have an ever bigger impact on our lives, both digitally and physically.
While the technology does have potential uses, its disadvantages and dangers outweigh the benefits.
Ben Colman understands the implications of deepfake technology.
“Once a deepfake is shared and re-shared, it is incredibly difficult to remove from circulation. Reality Defender’s goals are two-fold:
- Stop deepfakes at the source by flagging the offending materials and the offenders themselves
- Stop deepfakes at the destination by integrating our solution into social networks, dating platforms, internet browsers and the devices themselves”
Reality Defender’s goals are lofty — but needed in today’s digital age. While this new technology takes hold, it’s up to citizens, and other technology, to combat the dangers it presents.