How to spot a deepfake ahead of the US elections, and how digital humans differ

What is a deepfake, what are the risks, how do you spot one, and what should businesses keep in mind? Our CTO recently answered all of these questions on NBC and FOX.

Published August 27, 2024 by UneeQ Staff

Think you can spot a real-life person from an AI-generated one? You may be surprised.

A recent study found that 40% of people were duped by an AI-generated person, believing it to be a real one.

AI is getting more sophisticated at creating believable (but fictitious) people. It’s also getting better at taking a source image of a person and animating it into a realistic-looking video of that person doing or saying things they never did or said.

Most importantly, generative AI will only get better at these reality-distorting tasks, meaning the problems associated with deepfakes spreading disinformation will only grow.

What are deepfakes?

Deepfakes are a form of synthetic media that uses deep learning to quickly and convincingly recreate a person saying or doing something they never did. The output is usually a video, image, or audio clip manipulated to depict something that never happened.

They can show up on social media platforms, in group chats, on websites and message boards, on radio and TV, and even in phone calls.

Deepfakes cause confusion because people have a difficult time telling whether they are real. Deepfake technologies make it possible to put words in someone’s mouth, making it hard to separate fact from fiction.

The risk of deepfakes

UneeQ’s CTO, Tyler Merritt, was recently invited by NBC and FOX to discuss the risk of deepfake imagery being used online to deceive voters ahead of the 2024 US election.

Tyler explains how these deepfake technologies work, as well as the difference between generative AI deepfakes and CGI-created digital humans, in the video below.

How to spot a deepfake

While social platforms determine how to police the use of deepfakes, it’s important for people to learn how to protect themselves from malicious disinformation and deception. In particular, people can look out for:

  • Backgrounds: A vague, blurred background, smooth surfaces, or lines that don’t match up are immediate red flags that an image is AI-generated.
  • Movement: Deepfakes are most convincing when the subject faces the camera directly. Once a person turns to the side and starts to move, glitches may appear.
  • Text: Watch out for misspelled words on ads or in the video.
  • Voice: Listen for natural disfluencies. Even the most well-spoken people have “ums” and “ahs,” pauses, and mistakes when they speak.
  • Flatness: If the voice sounds extraordinarily smooth (and often a bit flat), it could be manufactured. In deepfakes, “b” and “v” sounds can be poorly reproduced.
  • Chins: Yep, you heard right. The lower half of the face is the No. 1 giveaway in manufactured videos. It’s subtle, but check whether the chin or neck moves unnaturally or in an exaggerated way.
  • Check for other coverage: If you see something about a candidate, check to see if news outlets are reporting it or if other sources are reporting the same thing. Bad actors using deepfakes would struggle to capture the exact scene from different camera angles.

Tyler unpacks these ways people can identify AI-generated voice, video, and avatars in another recent interview with NBC.

The difference between generative AI and CGI avatars

In his interview with FOX, Tyler explains why UneeQ creates its avatars using CGI (similar to the video game and movie industries) instead of AI-generated avatars.

There are many reasons, including avoiding the uncanny valley, being prepared for immersive 3D environments, and giving greater control to the enterprise brands we call our clients.

But perhaps the most pressing point is around brand trust.

UneeQ’s commitment to AI ethics involves never misleading users into thinking they're speaking to a real human when in fact they’re interacting with AI.

It’s not only an ethical choice; there are commercial advantages to an interface that is clearly not human, including people being more open to disclosing things they might be too embarrassed or fearful to tell a real person.

Users need to know when they’re speaking to an AI and when they aren’t. CGI-made digital humans are realistic but not photorealistic, so users never feel misled. Instead, they’re designed to build trusted interactions between people and brands, with the digital human there to support the user in some way.

That could be when shoppers need personalized product information to make a good purchasing decision, or in healthcare scenarios when people need to disclose sensitive information about themselves to get treatment.

So it’s clear that avatars indistinguishable from real people are not the best solution for enterprise brands that want to inspire trusting, positive relationships with their users.

For more information, check out this blog post from our CEO on some of the other considerations when choosing between deepfake avatars for businesses and CGI digital humans.