What is a deepfake, what are the risks, how do you spot one, and what should businesses keep in mind? Our CTO answered all of these questions recently on NBC and FOX.

Think you can tell a real person from an AI-generated one? You may be surprised.
A recent study found that 40% of people were duped by an AI-generated person, believing it to be the real thing.
AI is getting more sophisticated at creating believable (but fictitious) people. It’s also getting better at taking a source image of a person and animating it into a realistic-looking video of that person doing or saying things they never did or said.
Most importantly, generative AI will only get better at these reality-distorting tasks, meaning the problems associated with deepfakes spreading disinformation will only grow.
Deepfakes are a form of synthetic media that uses deep learning to quickly and accurately create a lifelike recreation of a person saying or doing something they didn’t do. The output is usually a video, image, or audio clip manipulated to depict something that never actually happened.
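To make the mechanics concrete, here is a minimal, hypothetical sketch (in PyTorch) of the shared-encoder, per-identity-decoder autoencoder behind classic face-swap deepfakes. The architecture, layer sizes, and training step are illustrative assumptions, not UneeQ’s or any production system:

```python
# Illustrative sketch only: one shared encoder learns pose and expression,
# while each identity gets its own decoder. Real systems add face alignment,
# adversarial losses, and far larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training: each decoder learns to reconstruct its own person's face crops.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch for person A's face crops
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(recon_a, faces_a)

# The "swap": encode person B's expression, decode with person A's decoder,
# producing person A's face performing person B's expression.
faces_b = torch.rand(8, 3, 64, 64)
fake = decoder_a(encoder(faces_b))
```

Because the encoder is shared between identities, it is forced to capture only what the faces have in common (pose, lighting, expression), which is exactly what lets one person’s decoder “perform” another person’s movements.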
They can potentially show up on social media platforms, in group communications, on websites and message boards, on radio and TV, and even in phone calls.
Deepfakes cause confusion because people have a difficult time knowing whether what they’re seeing is real. Deepfake technologies make it possible to put words in someone else’s mouth, making it hard to tell fact from fiction.
UneeQ’s CTO, Tyler Merritt, was recently invited by NBC and FOX to discuss the risk of deepfake imagery being used online to deceive voters ahead of the 2024 US election.
Tyler explains how these deepfake technologies work, as well as the difference between generative AI deepfakes and CGI-created digital humans, in the video below.
While social platforms determine how to police the use of deepfakes, it’s important for people to learn how to protect themselves from malicious disinformation and deception, starting with the telltale signs of AI-generated content.
Tyler unpacks the ways people can identify AI-generated voice, video, and avatars in another recent interview with NBC.
In his interview with FOX, Tyler explains why UneeQ creates its avatars using CGI (similar to the video game and movie industries) instead of AI-generated avatars.
There are many reasons, including avoiding the Uncanny Valley, being prepared for immersive 3D environments, and allowing greater control for the enterprise brands we call our clients.
But perhaps the most pressing point is around brand trust.
UneeQ’s commitment to AI ethics involves never misleading users into thinking they're speaking to a real human when in fact they’re interacting with AI.
It’s not only an ethical choice; there are also commercial advantages to an interface that is clearly not human, including people being more open to disclosing things they might be too embarrassed or fearful to tell a real person.
Users need to know when they’re speaking to an AI and when they aren’t. CGI-made digital humans are realistic but not photorealistic, so users never feel misled. Instead, they’re designed to build trusted interactions between people and brands, where a digital human is used to help support the user in some way.
That could be when shoppers need personalized product information to make a good purchasing decision, or in healthcare scenarios when people need to disclose sensitive information about themselves to get treatment.
So it’s clear that avatars that are indistinguishable from real people are not the best solution for enterprise brands that want to inspire trusting, positive relationships with their users.
For more information, check out this blog post from our CEO on some of the other considerations when choosing between deepfake avatars and CGI digital humans for business.