
What is a deepfake? How does it work? And how can you spot them?

A business person chats with an artificial intelligence chatbot. (Igor Omilaev / Unsplash)

A doctored version of a speech by Democratic presidential nominee Kamala Harris was re-posted to X, formerly Twitter, by owner Elon Musk late last month. The video featured fabricated remarks about Harris's qualifications to run for president and her time at the White House. Days later, Musk said the video was intended as satire, but the damage was already done — as of this posting, the altered video had more than 219,000 reposts, 930,000 likes and 100,000 bookmarks.

It's just the latest example of how good AI-generated content has become, and how damaging it can be. (Remember how Katy Perry's own mother thought her daughter attended this year's Met Gala?)

So, what is an AI deepfake? How does it work? And what can you do to spot it?

Defining a deepfake

According to the University of Cincinnati's Intellectual Property and Computer Law Journal, the term "deepfake" is a combination of "deep learning" and "fake," reflecting the hyper-realistic, digitally manipulated content it describes.

Deepfakes include face swaps, audio manipulation, facial reenactments, and lip-synching that use AI applications to merge, replace and superimpose images and video clips, creating fake media that appears authentic.

The term carries a negative connotation due to the nature of the content it produces — most commonly bullying, fake news, and what's known as revenge porn. But it also has beneficial uses, such as translating videos into other languages and hiding the identities of vulnerable populations in media.

RELATED: U.S. says Russian bot farm used AI to impersonate Americans

According to UC professor Jeffrey Shaffer, AI is not inherently bad, but the models used to train it are based on human knowledge and experience, implicating our own biases in AI's responses.

“We have to teach not only how to use AI tools, but what these tools are good at, what the cautionary tales are, what we have to be careful of, because these tools do make mistakes and it can be really bad mistakes.”

5 tips for spotting AI deepfakes

1. Maintain a basic understanding of AI, staying up to date on developing AI technologies.

Shaffer says there are two kinds of AI: supervised and unsupervised. Supervised models are trained on a large dataset of labeled examples. The AI learns to recognize the features and can categorize new examples.

Unsupervised models are given a large dataset without labels and must learn to identify and group the data on their own. This type of learning can be more dangerous for creating deepfakes, as the AI has more freedom to generate new content without human oversight.

Generative AI systems, such as ChatGPT, are trained largely without labels, but often combine supervised and unsupervised learning with human feedback and "reinforcement learning" to improve the accuracy of the model over time.
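The distinction Shaffer draws can be sketched in a few lines of code. This is a toy illustration with invented data points, not any real deepfake system: the supervised model sees human-assigned "real"/"fake" labels, while the unsupervised model must discover the groups on its own.

```python
# Toy sketch of supervised vs. unsupervised learning (invented data, pure
# Python; real systems use far larger datasets and neural networks).

def centroid(points):
    """Mean position of a group of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def dist2(a, b):
    """Squared distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Supervised: every training example carries a human-assigned label.
labeled = [((0.1, 0.2), "real"), ((0.2, 0.1), "real"),
           ((0.9, 0.8), "fake"), ((0.8, 0.9), "fake")]

def classify(point, labeled):
    """Nearest-centroid classifier: pick the label whose class mean is closest."""
    groups = {}
    for p, label in labeled:
        groups.setdefault(label, []).append(p)
    return min(groups, key=lambda lbl: dist2(point, centroid(groups[lbl])))

# Unsupervised: the same points with the labels stripped away; the model must
# discover the grouping on its own (here, a bare-bones 2-means clustering).
unlabeled = [p for p, _ in labeled]

def two_means(points, iters=10):
    """Repeatedly assign points to the nearer of two moving cluster centers."""
    c1, c2 = points[0], points[-1]
    for _ in range(iters):
        g1 = [p for p in points if dist2(p, c1) <= dist2(p, c2)]
        g2 = [p for p in points if dist2(p, c1) > dist2(p, c2)]
        c1, c2 = centroid(g1), centroid(g2)
    return g1, g2

print(classify((0.15, 0.15), labeled))  # supervised prediction from labels
g1, g2 = two_means(unlabeled)
print(len(g1), len(g2))  # groups found without any labels at all
```

The supervised model can only repeat categories a human taught it; the unsupervised one invents its own structure, which is part of why Shaffer flags it as harder to oversee.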

2. Look for inconsistencies in the video or audio

This includes lips not matching the words, inflection or timing issues, unnatural body language, or the person appearing to look left or right when they shouldn't.

3. Be skeptical of any content that seems too perfect

Real video and audio often have small imperfections that AI may not fully replicate.

RELATED: 10 reasons why AI may be overrated

Relying on AI tools to identify deepfakes may be necessary in the future. Shaffer recommends looking for any breadcrumbs or indicators that the content was AI-generated. Some tools may include these to help identify machine-generated content. 

4. Use reverse image or video search tools to see if the content has been manipulated or is from a different source

Check the source and context of the content — deepfakes are often created and spread through social media or other online platforms.
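Reverse image search works because lightly edited copies of an image still "look" alike to a computer. A common trick is a perceptual hash, which changes little when an image is recompressed or subtly tweaked. The sketch below is a minimal illustration on invented 4x4 grayscale grids, not the algorithm any particular search engine uses.

```python
# Hedged sketch of perceptual "average hashing," one way reverse-image-search
# tools match near-duplicate or manipulated copies of a picture.
# (Toy 4x4 grayscale grids stand in for real images.)

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the image's mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance suggests the same source image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 20, 200, 210],
            [15, 25, 205, 215],
            [12, 22, 202, 212],
            [14, 24, 204, 214]]

# Lightly altered copy (a small brightness shift, as after re-encoding).
altered = [[min(v + 5, 255) for v in row] for row in original]

# A completely different image.
unrelated = [[200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200]]

h = average_hash(original)
print(hamming(h, average_hash(altered)))    # small distance: near-duplicate
print(hamming(h, average_hash(unrelated)))  # large distance: different image
```

Because the hash survives light edits, a search tool can surface the original video a doctored clip was built from, revealing both the manipulation and the true source.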

5. Use AI tools to analyze the pixels and image features 

AI-generated images may have subtle differences compared to real photos. Shaffer recommends asking other AI tools, such as ChatGPT or Claude, whether a suspicious work is AI-generated.

He says to be aware that as AI technology advances, it will become increasingly difficult for humans to visually detect deepfakes.
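One pixel-level signal such tools can look at: real photos carry camera sensor noise, so neighboring pixels rarely match exactly, while generated or heavily smoothed regions can be suspiciously uniform. The grids below are invented examples to illustrate the idea, not output from any real detector.

```python
# Hedged sketch of a pixel-statistics check: measure how much neighboring
# pixels differ. Unnaturally smooth regions can be one red flag among many.
# (Toy grayscale grids; real detectors combine many such features.)

def mean_neighbor_diff(pixels):
    """Average absolute difference between horizontally adjacent pixels."""
    diffs = [abs(row[i] - row[i + 1])
             for row in pixels for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

noisy_photo = [[100, 104, 99, 103],
               [101, 97, 102, 98],
               [99, 103, 100, 96]]

too_smooth = [[100, 100, 100, 100],
              [100, 100, 100, 100],
              [100, 100, 100, 100]]

print(mean_neighbor_diff(noisy_photo))  # nonzero: natural pixel variation
print(mean_neighbor_diff(too_smooth))   # zero: suspiciously uniform region
```

No single statistic is conclusive, which is why Shaffer's caveat matters: as generators learn to imitate these imperfections too, automated detection gets harder.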
