How to detect deepfakes and other AI-generated media

AI-generated images are becoming a growing concern

In partnership with ExpressVPN

We are at a point where creating fake media is easier than ever. Deepfakes, diffusion models, generative adversarial networks (GANs), and many other AI technologies have made it possible to manipulate photos and videos, or to create realistic-looking media of events that never happened.

While AI-generated art and media have many positive applications, they can also be used for nefarious purposes. With the ability to seamlessly manipulate and distort reality, bad actors can use AI to deceive the public and spread misinformation.

In this post, I’ll briefly explore the threats of deepfakes and provide some tips on detecting AI-generated media. The field is evolving quickly, so you’ll have to look out for new techniques and countermeasures.

AI-generated media

The pope wearing a puffer coat. Russian President Vladimir Putin kneeling to kiss the hand of Chinese President Xi Jinping. Former U.S. President Donald Trump trying to evade arrest. A video of Ukrainian President Volodymyr Zelenskyy calling for surrender to Russia. None of these events happened. But images and videos of them have circulated on social media.

These are images and videos created with AI systems. Today, there are several deep learning technologies that can create fake media:

Deepfakes: A deepfake uses two deep neural networks to learn the facial features of an actor and a target. It then maps the target’s face onto the actor’s head. With today’s algorithms, the output has become very convincing and realistic. According to ExpressVPN, the number of deepfakes on the internet is in the millions, with no sign of that number decreasing anytime soon.
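The classic deepfake pipeline can be pictured as an autoencoder with one shared encoder and a separate decoder per identity: at swap time, the target’s face is encoded and then decoded with the actor’s decoder. Here is a minimal numpy sketch of that architecture, with random, untrained weights and arbitrary layer sizes (purely illustrative, not a working face-swapper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """A random, untrained weight matrix and bias (illustration only)."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    """Apply a stack of tanh layers to a flattened face vector."""
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

# One shared encoder learns identity-agnostic facial structure;
# each identity gets its own decoder.
FACE, LATENT = 64 * 64, 128
encoder   = [dense(FACE, 512), dense(512, LATENT)]
decoder_a = [dense(LATENT, 512), dense(512, FACE)]   # reconstructs person A
decoder_b = [dense(LATENT, 512), dense(512, FACE)]   # reconstructs person B

def swap_face(face_b):
    """Encode B's face, decode with A's decoder: B's expression on A's face."""
    z = forward(face_b, encoder)
    return forward(z, decoder_a)

fake = swap_face(rng.normal(size=FACE))
print(fake.shape)  # (4096,)
```

In training, each decoder is optimized to reconstruct its own identity while the encoder is shared, which is what lets the swap work at inference time.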

Diffusion models: These are neural networks that have been trained on a large dataset of images and captions. By finding the patterns that connect text to images, these models can generate convincing images from text descriptions. DALL-E 2, Midjourney, and Stable Diffusion are three examples of popular diffusion models.
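The “diffusion” in the name refers to gradually noising training images and teaching the network to reverse the process. The forward (noising) step has a simple closed form; a toy numpy illustration of it follows (the schedule values are arbitrary assumptions, not any particular model’s):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule: beta_t grows over time, so the cumulative
# "signal kept" factor alpha_bar_t shrinks toward zero.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def noise_image(x0, t):
    """Closed-form forward diffusion: x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal((8, 8))   # stand-in for a training image
early, late = noise_image(x0, 10), noise_image(x0, 900)

# Early steps barely change the image; by t=900 it is almost pure noise.
print(alpha_bar[10] > 0.99, alpha_bar[900] < 0.01)  # prints: True True
```

The generative model is trained to predict the noise added at each step; sampling then runs the chain in reverse, starting from pure noise, optionally conditioned on a text embedding.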

Generative adversarial networks: GANs are deep learning systems that can create realistic images from scratch, such as faces and landscapes.

The potential harm of AI-generated media

A fake image generated with Midjourney purports to show former U.S. President Donald Trump resisting arrest

Deepfakes have been around for several years, but the field has made impressive advances recently. Combined with other generative AI technologies, such as voice synthesizers and large language models, AI-generated content can become a real threat.

While we’re still at a stage where we can detect AI-generated media, it is worth noting how they can affect public opinion:

Spread of misinformation: Images and videos spread on social media very fast. By the time fact-checkers or the target of the AI-generated media have time to respond, the damage might already be done. And because creators can pair deepfakes with realistic AI-generated voices, the results can be very convincing.

The Mandela effect: Many people remember Nelson Mandela dying in a South African prison in the 1980s. In reality, he was released from prison in 1990 and went on to become South Africa’s president. But even though the truth is well documented, the false memory has implanted itself in the minds of many people. This has become known as the “Mandela effect,” and many other examples have been documented. With deepfakes, such problems might arise more frequently.

Social engineering: Deepfakes and generative AI are making it easier to craft convincing emails, videos, websites, etc., and conduct phishing campaigns. Bad actors can use these models to create realistic-looking avatars, craft professional emails, and create voice messages that sound like real people. They can use this data to dupe their victims into revealing secrets, sending money to the attackers, or doing other harmful activities. Last year, I exposed a fake law firm that used generative AI to extort victims through DMCA notices.

Erosion of trust: The rise of deepfakes will make it harder for people to trust anything they see. It will become increasingly hard to determine what is true and what is false. On the one hand, bad actors will try to distort the truth with generative AI. On the other hand, they might deny real evidence and claim that genuine documents and videos were generated by AI.

How to detect AI-generated media

Researchers are constantly creating new techniques and tools to detect images and videos generated by deepfakes and other AI systems. But it is a cat-and-mouse game: as the technology gets better, old detection techniques become obsolete. For the moment, here are a few key methods that can help you detect deepfake videos, GAN faces, and images generated by diffusion models:

Face distortions: Deepfakes and GAN-generated faces usually have inconsistencies in lighting across the face. You can also find distortions around fine details and edges. Pay special attention to hairlines, eyebrows, facial hair, earrings, and especially the borders of eyeglasses.

GAN faces have clearly visible artifacts
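Beyond eyeballing, one automated heuristic from the research literature (an aside, not part of this article’s checklist) is spectral analysis: the upsampling layers in GANs and other generators often leave periodic, high-frequency artifacts visible in an image’s 2-D Fourier spectrum. A toy numpy sketch of the measurement, where the “low frequency” radius is an arbitrary assumption and real detectors learn a threshold from data:

```python
import numpy as np

def high_freq_energy(img):
    """Fraction of spectral energy outside a low-frequency center box."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    r = min(h, w) // 8                      # assumed "low frequency" radius
    cy, cx = h // 2, w // 2
    low = spec[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spec.sum()

rng = np.random.default_rng(0)
# A smoothed random field stands in for a natural photo (low-frequency heavy);
# a checkerboard stands in for a periodic upsampling artifact.
smooth = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)
checker = np.indices((64, 64)).sum(0) % 2
print(high_freq_energy(smooth) < high_freq_energy(checker))  # True
```

A learned classifier over such spectral features is one way detection tools flag generator fingerprints automatically.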

Eye and lip movements: In deepfake videos, pay special attention to the moving parts of the face. Is the shadowing consistent? Does the blinking look realistic? Do the mouth and teeth look natural when the subject speaks?
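The blinking cue can also be checked programmatically. A standard measure is the eye aspect ratio (EAR) over the six eye landmarks produced by face-landmark libraries such as dlib or MediaPipe: EAR drops sharply during a blink, and early deepfakes were known to blink too rarely. A sketch of the ratio itself, with hand-made landmark coordinates standing in for real detector output:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR over six landmarks p1..p6 (dlib ordering):
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Roughly ~0.3 for an open eye, near 0 when closed."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance, inner
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance, outer
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

# Hypothetical landmark positions for an open and a nearly closed eye:
open_eye   = [(0, 0), (10, 6), (20, 6), (30, 0), (20, -6), (10, -6)]
closed_eye = [(0, 0), (10, 1), (20, 1), (30, 0), (20, -1), (10, -1)]
print(round(eye_aspect_ratio(open_eye), 2),
      round(eye_aspect_ratio(closed_eye), 2))   # 0.4 0.07
```

Tracking EAR over a video and counting how often (and how smoothly) it dips is one way to judge whether blinking looks natural.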

Lighting and reflections: Generative models try to predict lighting in scenes. Naturally, not all their predictions are correct, so if you look closely, you’ll find occasional glitches in shadows and reflections. Those are telltale signs of AI-generated imagery.

Proportions: Images generated by diffusion models often get proportions wrong, such as the length of arms, feet, and forearms. These mistakes might not be evident at first, but a closer look will quickly betray an AI-generated image.

Detailed objects: Current generative models are especially bad at compositionality and detailed objects. Look at things such as hands, buildings, and objects with many parts. The background of an image will usually contain a few obvious mistakes.

The fake image of Putin kneeling before Xi Jinping contains telltale signs of AI generation (source: DW)

Provenance: Naturally, one of the most important ways to detect deepfakes and AI-generated images is to verify their provenance. Do a reverse image search on Google or Bing to see if the image appears anywhere other than social media. Check whether any reputable media outlet has reported on the event. If the image is more personal and relates to a friend or family member, check with them directly before believing your eyes.
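Metadata can complement a provenance check: photos straight from a camera or phone usually carry an EXIF block, while images saved out of generators and social-media pipelines often do not. (Absence proves nothing on its own, since ordinary editing and re-uploading also strip EXIF.) A stdlib-only sketch that scans a JPEG’s segment headers for the EXIF marker:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP1 segment tagged 'Exif'."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):       # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 8] == b"Exif":
            return True                              # APP1 with Exif payload
        if marker == 0xDA:                           # start-of-scan: headers over
            break
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + seg_len                             # skip marker + segment
    return False

# Hand-built minimal headers (not valid images, just segment layouts):
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00"
no_exif   = b"\xff\xd8\xff\xe0\x00\x04\x00\x00"
print(has_exif(with_exif), has_exif(no_exif))        # True False
```

For real files you would read the bytes from disk (`has_exif(open("photo.jpg", "rb").read())`) and treat the result as one weak signal alongside reverse image search and source checking.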

Welcome to the new reality. Be more diligent. Follow the developments in generative AI. Be ready.
