What to know about the rising threat of deepfake scams


In a recent high-profile scam, a finance employee at a multinational firm was tricked into paying 200 million Hong Kong dollars (~$25.6 million) to cybercriminals. According to Hong Kong police, the fraudsters used deepfakes to pose as the company’s chief financial officer and other staff members in a video call.

The case has brought attention to the rising threats of advanced artificial intelligence and generative models, which can be used to create fake but convincing images, videos, audio, and text.

While there seems to be more to the case than meets the eye, it is important to stay aware of the threats of AI scams and be prepared for what is to come.

The Hong Kong case

The details on the Hong Kong case are scant. According to news reports, the employee had grown suspicious after receiving a request that seemed to come from the company’s UK-based CFO.

But after the video call, his doubts faded: the other attendees looked and sounded just like colleagues he recognized, so he agreed to make the money transfer. The scam was only discovered when the employee later checked with the corporation’s head office.

In my opinion, the story is not very convincing without more details on the technologies and information the fraudsters used. There are plenty of advanced models that can map one person’s face onto another person’s body with high precision.

But they still leave evident visual artifacts that need to be fixed in post-production. Deepfakes are not yet advanced enough to deliver flawless results live, especially on subjects who do not have many images available online.

Second, if the scammed employee was in close contact with the CFO, he should have recognized the fraudster. People identify each other by more than face and voice: non-verbal expressions, tone, nuances of language use, and other small details that are hard to replicate (let alone in real time) unless done by a professional con artist or an insider with deep knowledge of the target.

Third, in most companies, very large transactions require multiple steps to be approved and carried out. The real CFO should probably have become aware of the process somewhere along the way and been able to stop it.

Without more information on the deepfake video call and the events that ensued, I would not panic about the Hong Kong case and extrapolate it to all situations and companies. 

However, there is no denying that deepfakes and generative AI scams are a real threat. And there have been several cases where more trivial schemes were used to trick victims, such as sending AI-generated audio and video files that were prepared in advance. 

Fortunately, the basic principles of cybersecurity can help you deal with such scams. Here are a few tips that can help you spot and prevent deepfake scams.

Rein in your instincts and think logically

Like classic scams such as phishing emails and watering hole attacks, deepfake scams bank on fear, uncertainty, and doubt (FUD). They push you into making quick decisions without thinking them over: an offer that is about to expire, an account that will soon be closed, a relative who is in danger.

When faced with such situations, stop and think for a moment. Think about the conditions that would be required for the threat to be real. For example, a friend once reached out to me and told me he had received an anonymous email that claimed to have obtained damning information from one of his devices and threatened to make it public if he didn’t pay a ransom in cryptocurrency.

It turned out that the email address in question was used only on his work computer. There was no way the scammer could have associated it with one of his devices that contained personal data. Moreover, when we reviewed the email together, he saw that it contained no information specific to the recipient. The sender could have sent the same message to anyone.

When it comes to AI scams that urge you to take action, you can use the same principles. First, think about whether the threat is logically possible. Second, look for signs that prove the authenticity of the media, such as character traits that are unique to the person or bits of information that prove the sender really knows you.

Use alternate channels to verify

If you’re asked to do something sensitive, always verify through an alternate channel. For example, if the sender is reaching out through their work email, verify the request through their personal email address, phone number, or a messaging app. A FaceTime or phone call will usually do. And don’t worry: deepfakes are not yet advanced enough to let just anyone pose as your family members in a live call on a whim.

For example, in the case where a father was scammed with an AI-generated audio file of his son, he could have reached out through another channel to get quick verification.

If you’re receiving a message from a coworker, there are other ways to verify the claim. For example, a few years ago, an employee from a company I worked with asked for a sensitive document. It seemed to come from the finance department and the email looked legitimate. But to make sure, I forwarded it to the manager I worked with and asked her to verify it. I also told her the reason I was making this verification. She thanked me and followed up with the related department. It turned out to be real, but the extra verification step didn’t hurt.

Check for provenance

Some scams are not targeted at specific individuals. They are meant to fool the public, such as the Tom Hanks ad for dental products and the Joe Biden robocall during the New Hampshire primary.

In such cases, the best defense is to check the provenance. Don’t trust anything attributed to a public figure unless you see it on their official channels, such as their social media accounts and website. Social media algorithms can easily be gamed with sensational videos that generate engagement and clicks. But if the video has not been shared on the real person’s account, don’t trust it, even if it looks real.

These tips have been my guiding principles for years, even before deepfakes and the rise of generative AI. Artificial intelligence is changing the face of reality and trust, and we need to adapt. But the basic habits of cybersecurity remain valid and helpful for the time being.
