Beware of AI scams in 2024


In partnership with ExpressVPN

Just as advances in artificial intelligence have given rise to tools that improve productivity and business, they are also making it easier for cybercriminals to carry out their schemes. While scams and online theft are nothing new, they are constantly evolving, and with the advent of advanced AI tools, especially generative AI, we are seeing a slew of new threats to watch out for.

As revealed by ExpressVPN in its comprehensive overview of the online scam landscape in 2024, AI plays a big role in how threats are taking shape in the coming year.

From the Biden robocall to the Taylor Swift deepfake, here is everything you need to know about AI scams in 2024.

Deepfakes

Deepfakes are fabricated images or videos of a person, designed to convince viewers that the victim has done something they never did, or simply to shame them. Deepfakes have been around since 2017, but impressive progress in generative AI, especially in text-to-image technology, has made them even more dangerous.

The most recent case that caused much concern was an explicit AI-generated image of Taylor Swift, which was distributed widely online. Social media companies moved to rein in the spread of the image, but not before it was viewed tens of millions of times.

In late 2023, there was news about AI-generated images of the war in Gaza, which were sometimes being used by news outlets covering the event. And AI-generated images of former U.S. President Donald Trump being arrested became a topic of discussion in March 2023.

While these cases made a lot of noise because of their high-profile subjects, many more never get attention. Previously, creating a deepfake required a large number of images of the victim.

However, more recent AI models can take a single shot of a subject and map it onto an image or a video. This means that many more people can become the target of deepfakes and AI-generated images created for malicious purposes.

Fake voice scams

In mid-January, ahead of the New Hampshire presidential primary, a robocall went out to several New Hampshire voters urging them not to vote. The call sounded like President Biden, but in reality, it was an AI-generated voice recording.

With advances in AI-powered voice synthesis, the threat of voice scams is growing. Scammers have used AI-generated voices to pose as relatives, bosses, and celebrities. For example, in one case, scammers tried to defraud a man of $9,000 worth of bitcoin using the AI-generated voice of his son.

In more advanced cases, AI-generated voice and video are combined to create more elaborate scams. In October 2023, a video circulated in which Tom Hanks appeared to promote a dental plan. It turned out to be a deepfake of the actor, who later warned his followers that he had “nothing to do with” the AI-generated video or the product it was promoting.

Fake websites

Thanks to AI-based design tools, it is becoming easier to create a website that looks professional. And with the help of large language models (LLMs) like ChatGPT, anyone can quickly put together web copy that reads as polished and convincing at first glance. Add a few AI-generated images, and you can even create a full roster of staff members for the website.

I experienced this kind of scam when a supposed copyright lawyer contacted me and claimed that I had used her client’s image without attribution. She warned me that I would face legal action if I did not add a link to her client’s website on the image’s description page.

Being sure that I had not used unlicensed images, I decided to take a closer look at the website of the supposed lawyer’s firm. Everything looked legit at first glance: decent design, good copy, an address, and a phone number. Then I went to the company’s staff page and found a list of lawyers who had graduated from prestigious universities. However, a closer look at the faces revealed the tell-tale signs of AI-generated images. I wrote about it on my website and received many emails and comments from other site owners who had received similar messages but had assumed they were authentic.

This was in 2022. Today, AI-generated images and text are even more convincing and more difficult to detect.

How to protect yourself against AI scams

As AI continues to progress, new attacks emerge and old defenses become obsolete. But some fundamentals of defense against scams remain constant.

First, scammers will always bank on extreme emotions such as fear, anxiety, excitement, and greed. If something you see, hear, or read sounds too good, bad, or scary to be true, be skeptical. If it seems to come from a close associate, verify it with them through a different channel. For example, if you receive a suspicious email from a friend, confirm it via a messaging app or a phone call.

Second, provenance will become increasingly important as AI-generated content becomes more convincing. If you see a post on social media or a news article with convincing images and videos, check the source. If it claims to be from a celebrity or a news agency, check their official social media and web channels to see if the media is authentic.

Finally, the most basic defense against AI scams is to stay informed about how the field progresses. Make sure you remain up-to-date with the latest news about generative AI, deepfakes, and AI-generated content.
