AI Video Call Scams Are on the Rise, and This Is How They Work

Generative AI is on the rise, for better or worse. Sadly, among the biggest winners are scammers and peddlers of misinformation. AI video generation is far from perfect, but it has become good enough to make impersonating another person possible.

How Do AI Video Call Scams Work?

The premise is simple: use deepfake technology to impersonate someone and then use that to obtain something from you, usually sensitive financial information.

These are often a subset of romance scams, designed to target and take advantage of people looking for love on dating websites. But they can also be used in other ways, like impersonating a well-known celebrity or political figure, or occasionally even someone you actually know (like your boss at work). In the latter case, scammers often use a spoofed phone number to sell the illusion.

What Is a Deepfake?

“Deepfake” is shorthand for an AI-generated image or video designed to mimic a real person, usually for the purposes of deception. Deepfake technology has plenty of non-nefarious uses (mostly memes and entertainment), but it is now also the medium of choice for these video call scams.

What Is the Scammer’s Goal?

In a nutshell? Information. This can take several forms, but at the end of the day, most AI scams want the same thing, even if the consequences vary.

At the low end of the spectrum, the scammer may try to trick you into handing over sensitive information about, say, the company you work for. You might have client details or contracts that could be sold to a rival business. This is probably the best-case scenario for you personally, but it could still devastate your company.

In the worst cases, they could coax you into revealing your bank information or Social Security number, often under the guise of a lover needing financial aid or an interviewer for a promising new job you may or may not have applied to.

In either case, once the scammer has the information, the damage is essentially done; what remains is a matter of how quickly you spot the deception and limit the fallout.

How to Spot an AI Video Call Scam

Currently, deepfakes used in most AI video call scams are inconsistent. They often have visual glitches and oddities that make the video or voice appear fake.

Look for things like facial expressions that don’t quite match, the background of the video shifting oddly, the voice sounding somewhat flat, and so on. Current technologies often have difficulties tracking the target if they stand up or raise their hands above their head (being mostly trained on headshots and “shoulder-up” videos).

However, AI technology is improving rapidly. While it is good to recognize the flaws in current AI output, relying on that skill alone is dangerous in the long run.

These deepfakes are already many times better than last year’s, and we are rapidly approaching the point where they may become almost indistinguishable from real video or audio. And while AI deepfake detection tools exist, you can’t rely on them, because the technology moves too fast.

How to Protect Against Deepfake Scams

Rather than trying to spot AI-generated material, take a more proactive approach to security. Most information-security practices that worked in the past still work now.

Verify the Caller’s Identity

Rather than trying to clock somebody based on their face or voice, use features that are harder to fake.

Make sure the call is coming from the correct phone number or account name. For apps like Teams or Zoom, check the email that sent the room code to see if it matches the caller’s credentials.
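For the more technically inclined, that sender check can be sketched in a few lines of Python. The function name, addresses, and domain below are hypothetical examples; in practice you would compare against whatever domain your contact or organization actually uses:

```python
# Minimal sketch: confirm a meeting invite's sender address uses the
# domain you expect. All names and addresses here are made-up examples.

def sender_matches_domain(sender: str, expected_domain: str) -> bool:
    """Return True if the sender's email domain exactly matches the expected one."""
    # Split on the last "@" so tricks like "boss@real.com@evil.com" don't pass.
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain == expected_domain.lower()

print(sender_matches_domain("ceo@company.com", "company.com"))     # True
print(sender_matches_domain("ceo@company-hq.com", "company.com"))  # False
```

Note that a look-alike domain (“company-hq.com” instead of “company.com”) fails the check, which is exactly the kind of detail a quick glance at an inbox can miss.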

If you’re still apprehensive, ask them to verify their identity by other means. For example, try texting them, “Are you on this Zoom call with me right now?” or something similar.

You could even try to engage in small talk. Scammers often get flustered when forced to go “off-script,” and if they’re impersonating someone you actually know, they’ll likely have trouble answering questions like “How’s Jimmy doing? I haven’t seen him since we went on that fishing trip a while ago,” especially if you mix false details into the question to trip them up.

Finally, if it’s someone you talk to a lot, consider falling back on childhood “stranger danger” protection by setting up a password; if both of you know to say “Marzipan” at the start of the conversation or something, that’s difficult to fake.

Don’t Give Them Sensitive Information

Of course, the best protection is keeping all important information close to the vest. Nobody should ask for your bank information or Social Security number out loud over a call (and you should never share it online). And if they rush or pressure you into doing so, that’s all the more reason to shut the call down.

That information should only be provided using official documentation or a method that allows you more time to verify the source’s legitimacy.
