The Rise of Deepfake Scams

Written by
Harriet O'Connor
Jun 7, 2024

Is what we see online real? With the rise of AI deepfakes, it’s becoming more and more difficult to answer this question.

Deepfake technology first emerged in 2017 when an anonymous user on Reddit shared an algorithm that used existing AI techniques to create realistic fake videos. Fast forward to 2024, and the internet is now rife with deepfake tools that are free to use and available to everyone.

From 2022 to 2023, there was a tenfold increase in the number of deepfakes detected globally across all industries, with a 1740% surge in North America and a 1530% increase in the Asia-Pacific region. 

As deepfake technology becomes more sophisticated and accessible, it is increasingly being exploited by fraudsters for scams. As a result, the threat to online platforms is growing, and it's now crucial to implement robust measures to defend against these risks. 

Here's why this matters and how you can protect your platform…

What is a deepfake scam?

Imagine receiving a video call from your boss asking for sensitive information. It looks like them. It sounds like them. But it’s not them. This is a deepfake scam. Scammers are using AI to create convincing fake videos or audio, mimicking genuine people to trick victims into handing over money or information. And these fakes are becoming increasingly difficult to spot.

There are other types of manipulated content, such as cheap fakes or shallowfakes, which use simpler editing techniques. These are typically lower in quality and easier to detect. And face-swapping technology, which has been around for over a decade, allows users to replace one person’s face with another’s, creating the illusion of someone saying or doing something they never did.


How are scammers using deepfakes?

While many people use deepfake technology for positive purposes, such as entertainment or education, scammers can exploit this technology for various malicious activities, including the following: 

Family impersonation scams

Scammers are using deepfakes to impersonate their targets' loved ones or other trusted individuals. For example, the "grandparent" scam has evolved with AI-generated voices: fraudsters call elderly individuals, mimicking the voice of a distressed relative in an urgent scenario such as an arrest, robbery, or illness. They beg for money and plead with the victim not to tell anyone, adding believable personal details scraped from social media to make the scam seem more convincing. 

This method exploits the emotional bond and trust between family members, making it a powerful tool for scammers to request money, sensitive information, or even persuade victims to perform certain actions.

Business impersonation scams 

Deepfakes are increasingly employed in business scams to forge communications from senior executives. Scammers use AI-generated videos or audio to convincingly impersonate CEOs, CFOs, or other high-ranking officials to instruct employees to transfer funds, divulge confidential information, or authorize sensitive transactions. 

One of the most extreme examples occurred recently, when a finance worker at a multinational company in Hong Kong was tricked into paying out $25 million after fraudsters staged a video call featuring deepfake recreations of the company's Chief Financial Officer and other colleagues. These scams exploit the hierarchical trust within organizations, making employees believe they are following legitimate orders from their managers.

Romance scams

Fraudsters are using deepfake images to create convincing and appealing dating profiles, often portraying attractive and charismatic individuals. AI can generate unique images that bypass reverse image searches, making it much harder for vigilant users to detect fraudulent activity. Deepfake video calls can even happen in real time.
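
To see why reverse image searches struggle here, consider how hash-based duplicate detection works. The sketch below is a simplified illustration, assuming the Pillow and imagehash Python libraries are available; the file names are hypothetical:

  # Simplified sketch: perceptual-hash matching, the core technique behind
  # many reverse image search tools. Assumes Pillow and imagehash are
  # installed; the file names are hypothetical.
  from PIL import Image
  import imagehash

  # Perceptual hashes of a known scam photo and a new profile photo.
  known_scam = imagehash.phash(Image.open("known_scam_photo.jpg"))
  candidate = imagehash.phash(Image.open("new_profile_photo.jpg"))

  # Subtracting two hashes gives their Hamming distance: small for
  # near-duplicates (crops, filters, re-compressions), large otherwise.
  distance = known_scam - candidate
  print("possible reuse" if distance <= 10 else "no match found")

A reused stock photo lands within a few bits of a database entry, but a freshly AI-generated face shares no visual fingerprint with any existing image, so it passes this kind of check every time.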

Once they gain the trust of their victims, these scammers can extract money via a range of methods, such as fake investment schemes or pretending they need financial assistance. By manipulating emotions and exploiting trust, these deepfake-powered romance scams can be highly effective and completely devastating. Find out more about the different types of romance scams. 

Sextortion

The FBI has issued warnings about an increase in sextortion cases involving deepfakes, where scammers blackmail social media or dating app users by threatening to release manipulated sexual content. AI is used to transform innocent photos into realistic sexual images, enabling fraudsters to bypass the need for victims to share explicit content directly. This puts anyone who shares photos online at risk. The psychological impact of such threats can be severe, leading to significant mental health issues and, in some devastating cases, suicide. Tragically, suicides linked to sextortion have risen 1,800% since 2021.

Fake news and disinformation

Deepfakes are being used to create realistic but false news stories or statements from public figures. These fake videos and audio clips can spread misinformation, manipulate public opinion, and cause widespread panic or confusion.

Examples include deepfake videos of political figures making inflammatory statements that they never actually made, which have caused significant social and political turmoil. And there have been warnings of deepfake videos of well-known CEOs appearing to make misleading financial statements, impacting stock prices and investor decisions.

What are the deepfake laws?

Deepfake scams are a growing and serious threat. As the issue worsens, governments are starting to take action through various regulations:

US deepfake laws 🇺🇸

Currently, there are no federal laws in the US that prohibit the creation or sharing of deepfake images. However, there is a growing push for change. In January 2024, representatives proposed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act, which aims to make it illegal to create a ‘digital depiction’ of any person without permission. Other proposed legislation includes the Senate’s NO FAKES Act, protecting performers' likenesses, and the DEFIANCE Act, allowing lawsuits over faked pornographic images. Additionally, several states have implemented their own deepfake laws, but the specifics vary widely.

UK deepfake laws 🇬🇧

The UK Online Safety Act, passed in 2023, makes it illegal to share explicit images or videos that have been digitally manipulated if they intentionally or recklessly cause distress. However, it does not prevent the creation or sharing of other AI-generated media without consent unless harm can be proven.

EU deepfake laws 🇪🇺

In the EU, deepfakes will be regulated by the AI Act, the world's first comprehensive AI law. The proposed AI Act mandates transparency obligations for deepfake creators but does not outright ban their use. An agreement on the AI Act was reached in December 2023, with finalization expected in 2024. Additionally, the General Data Protection Regulation (GDPR) provides protections against the misuse of personal data, and the Digital Services Act (DSA) requires platforms to swiftly remove illegal content, including deepfakes.

How to spot a deepfake

There are some telltale signs that can help you spot a deepfake manually, such as:

  • Unnatural facial movements
  • Inconsistent lip-syncing
  • Irregular eye blinking
  • Mismatched lighting and shadows
  • Subtle glitches or distortions
  • Robotic or unnatural speech patterns

However, these manual checks are ultimately unreliable and impossible to scale; deepfake technology is now so advanced that manipulations often pass unnoticed by the human eye. To effectively combat deepfake scams, online platforms need advanced AI detection methods that tackle the root cause of the issue: the fake accounts behind them.
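
To see what one of these manual checks looks like in practice, here's a minimal sketch of blink-rate analysis using the eye aspect ratio (EAR) heuristic. It assumes OpenCV, dlib, SciPy, and dlib's standard 68-point facial landmark model are available; the video path is hypothetical:

  # Simplified sketch: counting blinks via the eye aspect ratio (EAR).
  # Assumes OpenCV, dlib and SciPy are installed, plus dlib's standard
  # 68-point landmark model file; the video path is hypothetical.
  import cv2
  import dlib
  from scipy.spatial import distance

  detector = dlib.get_frontal_face_detector()
  predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

  def eye_aspect_ratio(eye):
      # Ratio of vertical to horizontal eye opening; drops sharply on a blink.
      a = distance.euclidean(eye[1], eye[5])
      b = distance.euclidean(eye[2], eye[4])
      c = distance.euclidean(eye[0], eye[3])
      return (a + b) / (2.0 * c)

  blinks, closed = 0, False
  cap = cv2.VideoCapture("suspect_video.mp4")
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      for face in detector(gray):
          shape = predictor(gray, face)
          # Points 36-41 outline one eye in the 68-point model.
          eye = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
          if eye_aspect_ratio(eye) < 0.2:
              if not closed:
                  blinks, closed = blinks + 1, True
          else:
              closed = False
  cap.release()
  # Humans blink roughly 15-20 times a minute; far fewer over a long
  # clip is a weak deepfake signal.
  print(f"blinks detected: {blinks}")

Even when a heuristic like this runs perfectly, a modern face-swap model that reproduces natural blinking passes it without trouble, which is why detection efforts need to shift to the accounts behind the content.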

Tackling deepfake scams starts with fake accounts

Online scammers almost always hide behind fake accounts. And with deepfakes making it easier than ever to create convincing fake identities, robust detection technology is more essential than ever. 

If you're concerned about deepfake scams on your platform, your first step should be deploying Pasabi's AI fake account detection technology.

Our Trust & Safety Platform detects non-genuine behaviors through:

  • AI Detection: Our AI tools identify suspicious patterns at scale.
  • Behavioral Analytics: We analyze user behavior to spot inconsistencies.
  • Cluster Technology: We uncover connections between scam networks to catch the most prolific offenders.
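
To make the cluster idea concrete, here's a simplified, hypothetical sketch (not Pasabi's actual implementation) using the networkx Python library to group accounts that share signals such as a device fingerprint or payout details:

  # Simplified, hypothetical sketch: grouping accounts by shared signals.
  # Assumes networkx is installed; the data and field names are invented.
  import networkx as nx

  accounts = [
      {"id": "acct_1", "device": "dev_A", "payout": "iban_X"},
      {"id": "acct_2", "device": "dev_A", "payout": "iban_Y"},
      {"id": "acct_3", "device": "dev_B", "payout": "iban_Y"},
      {"id": "acct_4", "device": "dev_C", "payout": "iban_Z"},
  ]

  g = nx.Graph()
  for acct in accounts:
      # Shared attributes become nodes that join related accounts.
      g.add_edge(acct["id"], "device:" + acct["device"])
      g.add_edge(acct["id"], "payout:" + acct["payout"])

  # Connected components group accounts linked by any chain of shared
  # signals; larger groups are candidate scam networks.
  for component in nx.connected_components(g):
      members = sorted(n for n in component if n.startswith("acct_"))
      if len(members) > 1:
          print("possible network:", members)

Here acct_1 and acct_2 share a device, while acct_2 and acct_3 share payout details, so all three surface as one candidate network even though no single signal links them all.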

Deepfake scams are a growing threat. But with the right tools and knowledge, you can protect your users and reputation. Contact us today to learn how we can help you safeguard your platform.
