Is what we see online real? With the rise of AI deepfakes, it’s becoming more and more difficult to answer this question.
Deepfake technology first emerged in 2017 when an anonymous user on Reddit shared an algorithm that used existing AI techniques to create realistic fake videos. Fast forward to 2024, and the internet is now rife with deepfake tools that are free to use and available to everyone.
From 2022 to 2023, there was a tenfold increase in the number of deepfakes detected globally across all industries, with a 1740% surge in North America and a 1530% increase in the Asia-Pacific region.
As deepfake technology becomes more sophisticated and accessible, fraudsters are increasingly exploiting it for scams. As a result, the threat to online platforms is growing, and it's now crucial to implement robust measures to defend against these risks.
Here's why this matters and how you can protect your platform…
Imagine receiving a video call from your boss asking for sensitive information. It looks like them. It sounds like them. But it’s not them. This is a deepfake scam. Scammers are using AI to create convincing fake videos or audio, mimicking genuine people to trick victims into handing over money or information. And these fakes are becoming increasingly difficult to spot.
There are other types of manipulated content, such as cheap fakes or shallowfakes, which use simpler editing techniques and are typically lower in quality and easier to detect. There's also face-swapping technology, which has been around for over a decade and allows users to replace one person's face with another's, creating the illusion of someone saying or doing something they never did.
Think you can spot a deepfake? Try this fun quiz to guess which face is real:
If you guessed that the image on the right is real, you're correct. It’s tricky, isn’t it?!
While many people use deepfake technology for positive purposes, such as entertainment or education, scammers can exploit this technology for various malicious activities, including the following:
Scammers are using deepfakes to impersonate targets’ loved ones or other trusted individuals. For example, the “grandparent” scam has evolved with AI-generated voices: fraudsters call elderly individuals, mimicking the voice of a distressed relative in an urgent scenario such as an arrest, robbery, or illness. They beg for money, plead with the victim not to tell anyone, and add believable personal details scraped from social media to make the scam more convincing.
This method exploits the emotional bond and trust between family members, making it a powerful way for scammers to extract money or sensitive information, or even to persuade victims to take certain actions.
Deepfakes are increasingly employed in business scams to forge communications from senior executives. Scammers use AI-generated video or audio to convincingly impersonate CEOs, CFOs, or other high-ranking officials, instructing employees to transfer funds, divulge confidential information, or authorize sensitive transactions.
One of the most extreme examples occurred recently, when a finance worker at a multinational company in Hong Kong was tricked into paying out $25 million after fraudsters staged a video call featuring deepfake versions of the company’s Chief Financial Officer and other colleagues. These scams exploit the hierarchical trust within organizations, making employees believe they are following legitimate orders from their managers.
Fraudsters are using deepfake images to create convincing and appealing dating profiles, often portraying attractive, charismatic individuals. Because AI can generate entirely unique images, these photos can bypass reverse image searches, making it much harder for vigilant users to detect fraudulent activity. Deepfake video calls can even happen in real time.
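Why do reverse image searches fail here? They work by near-duplicate matching: a compact “perceptual hash” of the suspect photo is compared against hashes of photos already indexed on the web. A stolen profile photo matches its source; a freshly generated AI face matches nothing. Here’s a minimal sketch of that matching logic using the open-source Python `imagehash` library (the file names are placeholders for illustration):

```python
# Minimal sketch of near-duplicate matching, the technique behind reverse image search.
# Assumes the open-source `imagehash` and `Pillow` libraries (pip install imagehash pillow).
# File names are placeholders for illustration.
from PIL import Image
import imagehash

def is_near_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance means the images
    are very likely crops or recompressions of the same source photo."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # subtracting ImageHash objects gives the Hamming distance

# A stolen profile photo is a near-duplicate of its source and gets flagged:
#   is_near_duplicate("stolen_profile.jpg", "indexed_instagram_photo.jpg")  # -> True
# A freshly generated AI face has no source image anywhere, so every comparison fails:
#   is_near_duplicate("ai_generated_face.jpg", "any_indexed_photo.jpg")     # -> False
```

In other words, the check isn’t broken; there’s simply nothing for it to match against.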
Once they gain their victims’ trust, these scammers extract money through a range of methods, such as fake investment schemes or feigned financial emergencies. By manipulating emotions and exploiting trust, deepfake-powered romance scams can be highly effective and utterly devastating. Find out more about the different types of romance scam.
The FBI has issued warnings about an increase in sextortion cases involving deepfakes, where scammers blackmail social media or dating app users by threatening to release manipulated sexual content. AI is used to transform innocent photos into realistic sexual images, meaning fraudsters no longer need victims to share explicit content directly. This puts anyone who shares photos online at risk. The psychological impact of such threats can be severe, leading to serious mental health issues; tragically, suicides linked to sextortion have risen by 1800% since 2021.
Deepfakes are being used to create realistic but false news stories or statements from public figures. These fake videos and audio clips can spread misinformation, manipulate public opinion, and cause widespread panic or confusion.
Examples include deepfake videos of political figures making inflammatory statements they never actually made, which have caused significant social and political turmoil. There have also been warnings about deepfake videos of well-known CEOs appearing to make misleading financial statements, which could impact stock prices and investor decisions.
Deepfake scams are a growing and serious threat. As the issue worsens, governments are starting to take action through various regulations:
Currently, there are no federal laws in the US that prohibit the creation or sharing of deepfake images. However, there is a growing push for change. In January 2024, representatives proposed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act, which aims to make it illegal to create a ‘digital depiction’ of any person without permission. Other proposed legislation includes the Senate’s NO FAKES Act, protecting performers' likenesses, and the DEFIANCE Act, allowing lawsuits over faked pornographic images. Additionally, several states have implemented their own deepfake laws, but the specifics vary widely.
The UK Online Safety Act, passed in 2023, makes it illegal to share explicit images or videos that have been digitally manipulated if they intentionally or recklessly cause distress. However, it does not prevent the creation or sharing of other AI-generated media without consent unless harm can be proven.
In the EU, deepfakes will be regulated by the AI Act, the world's first comprehensive AI law. The Act mandates transparency obligations for deepfake creators but does not outright ban their use. A political agreement on the AI Act was reached in December 2023, with finalization expected in 2024. Additionally, the General Data Protection Regulation (GDPR) provides protections against the misuse of personal data, and the Digital Services Act (DSA) requires platforms to swiftly remove illegal content, including deepfakes.
There are some signs that can help you spot a deepfake manually, such as unnatural eye movement or blinking, lip movements that don’t sync with the audio, inconsistent lighting and shadows, and blurring or distortion around the edges of the face.
However, these manual checks are ultimately unreliable and impossible to scale. Deepfake technology is becoming so advanced that spotting manipulations by eye is nearly impossible in many cases. To effectively combat deepfake scams, online platforms need to employ advanced AI detection methods that tackle the root cause of the issue: the fake accounts behind them.
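To make “advanced AI detection” a little more concrete, here’s a simplified sketch of what automated screening of uploaded photos might look like. The model file `deepfake_detector.pt` and the flagging threshold are hypothetical placeholders, and production systems combine several models and behavioral signals rather than a single image classifier:

```python
# Illustrative sketch of automated deepfake screening at upload time.
# The model file and threshold are hypothetical placeholders for this example.
import torch
from PIL import Image
from torchvision import transforms

# Standard ImageNet-style preprocessing for a CNN classifier.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical pretrained binary classifier (fake-vs-genuine weights assumed to exist).
model = torch.jit.load("deepfake_detector.pt")
model.eval()

def screen_image(path: str, threshold: float = 0.5) -> bool:
    """Return True if the image should be queued for review as a likely deepfake."""
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_fake = torch.sigmoid(model(batch)).item()  # single-logit output assumed
    return prob_fake >= threshold
```

Automated checks like this can run on every upload, which is exactly what manual inspection can't do.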
Online scammers almost always hide behind fake accounts. And with deepfakes making it easier than ever to create convincing fake identities, robust detection technology is now essential.
If you're concerned about deepfake scams on your platform, your first step should be deploying Pasabi’s AI fake account detection technology.
Our Trust & Safety Platform detects the non-genuine behaviors that reveal fake accounts at scale.
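As a simplified illustration of the behavioral approach (the signals, weights, and threshold below are invented for this example and are not our production scoring logic), a platform might combine account-level red flags into a single risk score:

```python
# Illustrative only: a toy behavioral risk score for accounts.
# Signals, weights, and threshold are invented for this example.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_age_days: int
    messages_per_day: float
    profile_photo_flagged: bool       # e.g., by automated image screening
    asked_to_move_off_platform: bool  # a classic romance-scam move

def risk_score(a: AccountActivity) -> float:
    """Sum weighted red flags; higher scores mean the account looks less genuine."""
    score = 0.0
    if a.account_age_days < 7:
        score += 0.3  # brand-new accounts carry more risk
    if a.messages_per_day > 100:
        score += 0.3  # inhuman messaging volume suggests automation
    if a.profile_photo_flagged:
        score += 0.2
    if a.asked_to_move_off_platform:
        score += 0.2
    return score

# Accounts scoring above, say, 0.5 could be rate-limited or queued for human review.
```

Real-world systems layer many more signals on top, but the principle is the same: the behavior gives the fake account away even when the face doesn’t.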
Deepfake scams are a growing threat. But with the right tools and knowledge, you can protect your users and reputation. Contact us today to learn how we can help you safeguard your platform.