Gen AI: Faking It Has Never Been So Easy

Written by
Jen McKeeman
Sep 3, 2023

It’s difficult not to be impressed by the increasing sophistication of generative AI tools. From generating code, to helping creatives through writer’s block, to showing potential to detect signs of breast cancer missed in screenings, the full capabilities are as yet unknown, and the prospect is exciting!

However, what is certain is that while many people are looking to explore the use of AI for good, there are also those looking to exploit it for harm. To delve into this idea a bit further, I imagined what it would be like if I were an unscrupulous actor looking to create a successful business for myself.

A business at the touch of a button

So, I decided to create a fictitious childcare service and asked ChatGPT for help along the way:

[Screenshot: ChatGPT conversation brainstorming the fictitious childcare service]

I decided on ‘Tiny Tots Daycare’ and figured I would need a website to advertise my childcare services to parents. ChatGPT quickly delivered some copy:

[Screenshot: ChatGPT-generated website copy for ‘Tiny Tots Daycare’]

As a parent, I read through the generated suggestion and thought it checked all the boxes quite nicely: a nurturing environment, experienced and caring staff, and care tailored to my child’s needs.

Like any business, imagined or otherwise, mine needed a logo. So, in a few minutes, Midjourney kindly provided a selection to choose from. I went with this:

[Image: Midjourney-generated logo for ‘Tiny Tots Daycare’]

I thought it looked cute and friendly, and created the right impression for my ‘nurturing’ business.

Positive reviews in no time

Next, I started thinking about what else I would need to build my fake business, and knew reviews would be crucial. And not just any reviews: they needed to be glowing, 5-star reviews. Here are just a couple, generated in seconds with minimal direction:

[Screenshot: AI-generated 5-star reviews for ‘Tiny Tots Daycare’]

Reading through my fake reviews, I was impressed and appalled in equal measure: impressed by how credible they sounded, and appalled at the consequences for parents potentially basing their childcare decisions on such invented reviews.

Fake reviews might seem relatively harmless if, for example, you’re trying to decide on a new dining table. The worst outcome is that it’s not as advertised and you have the inconvenience of returning it (which, given the size of a dining table, isn’t actually an easy task!). The outcome is far more serious, however, if fake reviews have influenced your decision to leave your kids with a shady sitter, such as my invented Tiny Tots Daycare enterprise.

And to be honest, there are many other situations where fake reviews could have equally detrimental effects. Take choosing a care facility for elderly parents: entrusting loved ones to someone’s care requires research and recommendations from genuine clients or residents, and fake reviews mislead and complicate an already emotional and difficult decision-making process.

Equally, take reviews for medical services: you’d want to know that your choice of ‘top’ doctor for your knee surgery was based on authentic reviews from genuine patients.

Following a negative experience with a doctor chosen on the strength of fake reviews, former US federal investigator Kay Dean has been investigating online medical review fraud and has found scores of Facebook groups dedicated to buying and selling fake reviews: in effect, review brokers and sellers. Consumers, although aware that fake reviews exist, are likely to be unaware of the scale of the problem and its true harm.

Scaling made simple

The challenge generative AI poses is that the same capabilities delivering productivity gains for businesses also create opportunities for fraudsters to scam at scale and speed. In the wrong hands, these tools can do untold harm.

So, where bad actors were once running a handful of scams simultaneously, they can now quickly scale their reach much further. Romance scammers, for example, can abuse AI tools to create fake profile pictures and generate convincing, misleading messages, running multiple fake accounts on dating apps and exploiting many victims at once. Add AI voice generators to the mix, and they can fake their voice, and even video, too. Knowing what’s real and what’s not is becoming increasingly difficult. And this is just the beginning for this nascent technology.

What does this mean for identifying inauthentic content and protecting the user experience?

The upshot is that users need more protection from, and transparency around, potentially fabricated content and fake users. Businesses need to employ sophisticated fraud detection technology to identify nefarious activity, fake content and the bad actors behind it.

A recent UCLA study concluded that looking at the behaviors of reviewers, rather than the content of their reviews, yields better detection results. This approach has proven successful in Pasabi’s experience across a variety of fraud threats: counterfeits, scams, fake accounts and fake reviews.

Irrespective of their criminal activity, fraudsters follow detectable patterns and leave behind digital fingerprints and clues. A combination of behavioral analytics, machine learning, cluster technology and our unique scoring system enables us to find them. Our technology analyzes suspicious behavioral signals and finds connections in your data to identify the most prolific offenders, providing evidence for you to take enforcement action.
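To make that concrete, here is a minimal, purely illustrative sketch (in Python) of what behavior-based detection can look like. To be clear, this is not Pasabi’s system: the signals (shared device fingerprints, review bursts, account age), weights and thresholds are all invented for demonstration.

from collections import defaultdict

# Purely illustrative, hypothetical example -- not Pasabi's system.
# Toy reviewer records: (account_id, device_fingerprint,
# reviews_posted_in_last_24h, account_age_in_days)
accounts = [
    ("acct_01", "dev_A", 14, 2),
    ("acct_02", "dev_A", 11, 1),
    ("acct_03", "dev_B", 1, 400),
    ("acct_04", "dev_C", 9, 3),
    ("acct_05", "dev_C", 12, 2),
]

def behavior_score(reviews_24h, age_days):
    # Toy scoring: a burst of reviews from a brand-new account is suspicious.
    burst = min(reviews_24h / 10.0, 1.0)      # unusually high posting rate
    newness = 1.0 if age_days < 7 else 0.0    # freshly created account
    return 0.6 * burst + 0.4 * newness

# "Cluster" accounts by a shared connection -- here, a device fingerprint.
clusters = defaultdict(list)
for acct_id, device, reviews_24h, age_days in accounts:
    clusters[device].append((acct_id, behavior_score(reviews_24h, age_days)))

# Flag clusters where several accounts score high together: coordinated
# behavior is a far stronger signal than any single account in isolation.
for device, members in clusters.items():
    suspicious = [(acct, round(score, 2)) for acct, score in members if score > 0.5]
    if len(suspicious) >= 2:
        print(f"{device}: possible coordinated accounts -> {suspicious}")

The point of this toy example is the shift in perspective the UCLA study describes: score accounts on how they behave, and flag groups that behave suspiciously together, rather than judging each review’s text in isolation.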

This allows you to give your genuine users a safer, more trusted experience and creates a more level playing field for honest businesses.

While governments, regulators and tech leaders debate and decide what guardrails generative AI tools should have, fraudsters are already using them. We need to work smarter to stop them.

If you'd like to learn more about our approach, check out our recent paper: Detecting Fake Reviews through Behavioral Analytics & Network Science.
