In the last quarter of 2023, TikTok removed 181 million fake accounts from its platform. And from October 2017 to 2023, Facebook deleted 27.67 billion fake accounts. That’s around 3.5 times the total population of planet Earth!
But what counts as a fake account? Surely if an account exists, it is, in fact, real?
A fake account is an account where the identity on the profile doesn't match the identity of the person (or bot) behind it. For example, someone who sets up an account pretending to be a celebrity.
As many as 1 in 3 Americans have at least one fake account in addition to their real one. And while not all fake accounts are created for malicious purposes, many unfortunately are.
Individuals and criminal organizations hide behind the anonymity of fake accounts to scam innocent people, spread hate, or share misinformation. Their actions not only harm users, they also harm the reputation of the platform.
Fake accounts—whether malicious or not—can make it difficult for teams to gain a true understanding of their user base. When a platform has a fake accounts problem, their user data is not a true reflection of the people who actually use their product, and so, decision-making becomes a guessing game. By removing these accounts, platforms ensure that their data reflects genuine user behavior, enabling better strategic planning and resource allocation.
Platforms known for high levels of fake account activity may lose users to competitors with more stringent account verification and moderation practices. Proactively removing fake accounts helps retain users and maintain a competitive edge in the market.
Many regions have strict regulations regarding user data and online fraud. Platforms that fail to address fake accounts may face legal repercussions, fines, or sanctions. Ensuring a clean user base helps platforms remain compliant with relevant laws and regulations.
Real users are more likely to engage with content and participate in transactions than fake accounts. By focusing on a genuine user base, platforms can boost engagement levels and, consequently, revenue streams.
A platform free of fake accounts fosters a healthier, more active community. Genuine interactions and organic growth become more prevalent, creating a positive feedback loop that attracts more authentic users.
Fake accounts can overwhelm moderation efforts, making it harder to manage and enforce community guidelines. Removing these accounts allows moderation teams to focus more effectively on genuine users and content, ensuring a safer and more respectful environment.
ID verification certainly helps to prevent some fake accounts, but it doesn’t catch them all. Scammers can set up accounts with fake IDs, or exploit weaknesses in biometric systems by duplicating or replacing biometric data, using sophisticated AI-generated deepfakes to trick facial recognition technology.
Today, platforms need continual monitoring to weed out fake account abuse before it causes real harm. Pasabi’s Trust & Safety Platform does exactly that. It applies machine learning and AI to analyze user behavior across the signals described above (and more). It checks its repository of known bad actors to see whether a user has already been flagged elsewhere. It then uses a unique scoring system to flag accounts with suspicious activity, so your team can investigate further and make informed decisions to protect your users.
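To make the general idea concrete, here is a minimal sketch of what a behavior-plus-reputation scoring pipeline can look like. Everything in it — the signal names, weights, threshold, and the known-bad-actor set — is invented for illustration; Pasabi’s actual models, data, and scoring system are proprietary and work differently.

```python
# Illustrative sketch of a fake-account risk-scoring pipeline.
# All signals, weights, and thresholds below are hypothetical examples,
# not Pasabi's actual (proprietary) implementation.

from dataclasses import dataclass


@dataclass
class Account:
    user_id: str
    account_age_days: int
    posts_per_day: float
    follower_following_ratio: float
    profile_photo_is_stock: bool


# Stand-in for a repository of identifiers already flagged elsewhere.
KNOWN_BAD_ACTORS: set[str] = {"user_4821", "user_9377"}


def behavior_score(acct: Account) -> float:
    """Combine simple behavioral signals into a 0-1 suspicion score."""
    score = 0.0
    if acct.account_age_days < 7:               # brand-new accounts are riskier
        score += 0.3
    if acct.posts_per_day > 50:                 # inhuman posting rates
        score += 0.3
    if acct.follower_following_ratio < 0.01:    # mass-following, few followers
        score += 0.2
    if acct.profile_photo_is_stock:             # reused or stock profile imagery
        score += 0.2
    return min(score, 1.0)


def risk_score(acct: Account) -> float:
    """Blend behavioral signals with a known-bad-actor lookup."""
    score = behavior_score(acct)
    if acct.user_id in KNOWN_BAD_ACTORS:        # prior flags raise the score
        score = min(score + 0.5, 1.0)
    return score


FLAG_THRESHOLD = 0.6  # accounts above this go to a human reviewer


if __name__ == "__main__":
    acct = Account(
        user_id="user_4821",
        account_age_days=2,
        posts_per_day=120.0,
        follower_following_ratio=0.004,
        profile_photo_is_stock=True,
    )
    score = risk_score(acct)
    if score >= FLAG_THRESHOLD:
        print(f"{acct.user_id}: score {score:.2f} -> queue for human review")
```

Note that the score only flags accounts for investigation; the final call stays with a human, which is the collaboration the next paragraph describes.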
This collaboration between humans and AI can help you filter the fake accounts that pose a threat to your platform from the legitimate users you want to protect.
Want to know more about how Pasabi can help you detect fake accounts on your platform? Sign up to book a demo today.
→ Further reading: Safeguard your platform with AI fake profile detection