For many businesses, hiring is off the table right now. Global economic growth is set to slow to 2.6% in 2024, only just above the threshold associated with global recession. And in 2023, 1,185 tech companies laid off over 260,000 people.
What can businesses do when they know they need to scale their trust and safety operations, but they’re wary of employing new team members (especially when the global economy is so volatile)?
Approximately 35% of global companies report using artificial intelligence, with cybersecurity and fraud prevention the second most popular business use case for AI.
This comes as no surprise. Using algorithms and machine learning to detect suspicious activity that signals potential security breaches, AI can support your trust and safety ops teams in many ways. It can help with:
AI can collect and analyze vast amounts of user data from multiple sources to spot patterns or detect anomalies that signal fraudulent activity. What’s more, it can do this at incredibly high speed.
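To make the idea concrete, here is a minimal sketch of unsupervised anomaly detection using scikit-learn’s IsolationForest. The features (orders per hour, average order value, account age) and the contamination rate are illustrative assumptions, not a description of any particular vendor’s system.

```python
# Minimal anomaly-detection sketch with scikit-learn's IsolationForest.
# Feature choices and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated activity: [orders_per_hour, avg_order_value, account_age_days]
normal_users = rng.normal(loc=[2, 50, 400], scale=[1, 15, 120], size=(1000, 3))
suspicious = np.array([[40.0, 5.0, 1.0], [35.0, 8.0, 2.0]])  # high-volume, brand-new accounts
activity = np.vstack([normal_users, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(activity)  # -1 = anomaly, 1 = normal

print(f"Flagged {int((labels == -1).sum())} of {len(activity)} users for review")
```

In practice the model would score live event streams rather than a static matrix, but the principle is the same: learn what normal looks like, then surface the outliers.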
AI doesn’t rely solely on IP addresses and IP reputation. Instead, it monitors user behavior to distinguish malicious bots from real people. Scammers frequently use bots to deploy fraud at scale, but AI can stop them in their tracks.
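As a purely illustrative example of one behavioral signal: scripted bots often act at machine-regular intervals, while humans are bursty and irregular. The function below, with an invented threshold, checks the regularity of inter-request timing; production systems combine many such signals rather than relying on one.

```python
# Illustrative bot signal: suspiciously regular inter-request timing.
# The 0.1 coefficient-of-variation threshold is an invented example value.
import statistics

def looks_scripted(timestamps: list[float], cv_threshold: float = 0.1) -> bool:
    """Return True if the gaps between requests are suspiciously uniform."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:
        return False  # too little data to judge
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return True  # simultaneous requests: almost certainly automated
    cv = statistics.stdev(gaps) / mean_gap  # coefficient of variation
    return cv < cv_threshold

bot_like = [0.0, 1.01, 2.0, 3.02, 4.0, 5.01, 6.0]    # metronomic
human_like = [0.0, 3.2, 4.1, 9.8, 12.0, 19.5, 21.1]  # bursty
print(looks_scripted(bot_like), looks_scripted(human_like))  # True False
```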
“Credential stuffing” is when scammers take username-and-password pairs leaked in data breaches and try them en masse against login forms to break into user accounts. AI can flag an attack by spotting changes in website traffic and a higher-than-usual login failure rate.
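To show what that looks like mechanically, here is a toy sliding-window monitor. The five-minute window, 60% failure threshold, and minimum volume are arbitrary example values; a real system would tune them against its own baseline.

```python
# Toy credential-stuffing check: alert when the recent login failure rate
# far exceeds normal. Window size and thresholds are invented example values.
from collections import deque
import time

WINDOW_SECONDS = 300        # consider the last 5 minutes of attempts
FAILURE_RATE_ALERT = 0.60   # normal failure rates are typically far lower
MIN_ATTEMPTS = 50           # require enough volume before alerting

events: deque[tuple[float, bool]] = deque()  # (timestamp, succeeded)

def record_login(succeeded: bool, now: float | None = None) -> bool:
    """Record an attempt; return True if the window looks like an attack."""
    now = time.time() if now is None else now
    events.append((now, succeeded))
    while events and events[0][0] < now - WINDOW_SECONDS:
        events.popleft()  # evict attempts that fell out of the window
    failures = sum(1 for _, ok in events if not ok)
    return len(events) >= MIN_ATTEMPTS and failures / len(events) > FAILURE_RATE_ALERT
```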
Criminals take over legitimate accounts to steal money and access sensitive data. AI can detect a change in IP address and patterns of unusual behavior that signal an account has been compromised.
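Sketched below is one way such signals might be combined into a risk score. The signals and weights are invented for illustration; real systems use far richer features and learned weights.

```python
# Illustrative account-takeover risk score from simple login signals.
# Signals and weights are invented for this example.
from dataclasses import dataclass

@dataclass
class Login:
    ip_country: str
    device_id: str
    hour_of_day: int

@dataclass
class Profile:
    usual_country: str
    known_devices: set[str]
    usual_hours: range  # e.g. range(7, 23)

def takeover_risk(login: Login, profile: Profile) -> float:
    """Return a 0-1 score; higher means more likely compromised."""
    score = 0.0
    if login.ip_country != profile.usual_country:
        score += 0.5  # login from an unexpected country
    if login.device_id not in profile.known_devices:
        score += 0.3  # unrecognized device
    if login.hour_of_day not in profile.usual_hours:
        score += 0.2  # activity at an unusual time
    return score

alice = Profile("GB", {"dev-a1"}, range(7, 23))
print(takeover_risk(Login("RU", "dev-x9", 3), alice))  # 1.0 -> escalate
```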
AI monitors your platform around the clock and detects suspicious activity as it happens, allowing you to act immediately and stop fraudsters before they can harm your users or your reputation.
Every day, criminals set up new, sophisticated schemes. Because AI models continuously monitor user behavior, they can highlight unusual patterns that have never been seen before. This, in turn, gives your team real-life examples of the latest scams and threats to learn from.
The short answer is no. While AI can help monitor user behavior, optimize T&S policies, and identify areas that could pose new threats, it can’t pick up on the subtleties and nuances that humans can. Where possible, AI can be used to automate low-level decisioning, freeing humans to focus their valuable time and experience on moderating more complex issues. This is particularly important when AI programs deliver false positives, flagging legitimate accounts or transactions as fraudulent. Human moderators’ expertise is needed to investigate further and rectify any inaccuracies.
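A common way to implement this division of labor, sketched here with invented thresholds, is confidence-based routing: the model acts alone only on decisions it is very sure about, and everything ambiguous is queued for a human.

```python
# Sketch of human-in-the-loop routing. The model auto-actions only
# near-certain cases; the gray zone goes to moderators. Thresholds invented.

def route(fraud_probability: float) -> str:
    if fraud_probability >= 0.98:
        return "auto_block"    # near-certain fraud: act immediately
    if fraud_probability <= 0.02:
        return "auto_allow"    # near-certain legitimate: let it through
    return "human_review"      # ambiguous: a moderator decides

for p in (0.995, 0.50, 0.01):
    print(p, "->", route(p))
```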
AI systems also rely on clean data sets to perform well. If the data is outdated, incomplete, or inaccurate, detection quality suffers.
Naturally, many trust and safety moderators are concerned about AI replacing them, but we see this as a catalyst for the evolution of this important role. Moderators can now spend less time on the pattern detection that AI handles well and instead use their specialist knowledge more effectively. This is an incredibly exciting time for T&S practitioners, with increased opportunity to specialize in particular niches or industries.
This is why we’ll always need human moderators working alongside AI, to understand the contexts and nuances that AI alone can’t.
However, this doesn’t happen overnight. Just as it takes time and investment to introduce a new AI system, it takes time and investment to upskill a team. But putting the AI system in place first will free up your team’s time for training and development without compromising the safety of your platform.
Pasabi’s Trust & Safety Platform enables platforms, marketplaces and online communities to detect a wide range of threats using AI. We leverage behavioral analytics and machine learning to identify the bad actors behind fake accounts, fake reviews, counterfeit goods, and scams. Our platform screens your data against our repository of known bad actors, quickly highlighting the worst offenders so your team can take the necessary action. We identify clusters of suspicious activity in your data, boosting your team’s efficiency in finding fraudulent behavior and giving you the evidence you need to take enforcement action at scale. Humans bring context, empathy and good judgment; AI brings processing power and speed. Combined, they are a powerful tool in your trust and safety arsenal.
Want to know more about our Trust & Safety Platform? Or interested in booking a short demo? See how we can help you use AI to scale your trust and safety operations (without increasing headcount).