Artificial intelligence (AI) is set to transform our future. It's the hot topic on everyone's lips and one of the most debated subjects of our time.
While many are excited by the advancements AI promises, others fear the unknown, worrying that it poses a threat to humanity. Technophobia, the fear of advanced technologies like AI and robots, is on the rise, with a recent survey finding that only 9% of people believe AI will do more good than harm.
Prominent figures have weighed in on both sides of the debate. Elon Musk has called for a six-month pause on AI development, believing it poses an existential threat to the human race. And Geoffrey Hinton, the ‘Godfather of AI’, stepped down from his role as a VP and Engineering Fellow at Google in 2023, citing the dangers of misinformation and exploitation by bad actors. In contrast, Bill Gates argues that AI will boost productivity and creativity, and Mark Zuckerberg believes AI will make our lives better.
With the recent launch of OpenAI's GPT-4o, this debate has intensified. Is AI becoming too human-like? Will it take our jobs? Is it distorting our sense of reality? In this article, we explore the much-debated question: Should we fear AI?
One of the most common fears surrounding AI is that it will replace jobs, leading to mass unemployment, poverty, and a loss of control over decision-making. Around one in three Brits are worried that AI could take their jobs, with those in administrative and secretarial roles (43%) and sales and customer service jobs (41%) the most likely to be concerned. This fear is fueled by reports such as Goldman Sachs’ estimate that 300 million jobs could be affected by generative AI, with around 7% of jobs potentially being eliminated. And the first largely automated McDonald's in Texas, where customers order and collect their food with minimal human interaction, shows how this could happen…
Many are also concerned about AI replacing roles that require human judgment, such as in legal, medical, financial, and educational decisions. There is a fear that AI systems might embed existing biases from their historical training data, which could threaten human rights and increase inequalities, posing significant risks to marginalized groups.
Despite these fears, AI is designed to assist, not replace, humans. Most jobs today require cognitive abilities that AI lacks. For instance, while AI might handle data entry, it can't replace roles that require human creativity and complex, nuanced decision-making. While some jobs may become automated, AI is also creating new opportunities. New technologies have always created new job markets, and AI will be no different. It will generate demand for new skills and for people to train, support, and integrate these systems. Studies estimate that by 2025, 97 million people will work in the AI space, and the market is expected to grow by at least 120% year-over-year.
Furthermore, AI is helping save lives by assisting in areas where human life is at risk, such as medical diagnosis, extreme weather prediction, safety mechanisms, and more. In business, AI can automate repetitive and tedious tasks, allowing employees to focus on strategic and creative work. According to IBM, 34% of companies currently use AI, and a further 42% are exploring it. And for individuals, AI-powered assistants can streamline daily tasks and act as a helpful tool for those who need extra support, whether that's writing emails or simplifying complex concepts. As a result, AI has opened up numerous opportunities, with countless researchers and engineers dedicated to developing solutions that enhance our lives.
The idea of AI taking over the world has been a staple of science fiction for decades. Movies like The Terminator, The Matrix, and I, Robot depict AI as an intimidating force that can develop spontaneously and become super-intelligent, posing a threat to humanity. This narrative, supercharged by the media and popular culture, has ingrained a deep fear of AI in our collective consciousness.
Adding fuel to the fire are articles in reputable publications like the New York Times and the Washington Post, which paint scenes of AI machines disrupting political systems, taking power, and creating chaos. There is also anxiety about AI replacing human connections, as depicted in the film Her, where the main character falls in love with an AI operating system. Some fear a future where people form emotional bonds with machines instead of other humans, potentially impacting our social interactions in the real world.
However, the terrifying idea of AI taking over is pure fiction. AI is developed and programmed by humans and operates strictly within the parameters set by its developers. It cannot decide to ‘turn’ on its creators or act beyond its programmed functions. Its development is guided by strict ethical standards and by frameworks and regulations such as the US Blueprint for an AI Bill of Rights and the EU’s Artificial Intelligence Act, which take a systematic approach to controlling its advancement so that it remains a tool for human benefit rather than a threat.
These concerns raise an important question: Are we at risk of hindering progress as a result of AI fearmongering that skews public perception?
Among the most pressing fears of AI is its potential to alter our perception of reality, with 80% of Americans believing AI will help criminals scam them. AI-generated content, such as deepfakes and voice cloning, has reached a level of sophistication that makes it nearly impossible to distinguish from authentic material. This raises significant Trust & Safety issues. Deepfakes can be used to create misleading videos that tarnish reputations, spread false information, scam people, or even manipulate political outcomes. And voice cloning scams can imitate someone's voice with such accuracy that fraudsters can deceive people into giving away sensitive information. Generative AI is also enabling cybercriminals to create convincing personas, which can be used for a range of fraudulent activities, from fake reviews to phishing and romance scams.
This erosion of authenticity makes it difficult to trust what we see and read. If we cannot differentiate between real and fake, our ability to make informed decisions is compromised.
While these fears are valid, it’s crucial to understand that the root cause of these issues is not the AI technology itself, but irresponsible use by humans. The same AI that creates these threats can also be a powerful tool to defend against them.
Pasabi is at the forefront of using AI to safeguard online platforms, which are increasingly targeted by cybercriminals who exploit their vast user bases. Our AI-powered behavioral analytics technology detects bad actors online at scale by analyzing patterns to spot anomalies that indicate fraudulent activities.
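For readers curious what anomaly detection of this kind can look like in practice, here is a deliberately simplified sketch. It simulates a few hypothetical behavioral signals for user accounts and uses scikit-learn's IsolationForest to flag statistical outliers. The feature names, thresholds, and data are illustrative assumptions only and do not reflect Pasabi's actual models or data.

```python
# Illustrative only: a toy anomaly-detection sketch, not Pasabi's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated per-account behavioral features (all hypothetical):
# [reviews posted per day, average seconds between actions, share of 5-star ratings]
normal_users = np.column_stack([
    rng.poisson(1.0, 500),          # most accounts post rarely
    rng.normal(600, 120, 500),      # minutes-scale gaps between actions
    rng.uniform(0.2, 0.8, 500),     # mixed ratings
])
suspicious_users = np.column_stack([
    rng.poisson(40, 10),            # burst posting
    rng.normal(5, 2, 10),           # near-instant actions
    rng.uniform(0.95, 1.0, 10),     # uniformly glowing ratings
])
accounts = np.vstack([normal_users, suspicious_users])

# Fit an unsupervised model that isolates statistical outliers.
model = IsolationForest(contamination=0.02, random_state=42)
labels = model.fit_predict(accounts)   # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} accounts for review out of {len(accounts)}")
```

In a real system, flagged accounts would typically be passed to further automated checks or human review rather than actioned automatically; the point of the sketch is simply that unusual behavioral patterns can be surfaced from data at scale.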
As cyber threats become more sophisticated, businesses and online platforms need to embrace AI for Trust & Safety. It offers the precision, speed, and scalability required to protect your users and maintain your reputation.
Fear of the unknown is part of human nature. As with any new technology, AI brings uncertainties that can feed anxiety. However, we must recognize that it's not the AI itself we should fear, but the malicious ways humans might choose to use it.
AI has incredible power to solve complex problems, enhance productivity, and save lives. The key lies in how we manage its development responsibly and ethically, implement effective defenses against misuse, and counter AI fearmongering. Companies like Pasabi are leading the way by developing advanced tools to detect threat actors and their fraudulent activities, demonstrating that AI can be a powerful force for good.