chat
expand_more

The Rise of Malicious AI: 5 Key Insights from an Ethical Hacker

Discover how AI is being used for bad as hackers leverage it to carry out their cybercrimes, in this recap of a white paper from hacker FC.
September 17, 2024

Artificial intelligence has become prevalent in nearly every industry worldwide over the last two years, and cybercrime is no exception. While the cybersecurity industry is focused on how to use AI to stop bad actors, those cybercriminals were trying to defend against are innovating even faster—often using AI to supercharge their attacks and make them more sophisticated than ever before.

To understand more about how attackers are using this innovative technology, we worked with an ethical hacker. FreakyClown, known today as FC, provided us with some key insights in his most recent white paper, titled The Rise, Use, and Future of Malicious Al: A Hacker's Insight. Here are some of the lessons from his deep dive into this new world of cybercrime.

1. Malicious AI Technology is Readily Available

The increasing accessibility of AI technologies and frameworks combined with the knowledge transfer has led to an explosion of malicious tools and AI models. While commercial AI tools available to the public (like ChatGPT and CoPilot) have built-in safety systems and controls in place, cybercriminals are now creating their own versions, such as FraudGPT, PoisonGPT, VirusGPT and EvilGPT—each name inspired by their niche intended use.

As the dark web becomes flooded with new malicious tools and open-source AI models are being de-censored, criminals can utilize them for malicious activities. The backbone behind any AI model is the dataset, and whilst commercial ones do not allow you to ingest your own data into them, the criminalized versions do. This makes them not only more capable of creating attacks, but also in defining their target data.

2. AI-Enhanced Malware is Here

In 1980, the first malware that replicated itself and moved through a network, the Morris Worm, was released. Earlier this year, we saw the Morris Worm 2.0, which targets generative AI systems. Like traditional worms, it steals data and deploys malware, but it does so by manipulating the AI prompts to bypass security measures and replicating itself across different platforms. It was created by researchers as a proof of concept, but the creation of this tool is an example of the kind of advanced malware research and development that we are likely to see—both by criminals and legitimate security researchers. Within the next few years malware will have advanced techniques built in that will allow it to recognize the system it is in and morph itself to defend against, or even avoid, current detection systems.

3. Deepfakes are Deeply Troubling

In addition to malware, the introduction of generative AI tools has led to a substantial rise of impersonation attacks. These tools have enabled criminals to use digital twins and face-swapping technologies, adding far more sophistication to their more traditional scamming techniques. In February 2024, a finance worker in Hong Kong was tricked into paying $26 million to fraudsters posing as the multinational firm’s chief financial officer, and these tools are being used today to further political agendas.

4. AI is Leading to Increased Cybercrime-as-a-Service

With the rise of AI-enhanced hacking tools, it has become even easier for lesser-skilled criminals to start dabbling into the hacking side themselves. As with all tools, AI systems developed legally and legitimately can, unfortunately, be subverted by criminals for malicious purposes. Take, for example, the AI-powered tool Nebula by BerylliumSec, which is effectively an assistant for hackers, who can interact with the computer using natural language—making it possible for hackers to use it to do the heavy lifting of commands and execution to target vulnerable people and organizations.

5. AI is Needed to Stop AI

The rise of the malicious use of AI presents significant challenges to cybersecurity. Understanding the methods, impacts, and defense mechanisms is crucial for mitigating these threats. And so to protect themselves and their employees, organizations must proactively adopt AI-driven defense measures, collaborate on threat intelligence, and continuously educate their workforce to stay ahead of the malicious use of AI. It must be remembered that whilst the rise of AI is a force multiplier for threat actors, it is also a force multiplier for those defending against those attacks. Together, with the right tools, we can stay safe from these attacks.

Moving Into an AI-Powered Future

As AI becomes more integrated into every tool we use, the lines between traditional and AI-driven cyber attacks will blur. Regardless, the takeaway is clear: AI is both the problem and the solution. The key will be staying ahead of the curve, continuously adapting defense mechanisms, and fostering collaboration across both industries and borders.

Discover more from FC in his whitepaper, The Rise, Use, and Future of Malicious Al: A Hacker's Insight.

The Rise of Malicious AI: 5 Key Insights from an Ethical Hacker

See Abnormal in Action

Get a Demo

Get the Latest Email Security Insights

Subscribe to our newsletter to receive updates on the latest attacks and new trends in the email threat landscape.

Get AI Protection for Your Human Interactions

Protect your organization from socially-engineered email attacks that target human behavior.
Request a Demo
Request a Demo

Related Posts

B Manufacturing Industry Attack Trends Blog
New data shows a surge in advanced email attacks on manufacturing organizations. Explore our research on this alarming trend.
Read More
B Dropbox Open Enrollment Attack Blog
Discover how Dropbox was exploited in a sophisticated phishing attack that leveraged AiTM tactics to steal credentials during the open enrollment period.
Read More
B AISOC
Discover how AI is transforming security operation centers by reducing noise, enhancing clarity, and empowering analysts with enriched data for faster threat detection and response.
Read More
B Microsoft Blog
Explore the latest cybersecurity insights from Microsoft’s 2024 Digital Defense Report. Discover next-gen security strategies, AI-driven defenses, and critical approaches to counter evolving threats and safeguard your organization.
Read More
B Osterman Blog
Explore five key insights from Osterman Research on how AI-driven tools are revolutionizing defensive cybersecurity by enhancing threat detection, boosting security team efficiency, and countering sophisticated cyberattacks.
Read More
B AI Native Vendors
Explore how AI-native security like Abnormal fights back against AI-powered cyberattacks, protecting your organization from human-targeted threats.
Read More