chat
expand_more

The Rise of Malicious AI: 5 Key Insights from an Ethical Hacker

Discover how AI is being used for bad as hackers leverage it to carry out their cybercrimes, in this recap of a white paper from hacker FC.
September 17, 2024

Artificial intelligence has become prevalent in nearly every industry worldwide over the last two years, and cybercrime is no exception. While the cybersecurity industry is focused on how to use AI to stop bad actors, those cybercriminals were trying to defend against are innovating even faster—often using AI to supercharge their attacks and make them more sophisticated than ever before.

To understand more about how attackers are using this innovative technology, we worked with an ethical hacker. FreakyClown, known today as FC, provided us with some key insights in his most recent white paper, titled The Rise, Use, and Future of Malicious Al: A Hacker's Insight. Here are some of the lessons from his deep dive into this new world of cybercrime.

1. Malicious AI Technology is Readily Available

The increasing accessibility of AI technologies and frameworks combined with the knowledge transfer has led to an explosion of malicious tools and AI models. While commercial AI tools available to the public (like ChatGPT and CoPilot) have built-in safety systems and controls in place, cybercriminals are now creating their own versions, such as FraudGPT, PoisonGPT, VirusGPT and EvilGPT—each name inspired by their niche intended use.

As the dark web becomes flooded with new malicious tools and open-source AI models are being de-censored, criminals can utilize them for malicious activities. The backbone behind any AI model is the dataset, and whilst commercial ones do not allow you to ingest your own data into them, the criminalized versions do. This makes them not only more capable of creating attacks, but also in defining their target data.

2. AI-Enhanced Malware is Here

In 1980, the first malware that replicated itself and moved through a network, the Morris Worm, was released. Earlier this year, we saw the Morris Worm 2.0, which targets generative AI systems. Like traditional worms, it steals data and deploys malware, but it does so by manipulating the AI prompts to bypass security measures and replicating itself across different platforms. It was created by researchers as a proof of concept, but the creation of this tool is an example of the kind of advanced malware research and development that we are likely to see—both by criminals and legitimate security researchers. Within the next few years malware will have advanced techniques built in that will allow it to recognize the system it is in and morph itself to defend against, or even avoid, current detection systems.

3. Deepfakes are Deeply Troubling

In addition to malware, the introduction of generative AI tools has led to a substantial rise of impersonation attacks. These tools have enabled criminals to use digital twins and face-swapping technologies, adding far more sophistication to their more traditional scamming techniques. In February 2024, a finance worker in Hong Kong was tricked into paying $26 million to fraudsters posing as the multinational firm’s chief financial officer, and these tools are being used today to further political agendas.

4. AI is Leading to Increased Cybercrime-as-a-Service

With the rise of AI-enhanced hacking tools, it has become even easier for lesser-skilled criminals to start dabbling into the hacking side themselves. As with all tools, AI systems developed legally and legitimately can, unfortunately, be subverted by criminals for malicious purposes. Take, for example, the AI-powered tool Nebula by BerylliumSec, which is effectively an assistant for hackers, who can interact with the computer using natural language—making it possible for hackers to use it to do the heavy lifting of commands and execution to target vulnerable people and organizations.

5. AI is Needed to Stop AI

The rise of the malicious use of AI presents significant challenges to cybersecurity. Understanding the methods, impacts, and defense mechanisms is crucial for mitigating these threats. And so to protect themselves and their employees, organizations must proactively adopt AI-driven defense measures, collaborate on threat intelligence, and continuously educate their workforce to stay ahead of the malicious use of AI. It must be remembered that whilst the rise of AI is a force multiplier for threat actors, it is also a force multiplier for those defending against those attacks. Together, with the right tools, we can stay safe from these attacks.

Moving Into an AI-Powered Future

As AI becomes more integrated into every tool we use, the lines between traditional and AI-driven cyber attacks will blur. Regardless, the takeaway is clear: AI is both the problem and the solution. The key will be staying ahead of the curve, continuously adapting defense mechanisms, and fostering collaboration across both industries and borders.

Discover more from FC in his whitepaper, The Rise, Use, and Future of Malicious Al: A Hacker's Insight.

The Rise of Malicious AI: 5 Key Insights from an Ethical Hacker

See Abnormal in Action

Get a Demo

Get the Latest Email Security Insights

Subscribe to our newsletter to receive updates on the latest attacks and new trends in the email threat landscape.

Get AI Protection for Your Human Interactions

Protect your organization from socially-engineered email attacks that target human behavior.
Request a Demo
Request a Demo

Related Posts

B Podcast Blog
Explore insights on AI, collaboration, career growth, and unforgettable stories from industry leaders shaping the future of cybersecurity.
Read More
B AI Vendor
Learn how to evaluate transparency, risks, scalability, and ethical considerations to make informed cybersecurity decisions.
Read More
B SOC Prod
Learn how AI-driven automation boosts SOC productivity by reducing false positives, addressing skills gaps, and enhancing threat detection. Discover strategies to future-proof your SOC and strengthen cybersecurity defenses.
Read More
B Proofpoint Customer Story F500 Insurance Provider
A Fortune 500 insurance provider blocked 6,454 missed attacks and saved 341 SOC hours per month by adding Abnormal to address gaps left by Proofpoint.
Read More
B Malicious AI Platforms Blog
What happened to WormGPT? Discover how AI tools like WormGPT changed cybercrime, why they vanished, and what cybercriminals are using now.
Read More
B MKT748 Open Graph Images for Cyber Savvy 7
Explore insights from Brian Markham, CISO at EAB, as he discusses cybersecurity challenges, building trust in education, adapting to AI threats, and his goals for the future. Learn how he and his team are working to make education smarter while prioritizing data security.
Read More