How Abnormal Uses AI to Stop Generative AI Attacks
Gabriel Rebane, Group Technical Marketing Manager
Technology is evolving at an increasingly rapid pace. Advancements in AI, machine learning, robotics, and other technologies have ushered in a new era of innovation across various domains—from art and entertainment to natural language processing and content generation.
However, AI technologies can pose significant cybersecurity risks as AI-generated content has become increasingly sophisticated and indistinguishable from human-generated content. As a result, organizations must rely on good AI to detect and block the use of bad AI.
Generative AI tools like ChatGPT and the DALL-E 2 image generator are being leveraged by malicious actors to create convincing phishing emails, turning a mediocre threat actor into an effective one. In fact, in a recent survey conducted by Abnormal, 80% of security leaders said they believe their organization has already received AI-generated email or text attacks. Let's see how it's done.
Here, we have a simple Google Sheet. Each row contains information about a user targeted by a phishing campaign: the target's public LinkedIn profile, a phishing link to a malicious website, and the recipient's email address. All of this information is publicly available or can be obtained using scraping tools. To start, the attacker executes a Python script that pulls the information from the Google Sheet and leverages generative AI tools to generate hyper-personalized spear phishing attacks.
These phishing attacks, which used to take time and effort to execute, can now be carried out by any individual in a matter of seconds, with improved quality and in various languages. These messages will likely bypass traditional security solutions that rely on threat intelligence reports and indicators of compromise, because the messages contain neither. As a result, they are delivered to employees' inboxes, where recipients can interact with the threat actor.
So how do organizations prevent this rapid spread of malicious AI with good AI?
AI technology can also be used for good, resulting in an enhanced security stack. Here is a real email attack likely created by generative AI. Determining whether it was AI-generated is challenging: AI models have improved to the point where indicators like unnatural language and awkward phrasing are no longer reliable, and their continuous improvement makes it hard to rely solely on fixed rules and patterns to detect AI-generated content. Traditional security solutions that rely on indicators of compromise will not be able to detect this type of malicious content.
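To illustrate why fixed rules fall short, here is a minimal Python sketch of a legacy-style filter. The pattern list and sample message are hypothetical, but they show how a polished, AI-written lure contains none of the classic "tells" such filters look for:

```python
import re

# A naive rule-based filter built on classic phishing "tells".
# Illustrative only; not any real product's detection logic.
LEGACY_PATTERNS = [
    r"dear (customer|sir|madam)",   # generic greeting
    r"verify your account",         # stock credential-theft phrasing
    r"click here immediately",      # crude urgency
    r"kindly do the needful",       # awkward stock phrasing
]

def legacy_filter(body: str) -> bool:
    """Flag the message if any hard-coded pattern matches."""
    return any(re.search(p, body, re.IGNORECASE) for p in LEGACY_PATTERNS)

# A polished, AI-written lure matches none of those patterns.
ai_written = (
    "Hi Dana, following up on the Q3 vendor reconciliation we discussed. "
    "Finance flagged one outstanding invoice; the updated remittance "
    "details are in the portal whenever you have a moment."
)

print(legacy_filter(ai_written))  # False: the message sails through
```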
So how was Abnormal able to detect this attack?
Abnormal Security delivers an AI-native approach to email security, building a baseline of the known-good behavior of every employee and vendor in an organization. By understanding normal behavior, the platform can identify anomalous activity.
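As a rough illustration of behavioral baselining (a deliberately simplified sketch, not Abnormal's actual models), consider scoring a message by how unusual its recipient is for a given sender:

```python
from collections import Counter

class SenderBaseline:
    """Tracks how often a sender emails each recipient, then scores new
    messages by how far they deviate from that history."""

    def __init__(self) -> None:
        self.recipient_counts: Counter = Counter()
        self.total = 0

    def observe(self, recipient: str) -> None:
        self.recipient_counts[recipient] += 1
        self.total += 1

    def anomaly_score(self, recipient: str) -> float:
        # 1.0 for a never-seen recipient; lower the more often this
        # sender has emailed the recipient before.
        if self.total == 0:
            return 1.0
        return 1.0 - self.recipient_counts[recipient] / self.total

baseline = SenderBaseline()
for rcpt in ["alice@corp.com"] * 40 + ["bob@corp.com"] * 10:
    baseline.observe(rcpt)

print(baseline.anomaly_score("alice@corp.com"))    # 0.2: normal behavior
print(baseline.anomaly_score("attacker@evil.io"))  # 1.0: abnormal recipient
```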
Here, we can see that it identified an abnormal recipient pattern and suspicious sending behavior from an unusual sender. Our natural language processing models identified that the email body contains language attempting to steal information, as well as a financial request that may be aimed at stealing money from the organization. Even if this message was generated by AI, Abnormal looks beyond the email content, using a behavioral approach to automatically detect and remediate malicious messages.
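A toy version of the kind of intent signal those language models surface might look like the following. The cue lists and function name are illustrative assumptions; real NLP models go far beyond keyword matching:

```python
# Hypothetical cue lists standing in for learned intent models.
FINANCIAL_CUES = {"invoice", "wire transfer", "payment", "bank details"}
CREDENTIAL_CUES = {"password", "verify your account", "sign in"}

def intent_signals(body: str) -> dict:
    """Return coarse intent flags for a message body."""
    text = body.lower()
    return {
        "financial_request": any(cue in text for cue in FINANCIAL_CUES),
        "credential_theft": any(cue in text for cue in CREDENTIAL_CUES),
    }

print(intent_signals(
    "Please process the attached invoice and update our bank details."
))
# {'financial_request': True, 'credential_theft': False}
```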
As AI becomes more sophisticated and bad actors use it to create more attacks, the security tools put in place to block those attacks must also advance. Unlike traditional email security solutions, the Abnormal platform takes a radically different approach to stopping email attacks. Its unique API architecture ingests thousands of diverse signals about employee behavior and vendor communication patterns, signals that attackers cannot replicate with publicly available information.
It then applies advanced AI models and natural language processing to detect abnormalities in email behavior that indicate a potential attack. As a result, Abnormal can keep pace with new and emerging attack types, enabling us to block attacks even when they are created by AI and lack the indicators of compromise that legacy tools rely on for detection.
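Conceptually, those diverse signals roll up into a single verdict. Here is a minimal sketch of that fusion step, with assumed signal names and hand-picked weights standing in for the platform's learned models:

```python
from dataclasses import dataclass

@dataclass
class EmailSignals:
    recipient_anomaly: float   # behavioral baseline score, 0.0-1.0
    sender_anomaly: float      # unusual sending infrastructure, 0.0-1.0
    financial_request: bool    # from the NLP intent models
    credential_theft: bool

def verdict(signals: EmailSignals, threshold: float = 0.7) -> str:
    """Fuse signals into one score and decide whether to remediate."""
    score = 0.4 * signals.recipient_anomaly + 0.3 * signals.sender_anomaly
    score += 0.2 if signals.financial_request else 0.0
    score += 0.1 if signals.credential_theft else 0.0
    return "remediate" if score >= threshold else "deliver"

print(verdict(EmailSignals(1.0, 0.9, True, False)))   # remediate
print(verdict(EmailSignals(0.1, 0.0, False, False)))  # deliver
```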
66.4%
of organizations have implemented an AI-enabled email security solution over and above the protections offered in Microsoft 365 or Google Workspace.
97.3%
of organizations expect AI to be moderately or extremely important to their email defenses in the next 12 months.
$1.75M
average savings experienced by organizations investing in security AI and automation.