How Abnormal Trains LLMs for Improved Detection and Greater Context

Get an in-depth look at how Abnormal leverages the latest AI technologies to bolster our security capabilities, including fine-tuning our own large language models (LLMs).
November 9, 2023

In the realm of cybersecurity, each day brings new challenges. From sophisticated phishing attacks to highly detailed impersonations, threat actors are continuously innovating to break through security defenses. Abnormal continues to meet the challenge by utilizing the latest technologies and state-of-the-art techniques in machine learning.

In our previous blog post, we discussed how OpenAI's GPT-4 enhanced our approach to threat detection and how it’s already deployed behind the scenes at Abnormal. In this post, we're excited to delve deeper into how we fine-tune our own large language models (LLMs) to further bolster our security capabilities.

Why Use LLMs?

For the task of classifying a given email as a threat, Abnormal traditionally uses lightweight machine learning (ML) models that run on nearly all traffic. However, for certain cases, LLMs have additional benefits that make them worth the cost to run on specific segments of emails. These include:

  1. Zero-Shot and Rapid Response Capabilities: Typical ML requires collecting large labeled datasets to model a given attack type. With larger pre-trained models, we can leverage significantly smaller datasets for newer attacks. In some cases, we can use only a prompt or a single reference email to build a high-precision classifier (a minimal sketch of this approach follows below).

  2. Deeper Email Understanding: LLMs are pre-trained on trillions of words, which gives them deeper context about the world and about typical email interactions than we could achieve with smaller models. Knowing what’s typical gives the large model a powerful “common sense” reasoning ability that we’ve been able to use to reduce customer false positives and false negatives.

  3. Explainability: An incredibly useful consequence of language models for classification is that they can provide plausible explanations for their decisions—beyond just “attack” or “safe.” A nuanced explanation for why it chose what it did helps internal teams understand and diagnose automated decisions.

An example of deeper email understanding on an anonymized email. The LLM spots an impersonation with a malicious QR code link and provides an AI-generated explanation for how it came to that conclusion.
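
To make the zero-shot idea from the list above concrete, here is a minimal sketch of a prompt-based classifier that also returns an explanation. It is illustrative only: the model name, labels, and prompt wording are our assumptions for this example, not Abnormal's production configuration.

```python
# Minimal zero-shot email classifier: a single prompt, no labeled training data.
# Model name, labels, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an email security analyst. Classify the email as ATTACK, SPAM, "
    "or SAFE, then give a one-sentence explanation of your reasoning."
)

def classify_email(subject: str, body: str) -> str:
    """Return the model's verdict and explanation for a single email."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output is preferable for classification
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Subject: {subject}\n\nBody:\n{body}"},
        ],
    )
    return response.choices[0].message.content

print(classify_email(
    "Action required: verify your payroll details",
    "Scan the attached QR code to confirm your direct deposit information.",
))
```

Asking for the explanation alongside the verdict is what makes the third benefit above possible: the same call that classifies the message also tells analysts why.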

Training an AbnormalGPT

The domain of LLMs isn't confined to OpenAI's GPT-4 and similar proprietary models. The recent launch of powerful open-source alternatives like Llama 2, Falcon, and Mistral demonstrates that smaller models can perform at a similar level on more specific tasks.

Using the latest techniques for efficiently fine-tuning these models, such as Low-Rank Adaptation (LoRA), we can incorporate our internally labeled email datasets to improve them. This enables us to align the model's “attack” vs. “spam” vs. “safe email” definitions and improve its raw classification abilities. It also allows us to skip the prompt engineering and vector store that were required for us to use GPT-4 as a security analyst.
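
As a rough illustration of what LoRA fine-tuning looks like in practice, the sketch below uses Hugging Face's peft library. The base model, target modules, and hyperparameters are assumptions chosen for the example, not the exact configuration we use.

```python
# Sketch of parameter-efficient fine-tuning with LoRA using Hugging Face's
# peft library. Base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # any open 7B causal LM works similarly
base_model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices are trained, a labeled email dataset can be folded in without updating or storing a full copy of the 7-billion-parameter base model.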

Early internal experiments have shown that, when applied to this task, fine-tuned open-source models with 7 billion parameters can perform at the level of GPT-4, which is suspected to have over a trillion parameters. Not only that, they’re significantly more cost-effective to run.

We start with a heavily pre-trained open-source model. Then we fine-tune it on a mix of safe and malicious emails combined with various useful features not found within the body text or headers of the message.
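
One plausible way to combine message text with those out-of-band features is to serialize everything into the fine-tuning prompt itself. The field names, signals, and labels below are hypothetical, chosen only to show the shape of a training example.

```python
# Hypothetical construction of a fine-tuning example that folds auxiliary
# signals (not found in the body or headers) into the prompt text.
import json

def build_training_example(email: dict, features: dict, label: str) -> dict:
    """Serialize an email plus auxiliary features into a prompt/completion pair."""
    prompt = (
        f"Sender tenure (days): {features['sender_tenure_days']}\n"
        f"Prior messages from this sender: {features['prior_message_count']}\n"
        f"Subject: {email['subject']}\n\n"
        f"{email['body']}\n\n"
        "Classify this email as ATTACK, SPAM, or SAFE:"
    )
    return {"prompt": prompt, "completion": label}

example = build_training_example(
    {"subject": "Urgent wire transfer", "body": "Please process this payment today."},
    {"sender_tenure_days": 2, "prior_message_count": 0},
    "ATTACK",
)
print(json.dumps(example, indent=2))
```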

Utilizing LLMs to Enhance Email Security

As attackers implement newer strategies for bypassing traditional defenses, Abnormal stays ahead by leveraging the latest AI technologies. In the future, we plan to roll out LLM-based classifiers to higher volumes of messages, employ them for non-email-based attacks, and enhance their abilities with respect to attachment and link-based attacks through recent breakthroughs in large multimodal models (LMMs).

Stay tuned for the third and final post in our series exploring how Abnormal is using AI to enrich our capabilities. And if you want to see for yourself how Abnormal stops sophisticated attacks, schedule a demo today.
