
How Abnormal Trains LLMs for Improved Detection and Greater Context

Get an in-depth look at how Abnormal leverages the latest AI technologies to bolster our security capabilities, including fine-tuning our own large language models (LLMs).
November 9, 2023

In the realm of cybersecurity, each day brings new challenges. From sophisticated phishing attacks to highly detailed impersonations, threat actors are continuously innovating to break through security defenses. Abnormal continues to meet the challenge by utilizing the latest technologies and state-of-the-art techniques in machine learning.

In our previous blog post, we discussed how OpenAI's GPT-4 enhanced our approach to threat detection and how it’s already deployed behind the scenes at Abnormal. In this post, we're excited to delve deeper into how we can additionally fine-tune our own large language models (LLMs) to bolster our security capabilities.

Why Use LLMs?

For the task of classifying a given email as a threat, Abnormal traditionally uses lightweight machine learning (ML) models that run on nearly all traffic. However, for certain cases, LLMs have additional benefits that make them worth the cost to run on specific segments of emails. These include:

  1. Zero-Shot and Rapid Response Capabilities: Typical ML requires large labeled datasets to be collected for modeling a certain attack type. With larger pre-trained models, we can leverage significantly smaller datasets for newer attacks. In some cases, we can use only a prompt or a single reference email to build a high-precision classifier.

  2. Deeper Email Understanding: LLMs are pre-trained on trillions of words, which gives them deeper context about the world and about typical email interactions than we could achieve with smaller models. Knowing what’s typical gives the large model a powerful “common sense” reasoning ability that we’ve been able to use to reduce customer false positives and false negatives.

  3. Explainability: An incredibly useful consequence of language models for classification is that they can provide plausible explanations for their decisions—beyond just “attack” or “safe.” A nuanced explanation for why it chose what it did helps internal teams understand and diagnose automated decisions.
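To make the zero-shot and explainability ideas concrete, here is a minimal sketch of what a prompt-based classifier could look like, assuming an OpenAI-style chat completion API. The prompt template, the “LABEL | explanation” response format, and the sample model response below are illustrative assumptions, not Abnormal’s actual implementation.

```python
# Sketch of a zero-shot email classifier built from a single prompt.
# The template, response format, and sample response are illustrative.

PROMPT_TEMPLATE = """You are an email security analyst.
Classify the email below as ATTACK, SPAM, or SAFE, then explain why.

Reference attack (single labeled example):
{reference}

Email to classify:
{email}

Answer in the form: LABEL | explanation"""


def build_prompt(reference: str, email: str) -> str:
    """Fill the template with one reference attack and the email to judge."""
    return PROMPT_TEMPLATE.format(reference=reference, email=email)


def parse_response(text: str) -> tuple[str, str]:
    """Split a 'LABEL | explanation' model response into its two parts."""
    label, _, explanation = text.partition("|")
    return label.strip(), explanation.strip()


# Example: parsing a (mock) model response.
label, why = parse_response(
    "ATTACK | The sender impersonates IT support and the QR code "
    "resolves to a credential-harvesting domain."
)
```

Because the classifier is driven entirely by the prompt and one reference email, standing up a new high-precision detector does not require collecting a large labeled dataset first.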


An example of deeper email understanding on an anonymized email. The LLM spots an impersonation with a malicious QR code link and provides an AI-generated explanation for how it came to that conclusion.

Training an AbnormalGPT

The domain of LLMs isn't confined to OpenAI's GPT-4 and similar proprietary models. The recent launch of powerful open-source alternatives like Llama 2, Falcon, and Mistral demonstrates that smaller models can perform at a similar level on more specific tasks.

Using the latest techniques for efficiently fine-tuning these models, like Low-Rank Adaptation (LoRA), we can incorporate our internally labeled email datasets into improving them. This enables us to align the model's “attack” vs. “spam” vs. “safe email” definitions and improve its raw classification abilities. It also allows us to skip prompt engineering or the need for a vector store, which was required for us to use GPT-4 as a security analyst.
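The core idea behind LoRA can be shown in a few lines of NumPy: the pre-trained weight matrix stays frozen, and only two small low-rank matrices are trained. The hidden size, rank, and scaling factor below are illustrative values, not those of any particular model.

```python
import numpy as np

# Minimal illustration of Low-Rank Adaptation (LoRA): instead of
# updating a frozen weight matrix W directly, train two small
# matrices A and B whose product forms a low-rank update.
# Dimensions and rank here are illustrative only.
d, r, alpha = 4096, 8, 16               # hidden size, LoRA rank, scaling

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))         # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable, r x d
B = np.zeros((d, r))                    # trainable, zero-initialized

# Effective weight during fine-tuning: W + (alpha / r) * B @ A.
# With B initialized to zero, training starts exactly at W.
W_adapted = W + (alpha / r) * (B @ A)

full_params = W.size                    # parameters in a full update
lora_params = A.size + B.size           # parameters LoRA actually trains
print(f"trainable: {lora_params:,} of {full_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Training well under 1% of the parameters per adapted matrix is what makes fine-tuning a multi-billion-parameter model on an internal email dataset tractable.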

Early internal experiments have shown that, when applied to this task, fine-tuned open-source models with 7 billion parameters can perform at the level of GPT-4, which is suspected to have over a trillion parameters. Not only that, but the smaller model is significantly more cost-effective to run.
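The cost gap follows from back-of-envelope arithmetic using the common estimate of roughly 2 FLOPs per parameter per generated token. The 1-trillion figure for GPT-4 is an outside estimate, not a confirmed number.

```python
# Back-of-envelope inference cost comparison, using the common
# ~2 FLOPs per parameter per generated token approximation.
# The 1T parameter count for GPT-4 is an unconfirmed estimate.
FLOPS_PER_PARAM_PER_TOKEN = 2

def flops_per_token(n_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return FLOPS_PER_PARAM_PER_TOKEN * n_params

small = flops_per_token(7e9)    # fine-tuned 7B open-source model
large = flops_per_token(1e12)   # ~1T-parameter proprietary model

print(f"7B model: {small:.1e} FLOPs/token")
print(f"1T model: {large:.1e} FLOPs/token")
print(f"ratio:    ~{large / small:.0f}x")
```

Even under this rough model, the 7B model is over two orders of magnitude cheaper per token, which is what makes running it on larger email segments feasible.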


We start with a heavily pre-trained open-source model. Then we fine-tune it on a mix of safe and malicious emails combined with various useful features not found within the body text or headers of the message.
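One way to fold signals that aren't in the body text or headers into a fine-tuning example is to serialize them alongside the email into a single prompt/label pair. This is a sketch of that idea; the feature names (`sender_domain_age_days`, `link_reputation`) are hypothetical, chosen purely for illustration.

```python
import json

# Sketch of packing an email body plus out-of-body signals into one
# fine-tuning example. Feature names here are hypothetical.

def make_training_example(body: str, features: dict, label: str) -> dict:
    """Serialize body text and auxiliary features into a prompt/label pair."""
    feature_block = "\n".join(f"{k}: {v}" for k, v in sorted(features.items()))
    prompt = (
        "Email body:\n" + body.strip() + "\n\n"
        "Signals:\n" + feature_block + "\n\n"
        "Classification:"
    )
    return {"prompt": prompt, "completion": label}

example = make_training_example(
    body="Your mailbox is full. Scan the QR code to upgrade.",
    features={"sender_domain_age_days": 3, "link_reputation": "unknown"},
    label="attack",
)
print(json.dumps(example, indent=2))
```

Serializing the signals as plain key-value text lets the same fine-tuning pipeline handle new features without changing the model's input format.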

Utilizing LLMs to Enhance Email Security

As attackers implement newer strategies for bypassing traditional defenses, Abnormal stays ahead by leveraging the latest AI technologies. In the future, we plan to roll out LLM-based classifiers to higher volumes of messages, employ them for non-email-based attacks, and enhance their abilities with respect to attachment and link-based attacks through recent breakthroughs in large multimodal models (LMMs).

Stay tuned for the third and final post in our series exploring how Abnormal is using AI to enrich our capabilities. And if you want to see for yourself how Abnormal stops sophisticated attacks, schedule a demo today.
