How Abnormal Trains LLMs for Improved Detection and Greater Context

Get an in-depth look at how Abnormal leverages the latest AI technologies to bolster our security capabilities, including fine-tuning our own large language models (LLMs).
November 9, 2023

In the realm of cybersecurity, each day brings new challenges. From sophisticated phishing attacks to highly detailed impersonations, threat actors are continuously innovating to break through security defenses. Abnormal continues to meet the challenge by utilizing the latest technologies and state-of-the-art techniques in machine learning.

In our previous blog post, we discussed how OpenAI's GPT-4 enhanced our approach to threat detection and how it's already deployed behind the scenes at Abnormal. In this post, we're excited to delve deeper into how we fine-tune our own large language models (LLMs) to further bolster our security capabilities.

Why Use LLMs?

For the task of classifying a given email as a threat, Abnormal traditionally uses lightweight machine learning (ML) models that run on nearly all traffic. For certain segments of email, however, LLMs offer additional benefits that make them worth the cost to run. These include:

  1. Zero-Shot and Rapid Response Capabilities: Traditional ML requires collecting a large labeled dataset to model each attack type. With large pre-trained models, we can rely on significantly smaller datasets for newer attacks. In some cases, a single prompt or reference email is enough to build a high-precision classifier (see the sketch after this list).

  2. Deeper Email Understanding: LLMs are pre-trained on trillions of words, which gives them a deeper understanding of the world and of typical email interactions than we could achieve with smaller models. Knowing what’s typical gives the large model a powerful “common sense” reasoning ability that we’ve used to reduce customer false positives and false negatives.

  3. Explainability: An incredibly useful consequence of language models for classification is that they can provide plausible explanations for their decisions—beyond just “attack” or “safe.” A nuanced explanation for why it chose what it did helps internal teams understand and diagnose automated decisions.
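
To make the zero-shot and explainability ideas concrete, here is a minimal sketch of prompting a general-purpose LLM to classify an email and explain its decision. It uses the OpenAI Python client for illustration; the prompt wording, model choice, and label set are our own assumptions, not Abnormal's production setup.

```python
# Minimal sketch: zero-shot email classification with an explanation.
# Illustrative only -- the prompt, model, and labels are assumptions,
# not Abnormal's production pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are an email security analyst.
Classify the email below as exactly one of: attack, spam, safe.
Then give a one-sentence explanation for your decision.

Email:
{email}

Answer in the format:
label: <attack|spam|safe>
explanation: <one sentence>"""

def classify_email(email_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output for classification
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(email=email_text)}],
    )
    return response.choices[0].message.content

print(classify_email(
    "Your mailbox is full. Scan the attached QR code within 24 hours "
    "to keep your account active. -- IT Helpdesk"
))
```

No labeled training data is involved here: the classifier is the prompt itself, which is what makes this approach useful for rapid response to brand-new attack types.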

An example of deeper email understanding on an anonymized email. The LLM spots an impersonation with a malicious QR code link and provides an AI-generated explanation for how it came to that conclusion.

Training an AbnormalGPT

The domain of LLMs isn't confined to OpenAI's GPT-4 and similar proprietary models. The recent release of powerful open-source alternatives like Llama 2, Falcon, and Mistral demonstrates that smaller models can perform at a similar level to proprietary ones on more specific tasks.

Using the latest techniques for efficiently fine-tuning these models, such as Low-Rank Adaptation (LoRA), we can train them on our internally labeled email datasets. This aligns the model's “attack” vs. “spam” vs. “safe email” definitions with ours and improves its raw classification abilities. It also lets us skip the prompt engineering and vector store that were required to use GPT-4 as a security analyst.
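
As a rough illustration of what LoRA fine-tuning looks like in practice, here is a minimal sketch using the Hugging Face peft library. The base model, rank, and target modules are illustrative assumptions; Abnormal's actual training setup is not public.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# The base model and hyperparameters here are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # any open-source 7B model works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA inserts small trainable low-rank matrices into the attention
# projections while the billions of base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the low-rank update matrices
    lora_alpha=16,      # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total

# From here, train with a standard loop or transformers.Trainer on
# labeled email examples serialized as text (see the sketch below).
```

Because only the small adapter matrices receive gradient updates, fine-tuning stays cheap relative to full training while the base model's pre-trained knowledge is preserved.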

Early internal experiments have shown that, when applied to this task, fine-tuned open-source models with 7 billion parameters can perform at the level of GPT-4, which is suspected to have over a trillion parameters. They are also significantly more cost-effective to run.

We start with a heavily pre-trained open-source model. Then we fine-tune it on a mix of safe and malicious emails combined with various useful features not found within the body text or headers of the message.
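
The caption above mentions combining email text with features that live outside the body and headers. One simple way to do that is to serialize those signals into the text the model is fine-tuned on. The sketch below shows the idea with hypothetical feature names (sender history, link reputation) invented purely for illustration.

```python
# Sketch: serializing an email plus auxiliary signals into one training
# example. Feature names and the label format are hypothetical.

def build_training_example(email_body: str, features: dict, label: str) -> str:
    """Fold out-of-band signals into the text the LLM is trained on."""
    feature_lines = "\n".join(f"- {name}: {value}" for name, value in features.items())
    return (
        "Signals:\n"
        f"{feature_lines}\n\n"
        "Email:\n"
        f"{email_body}\n\n"
        f"label: {label}"
    )

example = build_training_example(
    email_body="Please review the attached invoice and wire payment today.",
    features={
        "sender_first_seen_days_ago": 0,       # brand-new sender
        "display_name_matches_executive": True,
        "link_domain_reputation": "unknown",
    },
    label="attack",
)
print(example)
```

At inference time, the same serialization is applied to incoming mail so the fine-tuned model sees the behavioral signals alongside the raw message text.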

Utilizing LLMs to Enhance Email Security

As attackers implement newer strategies for bypassing traditional defenses, Abnormal stays ahead by leveraging the latest AI technologies. In the future, we plan to roll out LLM-based classifiers to higher volumes of messages, employ them against non-email attacks, and use recent breakthroughs in large multimodal models (LMMs) to strengthen detection of attachment- and link-based attacks.

Stay tuned for the third and final post in our series exploring how Abnormal is using AI to enrich our capabilities. And if you want to see for yourself how Abnormal stops sophisticated attacks, schedule a demo today.

