How Abnormal Trains LLMs for Improved Detection and Greater Context
In the realm of cybersecurity, each day brings new challenges. From sophisticated phishing attacks to highly detailed impersonations, threat actors are continuously innovating to break through security defenses. Abnormal continues to meet the challenge by utilizing the latest technologies and state-of-the-art techniques in machine learning.
In our previous blog post, we discussed how OpenAI's GPT-4 enhances our approach to threat detection and how it’s already deployed behind the scenes at Abnormal. In this post, we're excited to dive deeper into how we fine-tune our own large language models (LLMs) to further bolster our security capabilities.
Why Use LLMs?
For the task of classifying a given email as a threat, Abnormal traditionally uses lightweight machine learning (ML) models that run on nearly all traffic. LLMs are more expensive to run, but they offer benefits that make them worthwhile on specific segments of email. These include:
Zero-Shot and Rapid Response Capabilities: Typical ML requires collecting large labeled datasets to model a given attack type. With larger pre-trained models, we can leverage significantly smaller datasets for newer attacks. In some cases, we can use only a prompt or a single reference email to build a high-precision classifier (see the sketch following this list).
Deeper Email Understanding: LLMs are pre-trained on trillions of words, giving them a deeper understanding of the world and of typical email interactions than we could achieve with smaller models. Knowing what’s typical gives a large model a powerful “common sense” reasoning ability that we’ve used to reduce customer false positives and false negatives.
Explainability: A particularly useful property of using language models for classification is that they can provide plausible explanations for their decisions, beyond just “attack” or “safe.” A nuanced explanation of why the model chose what it did helps internal teams understand and diagnose automated decisions.
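To make the zero-shot idea concrete, here is a minimal sketch of a prompt-only classifier. The model name, label set, and prompt wording are illustrative assumptions for this example, not our production configuration:

```python
# Illustrative sketch of a prompt-only (zero-shot) email classifier.
# The model, labels, and prompt are assumptions, not a production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["attack", "spam", "safe"]

def classify_email(subject: str, body: str) -> str:
    """Ask the model for a verdict plus a short explanation of its reasoning."""
    prompt = (
        "You are an email security analyst. Classify the email below as one of "
        f"{', '.join(LABELS)}, then briefly explain your reasoning.\n\n"
        f"Subject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for classification
    )
    return response.choices[0].message.content

print(classify_email(
    "Urgent: update your payroll details",
    "Hi, please update your direct deposit information using the link below...",
))
```

Because the model returns its reasoning along with the verdict, the same call doubles as a source of explanations for each decision.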
Training an AbnormalGPT
The domain of LLMs isn't confined to OpenAI's GPT-4 and similar proprietary models. The recent release of powerful open-source alternatives like Llama 2, Falcon, and Mistral demonstrates that smaller models can perform at a comparable level on more narrowly scoped tasks.
Using the latest techniques for efficiently fine-tuning these models, such as Low-Rank Adaptation (LoRA), we can train them directly on our internally labeled email datasets. This lets us align the model's “attack” vs. “spam” vs. “safe email” definitions with ours and improve its raw classification abilities. It also removes the need for the prompt engineering and vector store that using GPT-4 as a security analyst required.
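As a rough illustration of what LoRA fine-tuning looks like in practice, the sketch below uses Hugging Face's transformers and peft libraries. The base checkpoint, rank, and target modules are placeholder choices, not the exact configuration we train with:

```python
# Minimal LoRA fine-tuning setup with Hugging Face transformers + peft.
# The checkpoint, rank, and target modules below are placeholder choices.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # any open-source ~7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank adapter matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights

# The wrapped model can then be trained on labeled emails with a standard
# transformers Trainer loop; only the small adapter matrices are updated.
```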
Early internal experiments have shown that, when applied to this task, fine-tuned open-source models with 7 billion parameters can perform at the level of GPT-4, which is rumored to have over a trillion parameters. Not only that, they’re significantly more cost-effective to run.
We start with a heavily pre-trained open-source model. Then we fine-tune it on a mix of safe and malicious emails combined with various useful features not found within the body text or headers of the message.
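Conceptually, each training example serializes the message together with those extra signals into a single prompt/label pair. The sketch below is purely hypothetical; the field names, signals, and labels are stand-ins for illustration:

```python
# Hypothetical sketch of turning an email plus out-of-band signals into one
# fine-tuning example. Field names, signals, and labels are illustrative only.
def build_training_example(email: dict, signals: dict, label: str) -> dict:
    """Flatten the email and auxiliary features into a prompt/completion pair."""
    feature_lines = "\n".join(f"- {name}: {value}" for name, value in signals.items())
    prompt = (
        f"Sender: {email['sender']}\n"
        f"Subject: {email['subject']}\n"
        f"Body:\n{email['body']}\n\n"
        f"Additional signals:\n{feature_lines}\n\n"
        "Classification:"
    )
    return {"prompt": prompt, "completion": f" {label}"}

example = build_training_example(
    email={
        "sender": "ceo-name@freemail.example.com",
        "subject": "Quick favor",
        "body": "Are you at your desk? I need you to handle a payment for me.",
    },
    signals={
        "sender_first_seen_days_ago": 0,
        "display_name_matches_executive": True,
    },
    label="attack",
)
```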
Utilizing LLMs to Enhance Email Security
As attackers implement newer strategies for bypassing traditional defenses, Abnormal stays ahead by leveraging the latest AI technologies. In the future, we plan to roll out LLM-based classifiers to higher volumes of messages, employ them for non-email-based attacks, and enhance their ability to handle attachment- and link-based attacks by applying recent breakthroughs in large multimodal models (LMMs).
Stay tuned for the third and final post in our series exploring how Abnormal is using AI to enrich our capabilities. And if you want to see for yourself how Abnormal stops sophisticated attacks, schedule a demo today.