
How Abnormal Trains LLMs for Improved Detection and Greater Context

Get an in-depth look at how Abnormal leverages the latest AI technologies to bolster our security capabilities, including fine-tuning our own large language models (LLMs).
November 9, 2023

In the realm of cybersecurity, each day brings new challenges. From sophisticated phishing attacks to highly detailed impersonations, threat actors are continuously innovating to break through security defenses. Abnormal continues to meet the challenge by utilizing the latest technologies and state-of-the-art techniques in machine learning.

In our previous blog post, we discussed how OpenAI's GPT-4 enhanced our approach to threat detection and how it's already deployed behind the scenes at Abnormal. In this post, we're excited to delve deeper into how we fine-tune our own large language models (LLMs) to further bolster our security capabilities.

Why Use LLMs?

For the task of classifying a given email as a threat, Abnormal traditionally uses lightweight machine learning (ML) models that run on nearly all traffic. However, for certain cases, LLMs have additional benefits that make them worth the cost to run on specific segments of emails. These include:

  1. Zero-Shot and Rapid Response Capabilities: Typical ML requires collecting large labeled datasets to model a given attack type. With larger pre-trained models, we can leverage significantly smaller datasets for newer attacks. In some cases, we can use only a prompt or a single reference email to build a high-precision classifier (a minimal sketch follows this list).

  2. Deeper Email Understanding: LLMs are pre-trained on trillions of words, which gives them a deeper context of the world and of typical email interactions than we could achieve with smaller models. Knowing what’s typical gives the large model a powerful “common sense” reasoning ability that we’ve been able to use to reduce customer false positives and false negatives.

  3. Explainability: An incredibly useful property of language models used for classification is that they can provide plausible explanations for their decisions, going beyond a bare “attack” or “safe” verdict. A nuanced explanation of why the model chose what it did helps internal teams understand and diagnose automated decisions.
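As a minimal illustration of the zero-shot capability described in point 1, the sketch below classifies a single email with an off-the-shelf Hugging Face zero-shot pipeline and no task-specific training data. The model choice, candidate labels, and email text are illustrative assumptions, not Abnormal's production setup.

```python
# Minimal zero-shot sketch: classify one email with no labeled training
# data. The model choice, candidate labels, and email text are
# illustrative assumptions only.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumed off-the-shelf NLI model
)

email_body = (
    "Your mailbox is over quota. Scan the attached QR code within 24 hours "
    "to verify your account or it will be suspended."
)

result = classifier(
    email_body,
    candidate_labels=["phishing attack", "spam", "safe email"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # top label + score
```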

An example of deeper email understanding on an anonymized email. The LLM spots an impersonation with a malicious QR code link and provides an AI-generated explanation for how it came to that conclusion.

Training an AbnormalGPT

The domain of LLMs isn't confined to OpenAI's GPT-4 and similar proprietary models. The recent launch of powerful open-source alternatives like Llama 2, Falcon, and Mistral demonstrates that smaller models can perform at a similar level to their proprietary counterparts on more specific tasks.

Using the latest techniques for efficiently fine-tuning these models, like Low-Rank Adaptation (LoRA), we can use our internally labeled email datasets to improve them. This enables us to align the model's “attack” vs. “spam” vs. “safe email” definitions and improve its raw classification abilities. It also lets us skip the prompt engineering and vector store that were required to use GPT-4 as a security analyst.
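As a rough illustration of how a LoRA fine-tune can be wired up, the sketch below uses the open-source Hugging Face PEFT library. The base model, label set, and hyperparameters are assumptions chosen for illustration; they are not Abnormal's actual configuration.

```python
# Hedged sketch of LoRA fine-tuning an open-source model for email
# classification with the Hugging Face PEFT library. The base model,
# label set, and hyperparameters are illustrative assumptions, not
# Abnormal's actual configuration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=3,  # e.g. attack / spam / safe email
)
model.config.pad_token_id = tokenizer.pad_token_id

lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=16,       # scaling applied to the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt attention projections only
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices receive gradient updates, a 7-billion-parameter model can often be fine-tuned on a single high-memory GPU, which is a large part of why this approach is so much cheaper than relying on a proprietary model.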

Early internal experiments have shown that, when applied to this task, fine-tuned open-source models with 7 billion parameters can perform at the level of GPT-4, which is suspected to have over a trillion parameters. Not only that, they are significantly more cost-effective to run.

We start with a heavily pre-trained open-source model. Then we fine-tune it on a mix of safe and malicious emails combined with various useful features not found within the body text or headers of the message.
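One simple way to combine the message with such out-of-band features is to serialize everything into a single text input for the model. The sketch below illustrates that pattern; the field names, signals, and label scheme are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of building a fine-tuning example that combines the
# email text with signals not found in the body or headers. The field
# names, signals, and label scheme are hypothetical.
def build_training_example(email: dict, signals: dict, label: str) -> dict:
    """Serialize an email plus out-of-band signals into one text input."""
    text = (
        f"From: {email['from']}\n"
        f"Subject: {email['subject']}\n"
        f"Body: {email['body']}\n"
        f"Sender first seen: {signals['sender_first_seen_days']} days ago\n"
        f"Prior messages from sender: {signals['prior_message_count']}\n"
    )
    return {"text": text, "label": label}  # label: attack / spam / safe

example = build_training_example(
    email={
        "from": "it-support@examp1e.com",
        "subject": "Action required: verify your mailbox",
        "body": "Scan the QR code below to keep your account active.",
    },
    signals={"sender_first_seen_days": 0, "prior_message_count": 0},
    label="attack",
)
print(example["text"])
```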

Utilizing LLMs to Enhance Email Security

As attackers implement newer strategies for bypassing traditional defenses, Abnormal stays ahead by leveraging the latest AI technologies. In the future, we plan to roll out LLM-based classifiers to higher volumes of messages, employ them for non-email-based attacks, and enhance their abilities with respect to attachment and link-based attacks through recent breakthroughs in large multimodal models (LMMs).

Stay tuned for the third and final post in our series exploring how Abnormal is using AI to enrich our capabilities. And if you want to see for yourself how Abnormal stops sophisticated attacks, schedule a demo today.
