Combining ML Models to Detect Email Attacks

November 18, 2020

This article is a follow-up to one I wrote a year ago—Lessons from Building AI to Stop Cyberattacks—in which I discussed the overall problem of detecting social engineering attacks using ML techniques and our general solution at Abnormal. This post aims to walk through the process we use to model various aspects of a given email, and then to ultimately detect and block attacks.

As discussed in the previous post, sophisticated social engineering email attacks are on the rise and getting more advanced every day. They prey on the trust we put in our business tools and social networks, especially when a message appears to be from someone on our contact list, or even more insidiously, when the attack is actually from a contact whose account has been compromised. The FBI estimates that, over the past few years, more than 75% of cyberattacks started with social engineering, usually through email.

Why Is Email Attack Detection a Hard ML Problem?

There are many reasons why detecting these attacks is extremely hard. Here are a few of them.

Detecting Attacks is Like Finding a Needle in a Haystack: The first challenge is that the base rate is very low. Advanced attacks are rare in comparison to the overall volume of legitimate email. For example:

  • Only 1 in 100,000 emails is an advanced spear-phishing attack.
  • Fewer than 1 in 10,000,000 emails is an advanced business email compromise attack, such as invoice fraud or lateral spear phishing, where a compromised account phishes another employee.
  • When compared to spam, which accounts for 65 in every 100 emails, we have an extremely imbalanced classification problem, which raises all sorts of difficulties (one common mitigation is sketched after this list).
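To make the imbalance concrete, here is a minimal sketch of one standard mitigation: reweighting the rare class during training. The data is synthetic and the roughly 1-in-10,000 positive rate is deliberately milder than reality just to keep the demo fast; this is an illustration, not our production setup.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the real problem: ~1 positive per 10,000 samples
# (the true base rate is far lower, but this keeps the demo fast).
X, y = make_classification(n_samples=200_000, n_features=20,
                           weights=[0.9999], random_state=0)

# Without reweighting, a model can reach 99.99% accuracy by predicting
# "safe" for everything; class_weight="balanced" counteracts that.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
```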

There's an Enormous Amount of Data: At the same time, the data we have is large (many terabytes), messy, multi-modal, and difficult to collect and serve at low latency for a real-time system. For example, features that an ML system would want to evaluate include:

  • Text of the email
  • Metadata and headers
  • History of communication for parties involved, including geo locations, IPs, etc.
  • Account sign-ins, mail filters, and browsers used
  • Content of all attachments
  • Content of all links and the landing pages those links lead to
  • …and so much more.

Turning all this data into useful features for a detection system is a huge challenge from both a data engineering and a machine learning point of view. A simplified sketch of what such a per-message feature record might look like follows.
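As a rough illustration, with hypothetical field names rather than Abnormal's actual schema:

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified per-message feature record; real
# systems track far more dimensions at much lower latency.
@dataclass
class EmailFeatures:
    subject: str
    body_text: str
    sender_domain: str
    sender_ip: str
    reply_to_mismatch: bool        # Reply-To differs from From
    days_since_first_contact: int  # communication-history signal
    auth_passed: bool              # SPF/DKIM/DMARC outcome
    num_links: int
    num_attachments: int
```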

Attackers are Adversarial: To make matters worse, attackers actively manipulate the data to make it hard on ML models, constantly improving their techniques and developing entirely new strategies.

The Precision Must Be Very High: To build a product to prevent email attacks, we must avoid false positives and the disruption of legitimate business communications, but at the same time, catch every single attack. The false-positive rate needs to be as low as one in a million!
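A quick back-of-the-envelope calculation with assumed numbers shows why the bar is so high: even a detector with perfect recall needs a false-positive rate near one in a million before most of what it flags is actually an attack.

```python
# Assumed numbers for illustration only.
base_rate = 1e-5   # P(attack): ~1 in 100,000 emails
recall = 1.0       # P(flag | attack), optimistically perfect
fpr = 1e-6         # P(flag | legitimate): one in a million

# Bayes' rule: P(attack | flagged)
precision = (recall * base_rate) / (recall * base_rate + fpr * (1 - base_rate))
print(f"precision = {precision:.1%}")  # ~90.9%
```

At a more typical false-positive rate of one in ten thousand, the same arithmetic gives a precision of about 9%, meaning nine out of every ten flags would disrupt legitimate mail.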

To effectively solve this problem, we must be diligent and extremely thoughtful about how we break down the overall detection problem into components that are solved carefully.

Example of the ML Modeling Problem

Let’s start with this hypothetical email attack and imagine how we could model various dimensions and how those models come together.

Subject: Reset your password
From: Microsoft Support <admin@fakemicrosoft.com>
Content: “Please click _here_ to reset the password to your account.”

This is a simple and prototypical phishing attack.

As with any well-crafted social engineering attack, it appears nearly identical to a legitimate message. In this case, it looks to be a legitimate password reset message from Microsoft. Because of this, modeling any single dimension of this message will be fruitless for classification purposes. Instead, we need to break up the problem into component sub-problems.

Thinking Like the Attacker: Our first step is always to put ourselves in the mind of the attacker. To do so, we break an attack down into what we call “attack facets.”

Attack Facets:

  1. Attack Goal: What is the attacker trying to accomplish? Steal money? Steal credentials? Something else?
  2. Impersonation Strategy: How is the attacker building credibility with the recipient? Are they impersonating someone? Are they sending from a compromised account?
  3. Impersonated Party: Who is being impersonated? A trusted brand? A known vendor? The CEO of a company?
  4. Payload Vector: How is the actual attack delivered? A link? An attachment?

If we break down the Microsoft password reset example (encoded as a small data structure in the sketch after this list), we have:

  1. Attack Goal: Steal a user's credentials.
  2. Impersonation Strategy: Impersonate a brand through a lookalike display name.
  3. Impersonated Party: The official Microsoft brand.
  4. Payload Vector: A link to a fake login page.
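One way to make this facet breakdown concrete is as a small typed structure. This is purely illustrative; the enum values are examples, not an exhaustive taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class AttackGoal(Enum):
    STEAL_MONEY = "steal_money"
    STEAL_CREDENTIALS = "steal_credentials"

class ImpersonationStrategy(Enum):
    LOOKALIKE_DISPLAY_NAME = "lookalike_display_name"
    SPOOFED_DOMAIN = "spoofed_domain"
    COMPROMISED_ACCOUNT = "compromised_account"

class PayloadVector(Enum):
    LINK = "link"
    ATTACHMENT = "attachment"
    TEXT_ONLY = "text_only"

@dataclass
class AttackFacets:
    goal: AttackGoal
    strategy: ImpersonationStrategy
    impersonated_party: str
    payload: PayloadVector

# The Microsoft password-reset example from above:
example = AttackFacets(
    goal=AttackGoal.STEAL_CREDENTIALS,
    strategy=ImpersonationStrategy.LOOKALIKE_DISPLAY_NAME,
    impersonated_party="Microsoft",
    payload=PayloadVector.LINK,
)
```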

Modeling the Problem: Building ML models to solve a problem with such a low base rate and precision requirements forces a high degree of diligence when modeling sub-problems and feature engineering. We cannot rely just on the magic of machine learning.

In the last section, we described a way to break an attack into components. We can use that same breakdown to help inspire the type of information we would like to model about an email in order to determine if it is an attack.

All these models rely on similar underlying techniques, specifically:

  • Behavior Modeling: Identifying abnormal behavior by modeling normal communication patterns and flagging outliers from those patterns
  • Content Modeling: Understanding the content of an email
  • Identity Resolution: Matching the identity of individuals and organizations referenced in an email (perhaps in an obfuscated way) to a database of these entities

Understanding the Attack

Attack Goal and Payload: Identifying an attack goal requires modeling the content of a message. We must understand what is being said. Is the email asking the recipient to do anything? Does it have an urgent tone? Are there other aspects we need to consider? This model may identify malicious content as well as safe content in order to differentiate the two.
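As a toy illustration of content modeling, the pipeline and training examples below are made up; a production model would use far richer features and millions of labeled messages.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = suspicious content, 0 = benign.
texts = [
    "Please click here to reset the password to your account",
    "Urgent: verify your login credentials immediately",
    "Attached is the agenda for Thursday's planning meeting",
    "Thanks for the update, see you at the offsite next week",
]
labels = [1, 1, 0, 0]

# Bag-of-ngrams + linear model: a crude stand-in for a content model.
content_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
content_model.fit(texts, labels)
print(content_model.predict_proba(["click here to verify your password"])[0, 1])
```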

Impersonated Party: What does an impersonation look like? First of all, the email must appear to the recipient to come from someone they trust. We build identity models to match various parts of an email against known entities inside and outside an organization. For example, we may identify an employee impersonation by matching against the Active Directory. We may identify a brand impersonation by matching against the known patterns of brand-originating emails. We might identify a vendor impersonation by matching against our vendor database.
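A minimal sketch of the identity-resolution idea, assuming a small hypothetical brand list and simple fuzzy matching; production matching is far more sophisticated.

```python
from difflib import SequenceMatcher

# Hypothetical brand database; a real one would be much larger and
# include display-name patterns, domains, and logos.
KNOWN_BRANDS = ["microsoft", "paypal", "docusign", "dropbox"]

def resolve_brand(display_name: str, threshold: float = 0.8) -> str | None:
    """Match a display name to a known brand, tolerating obfuscation."""
    name = display_name.lower()
    for brand in KNOWN_BRANDS:
        for token in name.split():
            if SequenceMatcher(None, token, brand).ratio() >= threshold:
                return brand
    return None

print(resolve_brand("Micros0ft Support"))  # -> "microsoft"
```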

Impersonation Strategy: An impersonation happens when an email is not from the entity it claims to be from. To detect this, we model normal behavior patterns so we can spot abnormal ones. This may include abnormal behavior between the recipient and the sender, or unusual sending patterns from the sender. In the simplest case, like the example above, we can simply note that Microsoft never sends from “fakemicrosoft.com”. In more difficult cases, like account takeover and vendor email compromise, we must look at more subtle clues, like an unusual geo-location or IP address of the sender, or failing authentication.
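Here is a deliberately simplistic sketch of the behavior-modeling idea: score how unusual a (sender domain, country) pair is given a recipient's communication history. Everything here, from the counts to the smoothing, is an assumption for illustration.

```python
from collections import Counter

# Hypothetical communication history for one recipient:
# (sender_domain, country) -> number of past messages.
history = Counter({
    ("microsoft.com", "US"): 120,
    ("github.com", "US"): 45,
})

def rarity_score(sender_domain: str, country: str) -> float:
    """Laplace-smoothed 'surprise'; 1.0 means never seen before."""
    total = sum(history.values())
    seen = history[(sender_domain, country)]
    return 1.0 - seen / (total + 1)

print(rarity_score("fakemicrosoft.com", "NG"))  # -> 1.0, highly unusual
```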

Attack Payload: For the payload, we must understand the content of attachments and links. Modeling these requires a combination of NLP models, computer vision models to identify logos, URL models to identify suspicious links, and so forth.
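As a tiny example of the URL side of payload modeling, a feature extractor might compute signals like these; the specific features and keyword list are assumptions for illustration, not production values.

```python
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract simple suspicion signals from a URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "uses_https": parsed.scheme == "https",
        "num_subdomains": max(host.count(".") - 1, 0),
        "host_is_ip": host.replace(".", "").isdigit(),
        "has_credential_keyword": any(k in url.lower()
                                      for k in ("login", "verify", "reset")),
        "url_length": len(url),
    }

print(url_features("http://login.fakemicrosoft.com/reset?id=123"))
```

Modeling each of these dimensions gives our system an understanding of emails, particularly along dimensions that might be used by attackers to conduct social engineering attacks. The next step is to actually detect these attacks.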

Combining Models to Detect Attacks

Ultimately, we need to combine these sub-models to produce a classification result (for example P(Attack)). Just like any ML problem, the features given to a classifier are crucial for good performance. The careful modeling described above gives us very high-bandwidth features. We can combine these models in a few possible ways.

(1) One humongous classification model: Train a single classifier with all the inputs available to each sub-model. All the input features could be chosen based on the features that worked well within each sub-problem, but this final model combines everything and learns unique combinations and relationships.

(2) Extract features from sub-models and combine to predict target: There are three ways we can go about this.

(2.a) Ensemble of Models-as-Features: Each sub-model is a feature. Its output depends on the type of model. For example, a content model might predict a vector of binary topic features.

(2.b) Ensemble of Classifiers: Build sub-classifiers that each predict some target and combine them using some kind of ensemble model or set of rules. For example, a content classifier would predict the probability of attack given the content alone.

(2.c) Embeddings: Each sub-model is trained to predict P(attack) as above, or some other supervised or unsupervised target, but rather than combining their predictions, we extract embeddings, for example by taking the penultimate layer of a neural net.
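As a minimal sketch of the models-as-features idea in (2.a), imagine stacking the outputs of the sub-models above as inputs to a final classifier. The numbers, feature set, and model choice here are entirely made up.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row stacks sub-model outputs for one email:
# [content score, brand match, behavior rarity, URL suspicion]
X = np.array([
    [0.95, 1.0, 0.99, 1.0],   # phishing example from above
    [0.10, 1.0, 0.01, 0.0],   # legitimate Microsoft mail
    [0.20, 0.0, 0.30, 0.0],   # ordinary internal mail
    [0.90, 1.0, 0.95, 1.0],   # another attack
])
y = np.array([1, 0, 0, 1])    # 1 = attack

# Final-stage classifier over sub-model outputs.
final_model = GradientBoostingClassifier().fit(X, y)
print(final_model.predict_proba([[0.9, 1.0, 0.98, 1.0]])[:, 1])
```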


Each of the above approaches has advantages and disadvantages. Training one humongous model has the advantage of learning all the complex cross-dependencies, but it is harder to understand, harder to debug, and more prone to overfitting. It also requires all the data to be available in one shot, unlike building sub-models that could potentially operate on disparate datasets.

The various methods of extracting features from sub-models also have tradeoffs. Training sub-classifiers is useful because they are very interpretable (for example, we could have a signal that represents the suspiciousness of the text content alone), but in some cases it is difficult to predict the attack target directly from a sub-domain of the data. For example, a rare communication pattern alone is not sufficient to slice the space meaningfully and predict an attack. Similarly, as discussed above, a pure content model cannot predict an attack without context about the communication pattern. The embeddings approach is powerful but also finicky: it is important to vet your embeddings rather than just trust that they will work. The embedding approach is also more prone to overfitting and accidental label leakage.

Most importantly, with all these approaches, it is crucial to think deeply about all the data going into models and also the actual distribution of outputs. Blindly trusting in the black box of ML is rarely a good idea. Careful modeling and feature engineering are necessary, especially when it comes to the inputs to each of the sub-models.

Our Solution at Abnormal

As a fast-growing startup, we originally had a very small ML team, which has been growing quickly over the past year. With the growth of the team, we also have adapted our approach to modeling, feature engineering, and training our classifiers. At first, it was easiest to just focus on one large model that combined features carefully engineered to solve subproblems. However, as we’ve added more team members, it has become important to split the problem into various components that can be developed simultaneously.

Our current solution is a combination of all the above approaches, depending on the particular sub-model. We still use a large monolithic model as one signal, but our best models use a combination of inputs, including embeddings representing an aspect of an email and prediction values from sub-classifiers, for example, a suspicious URL score. Combining models while managing feature dependencies and versioning remains difficult.

Takeaways for Solving Other ML Problems

Here's my best advice, based on our experience at Abnormal, for tackling other ML problems.

  1. Deeply understand your domain.
  2. Carefully engineer features and sub-models. Don’t trust black box ML.
  3. Solving many sub-problems and combining them for a classifier works well, but don’t be dogmatic. Sure, embeddings may be the purest solution, but if it’s simpler to just create a sub-classifier or good set of features, start with that.
  4. Breaking up a problem also allows scaling a team. If multiple ML engineers are working on a single problem, they must necessarily focus on separate components.
  5. Modeling a problem as a combination of subproblems also helps with explainability. It’s easier to debug a text model than a giant multi-modal neural net.

There's still more for us to do. We need to figure out a more general pattern for developing good embeddings, better ways of modeling sub-parts of the problem, better data platforms and feature-engineering tools, and so much more. Attacks are constantly evolving and our client base is ever-growing, leading to tons of new challenges every day. If these problems interest you, we're hiring!
