Graph of Models and Features

February 2, 2021

At the core of all Abnormal’s detection products sits a sophisticated web of prediction models. For any of these models to function, we need deep and thoughtfully engineered features, careful modeling of sub-problems, and the ability to join data from a set of databases.

For example, one type of email attack we detect is called business email compromise (BEC). A common BEC attack is a “VIP impersonation” in which the attacker pretends to be the CEO or other VIP in a company in order to convince an employee to take some action. Some of the inputs to a model detecting this sort of attack include:

  1. A model evaluating how closely the sender’s name matches a VIP’s name (indicating impersonation)
  2. NLP models applied to the text of the message
  3. Known communication patterns for the individuals involved
  4. Identity of the individuals involved extracted from an employee database
  5. … and many more

All these attributes are carefully engineered and may rely on one another in a directed graphical fashion.

This article describes Abnormal’s graph-of-attributes system which makes this type of interconnected modeling scalable. This system has enabled us to grow our ML team while continuing to rapidly innovate.

Attributes

We store all original entity data as rich thrift objects (for example, a thrift object representing an email or an account sign-in). This gives us flexibility in the data types we log, easy backward compatibility, and understandable data structures. But as soon as we want to convert this data into something consumable by data science engines and models, we should convert it into attributes. An attribute is a simply-typed object (float / int / string / boolean) with a numeric attribute ID.
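As a minimal sketch (names and IDs here are illustrative, not Abnormal’s actual schema), an attribute can be modeled as a flat, primitively typed value tagged with its numeric ID:

```python
from dataclasses import dataclass
from typing import Union

# An attribute value is always a primitive, so it is trivially
# representable in a columnar database.
AttributeValue = Union[float, int, str, bool]

@dataclass(frozen=True)
class Attribute:
    attribute_id: int      # stable numeric identifier
    value: AttributeValue  # flat, simply-typed value

# Hypothetical attribute extracted from a rich email entity:
# how closely the sender's name matches a VIP's name.
vip_name_similarity = Attribute(attribute_id=1042, value=0.93)
```

Because every attribute is flat, a set of them for one event is just a row of primitives, with none of the nesting of the original thrift object.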


Attributes vs. Features: Attributes are conceptually similar to features, but they may not be quite ready to feed into an ML model; rather, they should be simple to convert into a form models can consume. All the heavy lifting, such as running inference on a model or hydrating data from a database, should occur at attribute-extraction time.

The core principles we are working off include:

  • Attributes can rely on multiple modes of inputs (Other raw attributes, Outputs of models, Data hydrated from a database lookup or join)
  • Attributes should be flat data (i.e. primitives) and representable in a columnar database
  • Attributes should be simple to convert to features (for example you may need to convert a categorical attribute into a one-hot vector)
  • We will always need to change and improve attributes over time

Consuming Attributes: Once data is converted into a columnar format, it can be consumed in many ways—ingested into a columnar store for analytics, tracked in metrics to monitor distributional shifts, and converted directly into a feature dataframe ready for training with minimal extra logic.
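The attribute-to-feature conversion described above can be sketched as follows. This is a hypothetical example, assuming one categorical attribute with a known vocabulary that gets one-hot encoded, while numeric attributes pass through unchanged:

```python
# Assumed vocabulary for an illustrative categorical attribute.
CATEGORIES = ["internal", "external", "unknown"]

def one_hot(value: str, categories: list[str]) -> list[int]:
    """Expand a categorical value into a one-hot indicator vector."""
    return [1 if value == c else 0 for c in categories]

def to_feature_vector(attributes: dict[str, object]) -> list[float]:
    """Convert a row of flat attributes into a model-ready feature vector.

    Iteration is over sorted names so column order is deterministic.
    """
    features: list[float] = []
    for name, value in sorted(attributes.items()):
        if name == "sender_domain_type":      # categorical -> one-hot
            features.extend(one_hot(value, CATEGORIES))
        else:                                  # numeric -> pass through
            features.append(float(value))
    return features

row = {"sender_domain_type": "external", "vip_name_similarity": 0.93}
print(to_feature_vector(row))  # → [0, 1, 0, 0.93]
```

Because each attribute is already a primitive, this conversion needs no per-attribute custom logic beyond the encoding rule, which is what keeps training-dataframe construction cheap.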

Directed Graph of Attributes

Computing attributes as a directed graph allows enormous flexibility for parallel development by multiple engineers. If each attribute declares its inputs, we can ensure everything is queried and calculated in the correct order. This enables attributes of multiple types:

  1. Raw features
  2. Heuristics that use many other features as input
  3. Models that make a prediction from many other features
  4. Embeddings
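The "declare your inputs, compute in order" idea maps directly onto a topological sort of the dependency graph. Here is a small sketch using Python's standard-library `graphlib`; the attribute names are invented for illustration:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each attribute declares the attributes it consumes as inputs.
# Keys are attributes; values are their declared dependencies.
DEPENDENCIES = {
    "raw_sender_name": set(),
    "raw_body_text": set(),
    "vip_similarity_model": {"raw_sender_name"},
    "nlp_urgency_score": {"raw_body_text"},
    "bec_heuristic": {"vip_similarity_model", "nlp_urgency_score"},
}

# A topological order guarantees every attribute is hydrated
# before anything that depends on it.
order = list(TopologicalSorter(DEPENDENCIES).static_order())
for attr, deps in DEPENDENCIES.items():
    assert all(order.index(d) < order.index(attr) for d in deps)
print(order[-1])  # → bec_heuristic
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is exactly the kind of mistake an explicit graph catches at build time rather than at inference time.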

Our Attribute Hydration Graph looks like this:

[Figure: Attribute Hydration Graph]

Explicitly encoding the graph of attributes seems complex, but it saves us painful headaches down the road when we want to use one attribute as an input to another.

Attribute Versioning

Inevitably, we will want to iterate on attributes and the worst feeling is realizing that the attribute you want to modify is used by ten downstream models. How do you make a change without retraining all those models? How do you verify and stage the change?

This situation comes up frequently. Some common cases include:

  • An attribute is the output of a model or an embedding. You want to re-train the model, but the attribute is used by other models or heuristics.
  • An attribute relies on a database serving aggregate features, and you want to experiment with different aggregate bucketizations.
  • A carefully engineered heuristic feature exists, and you want to update its logic.

If each attribute is versioned and downstream consumers register which version they wish to consume, then we can easily bump the version (while continuing to compute the previous versions) without affecting the consumers.
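One way to sketch this versioning scheme (all names here are assumptions, not Abnormal's actual API) is a registry keyed by `(attribute_name, version)`, where each consumer pins the version it reads. Bumping a producer to v2 while v1 is still computed leaves existing consumers untouched:

```python
from typing import Any, Callable

# Registry of attribute extractors, keyed by (name, version).
REGISTRY: dict[tuple[str, int], Callable[[Any], Any]] = {}

def register(name: str, version: int, compute: Callable[[Any], Any]) -> None:
    REGISTRY[(name, version)] = compute

def hydrate(name: str, version: int, entity: Any) -> Any:
    # A consumer asks for a pinned version explicitly.
    return REGISTRY[(name, version)](entity)

def legacy_score(email) -> float:     # stand-in for the old model
    return 0.5

def retrained_score(email) -> float:  # stand-in for the retrained model
    return 0.9

# v1 stays live for downstream models trained against it...
register("vip_similarity", 1, legacy_score)
# ...while v2 is computed in parallel, staged, and verified.
register("vip_similarity", 2, retrained_score)

email = {"sender": "CEO Name <ceo@example.com>"}
print(hydrate("vip_similarity", 1, email))  # → 0.5
print(hydrate("vip_similarity", 2, email))  # → 0.9
```

Downstream models migrate by changing a single pinned version number, and the old version can be sunset once nothing registers interest in it.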

Scaling a Machine Learning Team

In addition to enabling flexible modeling of complex problems, this graph of models lets us scale our machine learning engineering team. Previously, we had a rigid pipeline of features and models that really only allowed a single ML engineer at a time to develop it. Now, multiple ML engineers can develop models for sub-problems, then combine the resulting features and models later.

We need to figure out how to more efficiently re-extract this graph of attributes for historical data, and we need good processes for sunsetting older attributes. We would like to build a system that allows our security analysts, and anyone else in the company, to easily contribute attributes and have them automatically flow into downstream models and analysis. We need to improve our ability to surface the relevant attributes and model scores behind a given decision back to the client, so they can understand why an event was flagged. And so much more… If these problems interest you, we’re hiring!
