How We Overhauled Our Architecture to Handle 100x Scale: Python to Golang, Celery to Kafka

As our customer base has expanded, so has the volume of emails our system processes. Here’s how we overcame scaling challenges with one service in particular.
August 30, 2022

As a rapidly growing email security startup, we recognize just how essential flexibility and adaptability are to our success. Protecting an ever-increasing number of organizations from advanced threats like business email compromise (classified by the FBI as one of the most financially damaging cybercrimes) depends not only on our ability to scale properly but also on our ability to troubleshoot growing pains.

To keep employee inboxes secure, Abnormal Security processes all mail to filter out undesirable messages—including, but not limited to, phishing, spoofing, and malware. While we are certainly excited to see that our customer base is expanding (since it reflects the increasing demand for our solution), this growth also creates challenges. In particular, it has vastly increased the volume of mail we go through daily across our services, leading to scaling issues and frequent outages.

This was especially significant in one service, Unwanted Mail, which filters all mail to determine actions to take on messages we detect as unwanted, such as spam and promotional mail.

In part one of this blog post, we share why we decided to overhaul the Unwanted Mail service, which was originally built in Python with Celery as its task queue. In part two, we will discuss our process and the benefits we reaped.

Unwanted Mail Service

At a high level, the Unwanted Mail service (which we’ll refer to as “UM”) evaluates incoming mail to identify and take action on spam and promotional mail. Over time, the service infers user preferences by observing how they interact with these messages, building safelists and blocklists that inform decisions on future mail.

[Figure: High-level overview of the Unwanted Mail service]
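To make that flow concrete, here is a minimal sketch of the kind of decision logic described above. All names and thresholds (Decide, Prefs, the score cutoffs) are illustrative assumptions for this post, not Abnormal’s actual code:

```go
package um

// Action is the message-movement decision UM emits for a message.
type Action int

const (
	Deliver Action = iota // leave the message in the inbox
	MoveToSpam
	MoveToPromotions
)

// Prefs holds per-user overrides learned from how the user
// handles unwanted mail (e.g. rescuing a sender from spam).
type Prefs struct {
	Safelist  map[string]bool // sender -> always deliver
	Blocklist map[string]bool // sender -> always move
}

// Decide returns the action for a message, consulting learned user
// preferences before falling back to classifier scores.
// The 0.9 and 0.8 cutoffs are made up for illustration.
func Decide(sender string, spamScore, promoScore float64, p Prefs) Action {
	if p.Safelist[sender] {
		return Deliver
	}
	if p.Blocklist[sender] || spamScore > 0.9 {
		return MoveToSpam
	}
	if promoScore > 0.8 {
		return MoveToPromotions
	}
	return Deliver
}
```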

Service Growth

UM processes both safe mail and unwanted mail to output message movement decisions. Given that an average person receives 100-120 emails a day, and Abnormal’s customers have an average of approximately 1,000 mailboxes, every new customer onboarded to Abnormal adds hundreds of thousands of messages for the UM service (on the order of 110 emails × 1,000 mailboxes ≈ 110,000 messages per day).

The chart below shows how UM traffic grew over the first six months of 2022, measured as requests received per two-week period. We started to experience scaling problems with our architecture at around the 30-million mark, as shown below.

[Figure: Total number of requests sent to the Unwanted Mail service]

Why Our Current Solution Just Wasn’t Working

“An outage a day keeps customers away.”

Python and Celery are an integral part of Abnormal services. All of our products, including UM, were built and maintained on the asynchronous task model Celery provides. The diagram below shows the original UM architecture.

[Figure: Original architecture of the UM service with Python + Celery]

In the early startup days, Python and Celery were great in that they allowed us to build products and iterate quickly. However, as the company grew, our engineering vision changed from building fast to building scalable solutions fast.

We found that our current architecture could not meet our needs for the following reasons:

  1. Service Scalability
    Due to high per-worker resource overhead, we could not run many concurrent workers per task. To increase processing parallelism, we had to increase the number of tasks in our cluster, which was extremely costly and not sustainable.

  2. Broker Scalability
    As we hit task scaling limits, our peak Celery queue length grew with traffic. Celery’s Redis broker began running out of memory, at which point it denied enqueue requests entirely. Although we had increased the Redis cache size twice to alleviate the problem, we knew we would reach the end of our runway very soon.

  3. Availability
    The Redis broker backing Celery was a single point of failure: if it ever went down, the whole queue went “poof.” And because various services shared the same broker, long processing times in other services reduced the availability of our own.

  4. Reliability
    Redis is an in-memory cache. If the Redis broker went down completely, the queued tasks were lost and would never be processed. While we had implemented workarounds, these were unsustainable long term.

  5. Efficiency
    Celery workers were experiencing mysterious small memory leaks and needed to be rotated frequently. This not only increased the amount of maintenance effort required but also forced us to limit concurrency to leave memory headroom before the leaks degraded task performance.

Furthermore, because this architecture could not handle the volume, we had to preprocess messages in our upstream Notifications Processor system before they were sent to separate async Python workers via Celery. This spread business logic throughout the codebase, decreasing maintainability and complicating monitoring. Python and Celery were great, but we needed to find a better long-term solution.

Challenges in Choosing (and Implementing) a New Solution

“It can’t be that easy, right?”

With the problem properly framed, we now had to research and implement a better solution that would last. There were several challenges associated with rewriting a service, especially one as critical as UM:

  1. We had a new feature coming that would at least double the current traffic

  2. We were still growing fast and needed a solution that would be able to hit the ground running

  3. We wanted a solution with longevity: not just a patch on the existing system that would buy us three months, but one that would last the next three years

  4. It was impossible to meet traffic and reliability demands with the current architecture without also overhauling upstream systems. Even with tweaks and optimizations to Celery, the Redis broker, and our consumer tasks, we weren’t going to scale up to projected traffic by the end of the year.

All things considered, the signs pointed to the urgent need for a rewrite before we hit the catastrophic failure point—even if rewriting the service was not a product priority.

Why Golang + Kafka

“A step towards a brave new world.”

The new design we arrived at is represented in the diagram below. In the new architecture, we have replaced Celery with Kafka as our broker and rewritten our workers in Golang.

[Figure: New architecture of the UM service with Golang + Kafka]

Benefits of Golang

There were several benefits of using Golang over Python for our production service. Here are the five biggest ones, ordered by impact and importance:

  1. Low concurrency overhead thanks to goroutines and channels. This means we can run a much larger number of concurrent workers per task (spoiler alert: we are comfortably at 50x) and increase message consumption throughput without scaling up the cluster; see the sketch after this list.

  2. Low deployment overhead. This is thanks to low build times, small binary size, and much faster and more reliable dependency resolution.

  3. Static typing. Because code is explicit about what type of data is passed through and processed, typing errors are caught at compilation, protecting against most runtime type errors.

  4. Native pooling support for Redis and SQL clients. This allows us to manage connection pools efficiently.

  5. Speed. Given the same business logic, Golang code simply runs faster than Python. Small gains in the processing time of a single process compound into large gains across the whole service as QPS increases.
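As a rough illustration of point 1, here is a minimal worker-pool sketch using goroutines and channels. The worker count, the processMessage stub, and the channel layout are all hypothetical, but this pattern is why raising per-task concurrency is cheap in Go:

```go
package main

import (
	"fmt"
	"sync"
)

// processMessage stands in for UM's per-message work
// (classification, preference lookups, movement decision).
func processMessage(msg string) string {
	return "processed: " + msg
}

func main() {
	const numWorkers = 50 // cheap to raise: goroutines cost KBs, not MBs

	msgs := make(chan string)
	results := make(chan string)

	// Fan out: each worker pulls from the shared input channel.
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for m := range msgs {
				results <- processMessage(m)
			}
		}()
	}

	// Close results once all workers have drained the input channel.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Feed some work in, then signal no more is coming.
	go func() {
		for i := 0; i < 5; i++ {
			msgs <- fmt.Sprintf("message-%d", i)
		}
		close(msgs)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```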

Benefits of Kafka

We decided on Kafka for the following reasons:

High Queue Capacity

We did a preliminary test to quickly measure queue capacity by setting up our upstream system to produce 10x the amount of traffic we currently send to Celery into a topic on a Kafka cluster. In our initial test, we found that the Kafka cluster reached 200x capacity at approximately 2x the daily cost of the Redis broker behind Celery. This was as expected, as Kafka brokers store incoming messages into a persistent data storage that is orders of magnitude larger than the in-memory storage a Redis broker provides.
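As a sketch of what such a producer might look like, here is a minimal example using the open-source segmentio/kafka-go client. The choice of client, the broker addresses, and the topic name are illustrative assumptions, not necessarily what we run in production:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// The writer batches messages and balances them across partitions.
	w := &kafka.Writer{
		Addr:     kafka.TCP("broker-1:9092", "broker-2:9092"),
		Topic:    "unwanted-mail-test", // hypothetical topic name
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	// Replay a multiple of production traffic into the topic.
	for i := 0; i < 1000; i++ {
		err := w.WriteMessages(context.Background(),
			kafka.Message{
				Key:   []byte(fmt.Sprintf("msg-%d", i)),
				Value: []byte("serialized mail metadata"),
			},
		)
		if err != nil {
			log.Fatal("failed to write message:", err)
		}
	}
}
```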

High Availability

This came at three levels:

  1. Multiple brokers: Setting replication aside for a moment, even if one broker goes down, the remaining brokers keep serving their partitions, so the queue is never entirely dead.

  2. Multiple AZs: Because brokers are spread across multiple availability zones, an AZ-level outage still leaves brokers running in other AZs, so the entire cluster never goes down completely.

  3. Replication: Bringing replication back into the picture, with replication configured properly, if a broker goes down, another broker holding an in-sync replica of each affected partition is elected leader and continues processing messages in the partition. Configured well, there is minimal chance of an outage, partial or otherwise, even when individual brokers do go down.
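As an illustration of point 3, here is a sketch of how a topic with such replication settings might be created using the segmentio/kafka-go client. The topic name, partition count, and exact values are assumptions for the example:

```go
package main

import (
	"log"
	"net"
	"strconv"

	"github.com/segmentio/kafka-go"
)

func main() {
	conn, err := kafka.Dial("tcp", "broker-1:9092")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Topic creation must be addressed to the cluster controller.
	controller, err := conn.Controller()
	if err != nil {
		log.Fatal(err)
	}
	ctrlConn, err := kafka.Dial("tcp",
		net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
	if err != nil {
		log.Fatal(err)
	}
	defer ctrlConn.Close()

	err = ctrlConn.CreateTopics(kafka.TopicConfig{
		Topic:             "unwanted-mail", // hypothetical topic name
		NumPartitions:     12,
		ReplicationFactor: 3, // e.g. one replica per AZ
		ConfigEntries: []kafka.ConfigEntry{
			// Require two in-sync replicas before a write is acknowledged,
			// so losing a single broker never loses acknowledged messages.
			{ConfigName: "min.insync.replicas", ConfigValue: "2"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```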

Reliability

Kafka has a persistent log store. Even if the entire cluster goes down in some major disaster, brokers that come back online simply read from the persisted log storage and resume processing of unprocessed messages.
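The same property extends to consumers: because committed offsets live in the log, a consumer group that restarts resumes from its last committed offset. Here is a minimal consumer sketch, again with segmentio/kafka-go, where committing only after processing means a crash causes redelivery rather than loss; the group ID and topic name are hypothetical:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

// process stands in for UM's message handling.
func process(b []byte) { log.Printf("processing %d bytes", len(b)) }

func main() {
	// A reader in a consumer group resumes from the group's last
	// committed offset after any restart or outage.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"broker-1:9092", "broker-2:9092"},
		GroupID: "unwanted-mail-workers", // hypothetical group ID
		Topic:   "unwanted-mail",
	})
	defer r.Close()

	ctx := context.Background()
	for {
		// FetchMessage does not auto-commit; we commit only after the
		// message has been fully processed, so a crash mid-flight means
		// the message is redelivered rather than lost.
		m, err := r.FetchMessage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		process(m.Value)

		if err := r.CommitMessages(ctx, m); err != nil {
			log.Fatal(err)
		}
	}
}
```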

A Successfully Scaled Solution

With these considerations in place, the benefits of Golang and Kafka were pretty clear. We would like to thank Praveen Bathala for giving us advice and guidance for this project, without which this would not have been possible.

In the second part of this blog post, we will discuss how we effectively migrated the system to this new architecture without any downtime for customers and showcase how this significantly improved our systems.

As a fast-growing company, we have lots of interesting engineering problems to solve, just like this one! If these problems interest you, and you want to further your growth as an engineer, we’re hiring! Learn more at our careers website.
