Machine Learning as a Way to Drive Business Value

October 19, 2020

Jesh Bratman, a founding member at Abnormal Security and Head of Machine Learning, was just featured on The Tech Trek's podcast. Jesh deep-dives into his past building ML systems to detect abusive behavior at Twitter, and how he used this background to transition into detecting malicious emails for Abnormal Security. He also discusses the chess match between attackers and ML models, and shares insights on how to evaluate the application, use, and importance of ML within an organization.

If you would like to connect with Jesh about anything he discussed on the podcast, you can reach him via LinkedIn or Twitter.

For the full episode, head over to The Tech Trek's Spotify.

We’ve included a transcript of the full episode below:

***

Host:

On this episode of the podcast, I have Jesh Bratman. He is the Head of ML at Abnormal Security. We are going to get into a few different topics on this episode. We're going to touch on the chess match between hackers and ML. We're going to talk about understanding the context of abuse detection. And then we're going to have a little bit of career advice on, if you're joining a startup, what to look at and evaluate in terms of ML opportunities. Jesh, thanks for being on.

Jesh Bratman:

It's good to be here. Thanks for having me.

Host:

Awesome, man. So you have an awesome background. I did not do it justice. So could you maybe let everyone know who you are and kind of how you got to where you're at now?

Jesh Bratman:

Yeah, absolutely. So as you mentioned, right now I am working at this startup called Abnormal, running our machine learning for detecting cyber attacks. And how did I get here? My background is machine learning in academia. I did about three quarters of a PhD, dropped out eight or nine years ago, and joined a startup in Silicon Valley. This was an ad tech startup where I ran ML for placing bids for ads in these online exchanges. That startup was actually acquired by Twitter. I worked for a long time on the machine learning platform at Twitter as well. In addition to building machine learning platform components, I also worked on various ML problems across Twitter, including one problem that was huge while I was there and is still a big problem for Twitter, which is detecting abusive behavior. So things like hate speech and harassment and bullying, all of these sorts of bad behaviors on the Twitter platform, which is a constant problem on Twitter and other social media platforms.

Jesh Bratman:

I knew the group that I helped found this startup with through Twitter and through this previous startup. And we brought a lot of the problems and techniques that were used to stop abuse on Twitter to this problem of stopping socially engineered email attacks, which is the core of Abnormal Security's product. So trying to adapt to these very cleverly designed attacks that are meant to trick humans, and figuring out how you build ML systems, AI systems, to identify those and prevent them. And so that's how I ended up at this company. We're about two and a half years old now. It's going well. We are primarily focused on stopping these socially engineered attacks, but also other types of cybersecurity problems like account takeover detection, detecting when accounts have been taken over by an outside attacker and identifying strange behaviors within those accounts. And also similar problems like data loss prevention, where an organization is worried about sensitive documents being leaked outside of the organization. Those are all the types of ML detection problems that we work on.

Host:

Awesome. I guess just the first question, so maybe define, just to make sure everyone has the same context: a socially engineered email attack, what is that exactly?

Jesh Bratman:

Yeah, that's a good question. It's a really broad range of cyber attacks that encompasses anything that involves not actually breaking through security, but instead convincing someone to let the attacker through. So the most basic example of this is a phishing email in which someone says, "Oh, hey, click on this link to reset your password," and it appears to be from Google or something like that. The famous example is John Podesta, who fell for one of these, and all the DNC emails were leaked. So that's a really basic type of social engineering, but they get very sophisticated, where the attackers will play this long game: they will invent people at another organization and start communications over the course of months to build up sort of a reputation. And then at the right point, they launch some kind of attack to steal money or steal credentials or steal information.

Host:

Yeah, I think we've all read about a few of these situations on the internet. I guess to kind of unpack that, based on what you did at Twitter and what you're doing now, classification is the core of what you guys are doing; we talked a lot about that in our pre-call. Maybe talk about what that exactly means generally, and then we can kind of talk about it in relation to these email attacks.

Jesh Bratman:

Right. So the core of this machine learning problem, if you think of this cybersecurity problem as a machine learning problem, is that you have a very vast amount of information that you could consider in your algorithm. So in this case, it's all the text content and all the other information around a particular communication. In this case, an email. Or in the case at Twitter that I discussed, this sort of tweet conversation, the history of communication between these two parties. The problem is to differentiate a legitimate, safe interaction from one that is illegitimate in some way. And in the case of Twitter, that could be detecting the difference between just two friends being kind of harsh to each other compared to strangers who are bullying and harassing each other, where the context of these two things is very important.

Jesh Bratman:

And so this is the crux of the problem from an ML point of view. It's not just an NLP problem, because the text itself is going to look totally normal in a lot of cases, totally legitimate. In the case of email attacks, the emails are crafted to look very much like safe messages. So just from the text content, you can understand all you want about it using modern NLP, but that's not going to help you differentiate whether it's legitimate or not. The key is to pair the understanding of the content with the understanding of the context. So who are these people to each other? This communication that's happening right now, that we're making a classification decision about, is it abnormal in some other way? Is this communication between these two groups of people unusual? These two individuals? That, combined with the understanding of the content, is really how you can try to get some classification power here. That's a really tricky problem because either one of those by itself is not sufficient at all to even start on building a classifier for this sort of problem.
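
To make that pairing concrete, here is a minimal sketch of how content signals and context signals might be combined into one classification decision. The feature names, thresholds, and scores are hypothetical illustrations, not Abnormal's actual model.

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    sender: str
    recipient: str

def content_features(msg: Message) -> dict:
    """Signals derived from the message text alone (the NLP side)."""
    text = msg.text.lower()
    return {
        "mentions_password_reset": "reset your password" in text,
        "solicits_payment": "wire transfer" in text or "invoice" in text,
    }

def context_features(msg: Message, history: set) -> dict:
    """Signals about who these parties are to each other."""
    return {"first_contact": (msg.sender, msg.recipient) not in history}

def suspicion_score(msg: Message, history: set) -> float:
    """Neither feature family alone separates attack from safe mail;
    the score only spikes when risky content meets unusual context."""
    risky_content = any(content_features(msg).values())
    unusual_context = context_features(msg, history)["first_contact"]
    if risky_content and unusual_context:
        return 0.9   # risky ask from a stranger: likely attack
    if risky_content or unusual_context:
        return 0.4   # either signal alone is weak evidence
    return 0.05

history = {("jane@partner.com", "bob@acme.com")}
msg = Message("Please process this wire transfer today.",
              "new.vendor@unknown.com", "bob@acme.com")
print(suspicion_score(msg, history))  # 0.9: risky content, no prior contact
```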

Host:

I guess when it comes to these types of problems, especially in email. On Twitter, you have a baseline of somebody who signs up, and now we're all on the same platform. I'm a big fan of the platform; I interact quite a bit. I can see how understanding the context is something where you guys, from your past, could understand that data, understand the context, identify anomalies. When you're talking about email, where you might get an email from somebody that has never emailed you and you need to detect anomaly versus not, context versus not, what are some of the challenges of actually detecting that, obviously without spilling the secret sauce of what you guys are doing? That seems like a pretty challenging issue: identifying context for an outside person's email to you.

Jesh Bratman:

Yeah, it is really hard, because email is this kind of wild world where anybody can contact anyone else, right? Which is actually a little bit similar to Twitter, in that Twitter is the one other common platform in which anyone can contact anyone, with an @mention, right? With email, you just send an email to somebody. So you're right. On Twitter, it was easier in a lot of ways because Twitter had all the data. So we had access to the graph of all past communications. In the email world at Abnormal, we have to kind of piece that together. So what we can do is look over the history of all communication within an organization. So we build up this graph of communication that's happened in the past. Of course, that's not totally sufficient, because there are going to be legitimate sort of cold call emails that come through, and you can't just stop all of those.

Jesh Bratman:

You have to identify, "Okay, well, this thing seems to be unusual. It seems to be a rare communication. These people have never talked before." But that doesn't mean by itself it's illegitimate. You pair that with understanding of the content, and you say, "Well, okay. This seems to be asking for a password reset. Or this seems to be trying to solicit a payment. Or this seems to be initiating a purchase order or something like that." When you combine those two things together, that's when you can start differentiating. Building up this understanding of the communication patterns is really hard, especially for all sorts of little reasons, like that people's different email clients will name themselves different things. So even knowing who is the same person is sometimes really hard.
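
As a toy illustration of that graph-building step, the sketch below counts past exchanges between canonicalized identities. Real identity resolution is far harder than the naive normalization assumed here; falling back to the lowercased address is just one simplifying assumption.

```python
from collections import defaultdict

def canonical_identity(display_name: str, address: str) -> str:
    """Different mail clients render the same person differently
    ('Jane Doe', 'Doe, Jane', 'jdoe'), so ignore the display name
    and fall back to the address, the more stable key."""
    return address.strip().lower()

def build_graph(past_emails):
    """past_emails: iterable of (sender_name, sender_addr,
    recipient_name, recipient_addr) tuples from the org's history."""
    graph = defaultdict(int)  # (sender, recipient) -> message count
    for s_name, s_addr, r_name, r_addr in past_emails:
        edge = (canonical_identity(s_name, s_addr),
                canonical_identity(r_name, r_addr))
        graph[edge] += 1
    return graph

history = build_graph([
    ("Jane Doe", "jane@partner.com", "Bob", "bob@acme.com"),
    ("Doe, Jane", "JANE@partner.com", "Bob Smith", "bob@acme.com"),
])
# Both rows collapse to the same edge despite different display names.
print(history[("jane@partner.com", "bob@acme.com")])  # 2
```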

Host:

I'd imagine that's got to be hard in terms of seeing an outside email. I mean, I'm sure you get a lot of spam versus cold emails to your work, as everyone does. And sometimes there is that one out of 800 emails that comes to you and you go, "Hey, this one. I've got to remember this for when I do need it." And what's funny is I kind of look at the context of that, trying to relate it to what happens on Twitter. And I'm like, if it's an outside organization that's never had any contact, what are the chances of, and I guess I don't know the product, I was going to ask about false positives and negatives. In terms of what happens when you guys might err on the side of, "Hey, let's flag this," versus not, have you seen any impact on the user community, and potentially people having concerns over ML looking at emails in this way?

Jesh Bratman:

Yeah. I think there are a couple of questions to unpack in there, and I'll answer the first one first, about false positives. It's a very good point. Before working on fraud or attack detection type ML problems, this problem of needing extreme precision and recall wasn't something that I had dealt with as much. A lot of the problems that I had worked on in the past, other than this abuse one at Twitter, were around building ML models that have good sort of average performance. So you imagine recommendation systems, chat bots. A lot of these things just often need to have good average case performance. Chatbots maybe need good performance on a more individual basis.

Jesh Bratman:

In this case, you really, really care about every single decision that's being made. And even a few incorrect decisions can be really harmful in both directions. A false negative can be really harmful because you're letting through a potentially very damaging attack. A false positive can be harmful in that it disrupts the business or organization you're protecting, because they won't get an email, right? There's a lot of ways to kind of mitigate that from the product point of view. But from the machine learning point of view, which is really what I focus on, it's really necessary to have very careful and diligent evaluations of your models. It's especially hard when you have this requirement for both high precision and high recall: you have to be extremely careful about everything in your data cleaning and test and holdout sets, so that you are actually really confident in the performance of these models.
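
As a minimal sketch of that evaluation discipline, the snippet below scores a held-out set and reports both error directions, since each is costly in its own way. The labels here are hypothetical toy data.

```python
def precision_recall(y_true, y_pred):
    """y_true: 1 = actual attack; y_pred: 1 = model flagged it."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)  # blocked good mail
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)  # missed attacks
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```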

Jesh Bratman:

Because one of the very scary things is when you push one of these models out into production. How do you make sure that it is actually at the precision and recall level that you expect it to be? And how do you make sure it continues to be at that level when all sorts of new things that you haven't anticipated could pop up? I think this is a perennial problem in all AI applications these days. I think a lot of the easier ML problems have been solved, and now you're seeing these much more difficult, challenging problems where this last mile is a big deal. I think self-driving cars is a good example of that, right?

Jesh Bratman:

One of the things that's taking this technology forever to actually get over the finish line is all of these edge cases, right? That is one of the inherent problems with ML: it often does a very good job on the average case, but there are always these outliers. And you have to limit the impact of those outliers. So it's hard. I think it's one of the hardest parts of the problem that Abnormal Security is dealing with, ensuring those very low false negative and false positive rates.

Host:

I agree. I think this sounds like a very complicated learning solution. I mean, I guess the one thing when you were mentioning about false positives, false negatives, how many emails have to go through the system before, I guess, a model can go live? What's the threshold you guys have kind of discovered to be able to build that graph and understand when that right precision point is available for a company?

Jesh Bratman:

Well, in terms of measuring the precision, we do statistical tests on it. So we say, "This is the number of messages that we have to evaluate this on to have confidence in what its precision is going to be." We do have to make a bit of a leap of faith when it comes to a new client integration, although we do a lot of QA in the process. The models we're building are generalizable, so we don't necessarily have to build a new model for a new client that comes on board. We do expect generalization.
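
One plausible form of such a statistical test, offered as an assumption rather than Abnormal's actual method, is a confidence interval on measured precision, which makes explicit how many messages are needed before the number can be believed.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z * math.sqrt(p * (1 - p) / trials
                            + z**2 / (4 * trials**2)) / denom)
    return (center - margin, center + margin)

# 198 true attacks among 200 flagged messages: the point estimate is
# 0.99, but the interval shows what that sample size actually supports.
low, high = wilson_interval(198, 200)
print(f"precision is in [{low:.3f}, {high:.3f}] with 95% confidence")
```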

Jesh Bratman:

So email attacks are similar across everybody, right? This is a common problem for everybody. Different organizations do have particular types of attacks based on what industry they're in. One of our clients is an energy company, and they're attacked in maybe more sophisticated ways, because there are state actors involved in those attacks, than maybe a retail or manufacturing company that has more purely financially motivated criminals going after them. So the types of attacks you're seeing will depend a lot on the industry. But as we build up more data, we have more generalizable models as that goes.

Host:

It's funny. As I'm realizing in every podcast, I have a COVID factor question, because the environment's changed. Has there been any change for you guys during the period where everyone was remote versus before? Was that an issue or not?

Jesh Bratman:

I think the first answer is about our product and the problem space we're in, and it's only become more of a problem. Everyone's working from home; more and more business is being done over Zoom and email and Slack. And more invoicing is being done over email now. People can't go print things out and hand them to each other. So the amount of cyber attacks has also gone up. People are capitalizing on the fact that more business is being done over the internet now, but they're also capitalizing on the chaos. We saw, right at the beginning of COVID, a huge, huge spike in different types of opportunistic attacks around COVID. We have some blog posts out about it as well. But for example, people pretending to be messages from the CDC or from the White House. Or there was a particularly malicious one that was saying, "Oh, this is how you have to go claim your stimulus check. You have to go through this process here and give your social security number and bank account, routing number," or something like that.

Jesh Bratman:

So there's been a huge increase in attacks. And for us as a company, luckily and unluckily, cybersecurity budgets kind of can't go away even if companies are struggling. It's sort of a crucial line item to protect an organization. From that point of view, I mean, we have had to train a lot of new models just in the last few weeks specifically to account for some of these new things we're seeing.

Host:

And I'll get links to those blog posts to share alongside this discussion. On the security point, I was interviewing someone, and he said, "Security cannot take a rest." I think he mentioned security, compliance, privacy. Those do not rest, no matter what's going on. And I think that you're very right about that. So I guess just to touch on one thing: I know we've talked about the product, and you mentioned the business value that ML can provide. I guess this comes up in two contexts, right? I want to talk first about what you've seen across your career when you're approaching and solving these problems. This is a really challenging problem, and Twitter had a different use case. When you're talking about the business value you can generate from your work, how do those discussions look internally or with other peers?

Jesh Bratman:

I think that's an interesting question that a lot of people in ML, aspiring data scientists, maybe don't even think about: the business value of ML is really crucial to think about. ML is sort of a hammer, right? It's a way to generalize from data, to produce automated systems from data that make intelligent decisions. Now, there's no point in making those intelligent decisions if no one's willing to pay for them or incorporate them for some reason into something that they're doing. One thing that's really important to remember, especially when you're working at a technology startup, is that you're trying to create a product people want to buy. And you can't really approach it as, "Oh, I have this ML solution to solve this interesting ML AI problem. And I'm going to build that, and that's going to then ... Customers are going to come to that." That's not how the world works.

Jesh Bratman:

You have to start with the product, start with something that people want to buy. In this case with Abnormal, it's preventing cyber attacks, particularly, as we started, business email compromise (BEC) attacks, phishing attacks, and account takeover attacks. So the product that the customers want is those things. Now, the solution to that may or may not be machine learning, right? Maybe you can just start with building heuristic systems to try to stop these things, or building threat intelligence systems, which is sort of the classic approach to stopping this type of attack. We came in with a hypothesis that ML could do this much better than what the current incumbents were able to do in the email security space. And to prove that out, we had to be very honest with ourselves. And so, as the rest of the team and I were building up this product, we were additionally building up the best we could do without ML. Because if you can have a non-ML solution, it's usually better. It's easier to maintain, and it has fewer of these edge cases to deal with.

Jesh Bratman:

So you always want to build a baseline that doesn't involve machine learning, to see how much of the product's business value is derived from machine learning. And as someone who's coming in as the technology expert here, as the machine learning person, you want your hammer, the thing you're an expert at, to be the solution. But I've seen a lot of people put blinders on and not think about the whole problem. You have to think about the actual business value you're trying to create, about where this hammer is useful and where it's not, and focus on where it really is.
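
To make the baseline idea concrete, here is a sketch of a rule-based system whose performance bounds how much extra value an ML model has to earn; the phrases and rules are invented for illustration.

```python
SUSPICIOUS_PHRASES = ("reset your password", "urgent wire transfer",
                      "purchase gift cards")

def heuristic_baseline(text: str, first_contact: bool) -> bool:
    """Flag when a known-risky phrase arrives from a never-seen sender."""
    lowered = text.lower()
    return first_contact and any(p in lowered for p in SUSPICIOUS_PHRASES)

# If this baseline already catches most attacks at acceptable precision,
# the ML system only earns its maintenance cost on the harder remainder.
print(heuristic_baseline("Please purchase gift cards ASAP", True))   # True
print(heuristic_baseline("Please purchase gift cards ASAP", False))  # False
```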

Jesh Bratman:

In our case, when we first started the company, we didn't have a lot of data. And it wasn't until we started getting some pilot customers that we were able to really get the amount of data that we needed to build good ML models. Once we did, once we started sort of building up that data flywheel, the ML solutions vastly outstripped anything we could do without ML. Those were directly in line with what our customers wanted.

Jesh Bratman:

And I think one other thing that's interesting is thinking creatively about what else ML can do to provide value. One very specific example for Abnormal is that we have these models that predict whether an email is an attack or not. The core of our product is just to block that email or not. But there's a lot of additional value in providing ML explainability: giving the insights back to the customer about why this was an attack, why the ML model was able to identify it as an attack. That helps their security analysts understand the landscape of problems, understand the factors they may not be considering themselves when they're thinking about vulnerabilities in their organization.
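
As a minimal illustration of surfacing the "why" alongside the verdict: for a linear model, each feature's contribution is just its weight times its value. The feature names and weights below are hypothetical, and real systems often use richer attribution methods such as SHAP.

```python
# Hypothetical weights from a linear attack classifier.
WEIGHTS = {
    "first_contact": 1.2,
    "solicits_payment": 2.1,
    "lookalike_domain": 3.0,
}

def explain(features: dict) -> list:
    """Rank the features that pushed this message toward 'attack'."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return [f"{name}: +{score:.1f}" for name, score in ranked if score > 0]

print(explain({"first_contact": 1.0, "solicits_payment": 1.0,
               "lookalike_domain": 0.0}))
# ['solicits_payment: +2.1', 'first_contact: +1.2']
```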

Jesh Bratman:

There's a lot of value that can be provided by something beyond the ML classification itself, beyond making the best classification possible: providing context back to an organization. That adds all sorts of value in helping them understand the problem, but also in helping them sell the product internally, which matters for these sorts of enterprise software products. You want to not only do the ML, but show what it's doing and why it's working and why it's actually better than what maybe some competitive solutions are trying to do. It's really important as an ML practitioner to not just be in your hole of "here's the technology problem I'm trying to solve," but to think about the bigger picture of what you're actually trying to accomplish.

Host:

Yeah. I'd imagine that your natural instinct is to bring the hammer in, and it takes effort to step back and see whether or not there's a viable solution without the sledgehammer. That's got to push against someone's own mental and emotional side, because you obviously gravitate toward what you know, what you can do. And you're like, I can solve it faster. So it's an interesting viewpoint that you sometimes have to temper that and look at the alternative before you proceed.

Jesh Bratman:

Our CTO and one of the co-founders, he has some ML background too, but he's very much from the software point of view. And so he and I have a very healthy dynamic where we butt heads on this. He's like, "Well, why can't we just try to solve it in this way?" And then we go back and forth and find the right combination of techniques.

Host:

That's awesome. That seems like a really good counterbalance, so that you guys are kind of working through the right solution. I guess in terms of the solution, who typically uses Abnormal? I've always kind of viewed security like having ADT or some security system at your home: you're either ahead of the curve because you're worried about it, or you had an issue and now you're dealing with it reactively. Who tends to be the typical customer for the product?

Jesh Bratman:

It's a combination of both. Our customers are large organizations, so Fortune 500 and Fortune 1000 companies. And some of them come to us because they have a sophisticated security team who is looking to the future and trying to identify who the leaders in this space are, and who is doing something that hasn't been done before. That's some of our customers. Some of our customers are in the other camp you mentioned, in that they were hit by something. Everybody's being hit by something, so this isn't unusual. But I think what's kind of happening is there's an old generation of email security that hasn't been able to keep up with the attacks that are happening. The attackers are getting more sophisticated. And for one reason or another, the traditional secure email gateway (SEG) companies are letting through a lot of attacks.

Jesh Bratman:

So almost every organization is aware of this and is just trying to find a solution. And sometimes they may buy us on top of one of those products. Sometimes we may replace one of those other products. And in those cases, these organizations have had something bad happen to them. They've lost money or they've had a pretty bad security breach. So everyone's trying to find a solution together, which is one thing I really like about being in the security space: everyone's sort of on the same side here. No one wants these things to be able to happen. In comparison, some other places in ML, even just different areas of technology startups, are more of an adversarial space where you're trying to convince your clients that they need you. In this case, there's no convincing that really needs to happen. Our clients know that they need more security. It's just a question of whether our products can deliver it or not.

Host:

You mentioned everyone being on the same side. I was going to ask you to talk a little bit about the chess match that happens, hacker versus ML models, the dynamics that are at play, and, I guess, the evolution of how this is being played out now.

Jesh Bratman:

This is something new to me as an ML person. The problem is adversarial. The attackers know that there are ML models behind the scenes trying to block their attacks. They know this. And they're also very smart; a lot of them are very intelligent. They probably have a knowledge of how machine learning works, and they can probably hypothesize what the models are picking up on and what they're not picking up on. So they're constantly trying to get around both the automated security and also the individual people they're targeting, right? They're trying to trick them in new ways. From the machine learning point of view, there's a long history of this. I think one of the simplest things you'd have seen a long time ago is in spam: hackers would paste large blocks of extra text at the bottom to try to throw off spam models, right? And that's still being done.

Jesh Bratman:

But they're also doing very sophisticated things. So for example, one of our features, and probably a feature of other companies in a similar space to us, is how often have these two people communicated in the past? It's a pretty obvious feature that you might use to understand whether this is fraud. Well, attackers will try to hack that feature by creating innocuous communication over the course of months with an individual or with an organization. They might send a message that's just, "Oh, test message. Disregard this," right? And maybe no one will even think anything of that. And that might happen every once in a while over the course of a few months. Now they've built up this reputation that there is communication between these people. That's just the beginning of this whole space of ways they can try to manipulate the models and get around them.
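
Here is a toy sketch of the naive feature being gamed, plus one plausible hardening; the mitigation is an illustrative assumption, not Abnormal's actual defense.

```python
def raw_history_feature(exchanges) -> int:
    """The naive count an attacker can inflate with throwaway messages.
    exchanges: list of (direction, word_count) tuples."""
    return len(exchanges)

def hardened_history_feature(exchanges) -> float:
    """Only substantive replies from the target earn real reputation;
    one-directional padding contributes almost nothing."""
    score = 0.0
    for direction, word_count in exchanges:
        if direction == "reply_from_target" and word_count >= 20:
            score += 1.0
        else:
            score += 0.05
    return score

padding = [("from_attacker", 4)] * 10  # ten "disregard this" messages
print(raw_history_feature(padding))       # 10: looks like an established contact
print(hardened_history_feature(padding))  # 0.5: still looks like a stranger
```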

Host:

It's amazing. Like you mentioned, I think what fascinates me about security is that everyone's on one team, and there's a completely different team on the other side, and everyone's united with different technologies, weapons, options, tools, whatever we want to call them, to combat that. There's this ebb and flow. Since the beginning of time, we've had security problems of various kinds, from 100,000 years ago to now. I don't know if this is ever going to go away. I think it's human nature. But very interesting.

Jesh Bratman:

One other thing, just to note: we found that in some of the hacker communities, they're using A/B testing software on their own phishing emails. So they're doing data science, machine learning stuff on their end to try to get around defenses. They'll have a sort of bandit system that sends out particular types of attacks, sees how many responses they get, and chooses the ones that have the best efficacy.

Host:

Wow. Interesting. That's scary.

Jesh Bratman:

Yeah. It is very scary to imagine there are these organizations, large organizations, some of them state-funded, that are probably little tech startups in themselves, on the other end of it.

Host:

Geez. I know I wanted to ask you about one thing that's a little out of context, but I think you have such a good background, and you mentioned a viewpoint on it. So I know you worked at a startup that got acquired by Twitter, and you're a part of the founding team at Abnormal. So I think we want to talk about evaluating opportunities in ML. You were talking in our pre-call a little bit about what somebody should be looking at if they're taking that first job or transitioning into a role, and how to evaluate how one of these opportunities fares versus another. So maybe you could share some of your thoughts there.

Jesh Bratman:

Yeah. I think there are two sides to this: one is evaluating opportunities for doing ML, and the other is the type of ML engineer or data scientist you want to be and how that fits into that career choice. I've found that roles in which I'm doing ML and that component is crucial to the product have been much better than the cases where the ML is kind of tangential to the product itself.

Jesh Bratman:

And this is a contrast in my particular career between Twitter and the two startups that I've worked at. The two startups I've worked at, Abnormal and TellApart, which was the one acquired by Twitter, were both crucially dependent on the ML component. The value of the product was directly proportional to how well the ML system worked. At TellApart, it was how well you can buy ads, how well you can give recommendations of products to users, how well you can give recommendations of emails for the email marketing product. The value of the product was the machine intelligence behind it. At Twitter, the value of the product is Twitter itself. ML is a way to make Twitter better. Now, I really loved my time at Twitter. However, I felt that there were a lot of things that kind of put me and the rest of the ML team a little bit on the side, because ML was trying to make Twitter better, but it was always on the side. It was always a sort of improvement to the product. It wasn't at the center.

Jesh Bratman:

At Abnormal, again, ML is the center of it. The hard problem here is detecting these attacks. This is the value that we're providing. There's really no other reason anyone is buying our product other than whether we can detect attacks better than anyone else. And the way we detect attacks better than anyone else is having the best ML and AI system to do so. My recommendation for anyone who's interested in working at a startup in an ML role is to really consider whether or not the product's value is crucially dependent on ML. If it is, you're going to be in a much better position. I believe you will learn more, you will be pushed harder, and you will be forced to overcome more challenges than you will if ML is an improvement on top of a product rather than the core of it.

Host:

That's interesting, because I guess the business value that we talked about originally ties back in: if it's part of the core, you're part of the business strategy of how you're going to generate revenue. As you mentioned with Abnormal, people come to you guys because your machine learning solves a problem for them. Not that they care what the model is, but obviously that's really the core of what they're buying. So it's interesting that where the ML sits, and how close it is to core business strategy, might make an opportunity more or less interesting to a potential candidate.

Jesh Bratman:

There's a lot of unexpected differences in the type of work that you'll end up doing in the two cases. I think what you don't realize is that you will probably get pulled into doing more data science and less actual building of ML systems if the product doesn't depend crucially on ML. So you'll often get pulled into things that are urgent at that startup, which may not be the thing you want to do, because at a startup there are often so many unanticipated things. It's sort of all hands on deck for solving problem A, B, or C, right? So if you're at a startup in which ML is a third priority and you're an ML engineer or a data scientist, you're going to be pulled into the other half of that equation. You're going to be pulled more into data analysis and analytics on one side, or more into software engineering on the other side.

Jesh Bratman:

But if you're at a company or a startup where ML is crucial, that's always going to be the fire, right? It's always going to be "make this better," right? And so when things get rough at the startup, you're going to be pulled directly into the thing you want to be doing. You want to be the one at the center of things; you want to be the one really solving the business problems.

Host:

That's good advice. I think we'll actually make sure that gets out there on its own, because a lot of people, as they're seeing this from the outside, probably don't see the distinction. I think that's fantastic. I definitely appreciate your time. Time flies on podcasts, as I've mentioned. I think we could keep talking, but I'm cognizant of you getting back to your day and fighting email attacks. So I appreciate you being on. Thanks.

Jesh Bratman:

Yeah. Thanks for having me.

Host:

Awesome. And if someone does want to reach out, follow up, or whatnot with you, what's the best platform to reach out to you?

Jesh Bratman:

Email or Twitter is fine as well.

Host:

Okay, awesome. And we'll put your Twitter handle in the show notes when we release it. And until we come back next week for another episode with another guest, I appreciate any feedback. Drop a review on a platform, let us know how we're doing, what topics you want, anything you want to hear differently. We're all ears on this. Until next week, thank you.
