Attackers Are Bypassing MFA: Do You Know How to Stop Them?

See how advanced attacks are able to bypass MFA and how you can protect your organization.
October 10, 2023

For the better part of the last decade, while security tools and trends came and went, one mantra has remained consistent: “Multi-factor authentication (MFA) is 99.9% effective at protecting users from account compromise.”

Sure, SIM swapping attacks degraded the efficacy of SMS one-time passcodes (OTPs), since attackers could hijack a user’s phone number through clever social engineering, but these attacks are still fairly uncommon and are typically thwarted by authenticator apps, biometrics, and push notifications.

October is Cybersecurity Awareness Month, so before I dive into this topic, a quick caveat: I do not want to be a pessimist. MFA is still standard practice, and all users should enable it. But October is also the spookiest month of the year, so let’s talk about the attacks that still break MFA and how you can protect your organization.

Session Hijacking and MFA Fatigue: Very Different Tactics, Very Similar Results

Session hijacking (or token theft) seems to be the tactic du jour for many threat actors: token forgery was deployed in the recent attacks on various governmental bodies, and purchased session tokens from the dark web led to the breach of EA a couple of years back.

It’s easy to understand why this tactic is so preferred: attackers can avoid authentication flows—often protected with behavioral analytics to detect unusual activity—and avoid the headache of cracking MFA or finding a user who has not enabled it.

Instead, a threat actor can simply piggyback onto an active Slack session (to use an example based on real-world attack analyses). Then, with a bit of savvy social engineering, the attacker can convince IT to reset passcodes, add new MFA devices, and complete any other tasks necessary to gain complete control of the hijacked account. It should go without saying, but no amount of education on the importance of enabling MFA can stop this sort of attack, which makes it one of the more insidious tactics to crop up in recent years.

MFA fatigue, conversely, takes the approach of ramming MFA head-on until it breaks. With compromised credentials in hand (and considering credential compromise has jumped 300% in the past year, there is no shortage of targets), attackers sign in and sign in and sign in, sending a seemingly endless wave of MFA push notifications asking a user to validate the authentication. As this tactic is normally used late at night or early in the morning when victims are more likely to blindly accept simply to make the notifications stop, it’s no wonder that 62% of consumers report having experienced an MFA fatigue attack.
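
The burst pattern itself is something defenders can watch for. Below is a minimal sketch, assuming a simple feed of (user, timestamp) push-notification events and illustrative thresholds of my own choosing, of how an unusual volume of prompts in a short window might be flagged:

    # Hypothetical sketch: flag a burst of MFA push prompts for one user.
    # The event schema and thresholds are illustrative assumptions, not any
    # vendor's actual detection logic.
    from collections import deque
    from datetime import timedelta

    WINDOW = timedelta(minutes=10)   # look-back window
    MAX_PROMPTS = 5                  # prompts per window before alerting

    def detect_push_bursts(push_events):
        """push_events: iterable of (user, timestamp) tuples, sorted by time.
        Yields (user, count) whenever a user exceeds MAX_PROMPTS in WINDOW."""
        recent = {}  # user -> deque of recent prompt timestamps
        for user, ts in push_events:
            window = recent.setdefault(user, deque())
            window.append(ts)
            # Drop prompts that have aged out of the look-back window.
            while window and ts - window[0] > WINDOW:
                window.popleft()
            if len(window) > MAX_PROMPTS:
                yield user, len(window)

In practice, an alert like this would feed into rate limiting or a temporary block on further prompts for the affected user, rather than relying on a sleepy victim to keep declining.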

So, understanding that MFA can be either bypassed or hit with a figurative battering ram, what can you do to protect your users?

Advanced AI Analyzes Behavior and Spots the Imposter

Aside from the MFA mantra mentioned in the intro, there is another common refrain in cybersecurity, especially in recent months: “AI-based solutions can solve all of your problems.”

The recent AI boom has produced as many legitimate AI-based providers as it has pretenders, so it is important to consider how a vendor actually uses AI to detect threat actors who have bypassed MFA. Detection methods range from basic to advanced, and their efficacy varies just as widely. At the core of any effective solution, though, is an understanding of user behavior. To catch an attacker who has skipped the inbox, and possibly skipped authentication altogether, the next best option is to detect deviations from baseline behavior.

But it is not that simple. A user who normally authenticates from New York and is suddenly in Berlin may simply be on vacation. More basic solutions may flag the location change as suspicious, but that sends security teams chasing false positives. Any potentially suspicious event needs to be considered in the context of all other behavioral events associated with a given user. A true AI-based solution picks up on this context, first determining what constitutes baseline behavior and then determining whether changes in that behavior are likely the result of compromise.

To build on the above example, let’s say a threat actor in Berlin has stolen session tokens for an active Slack session. That threat actor then convinces IT that the compromised user has lost their MFA device and needs a new one registered to their account. IT complies, and this attacker can now consistently access the account they have hijacked. To positively identify compromise, you cannot simply say, “This user has initiated a Slack session in Berlin when they usually work out of New York. They are a threat.” That is a significant leap of logic for a human security practitioner, let alone an effective AI model. Unfortunately, this is the way many of the more basic solutions operate.

Instead, you want to look for a security solution that takes into account the broader context of the attack: this user initiated a session in Berlin but also has an active Okta session in New York from 12 hours ago. That is possibly enough time to travel to Berlin, but it is unusual that no Okta session was initiated in Berlin. It is similarly unusual that the new MFA device the user registered is an Android phone when that user typically uses Apple products. All of these signals in concert paint a clearer picture than any one of them taken in a vacuum.
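
To make that concrete, here is a minimal sketch of how those signals might be weighed together rather than in isolation. The signal names, weights, and alert threshold are illustrative assumptions, not any vendor’s actual model:

    # Hypothetical sketch: score a session against a user's baseline by
    # combining several weak signals instead of alerting on any single one.
    SIGNAL_WEIGHTS = {
        "new_location": 0.3,             # session from a city outside the baseline
        "no_auth_in_new_location": 0.4,  # no IdP sign-in observed from that city
        "new_mfa_device_registered": 0.3,
        "unusual_device_platform": 0.2,  # e.g., Android where the baseline is Apple
    }
    ALERT_THRESHOLD = 0.8

    def score_session(session, baseline):
        """session and baseline are dicts of observed vs. expected attributes."""
        signals = {
            "new_location": session["city"] not in baseline["cities"],
            "no_auth_in_new_location": session["city"] not in session["idp_sign_in_cities"],
            "new_mfa_device_registered": session["new_mfa_device"],
            "unusual_device_platform": session["platform"] not in baseline["platforms"],
        }
        score = sum(SIGNAL_WEIGHTS[name] for name, fired in signals.items() if fired)
        return score, score >= ALERT_THRESHOLD

    # In the Berlin example: a new city, no Okta sign-in there, and a newly
    # registered Android device fire every signal and clear the threshold,
    # while a vacationing user who also authenticates from Berlin does not.

A production system would learn both the baseline and the weights from data rather than hard-coding them, but the principle is the same: no single signal is conclusive; the combination is.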

How Abnormal Uses AI to Stop MFA Bypass and Beyond

Abnormal’s Account Takeover Protection analyzes signals like those in the example above, along with more than 40,000 others, to detect MFA bypass attacks and account takeovers in general. Email communication patterns, IP addresses, changes to mail rules, and user privileges are among the behavioral indicators correlated to confirm when account compromise has occurred, even when no known indicators of compromise (IOCs) such as malicious IP addresses are present.

From there, AI meets automation: access to the compromised account is automatically blocked, all sessions are terminated, and passwords are reset. The analyzed events are made available to security teams in a comprehensive Abnormal Case file to support investigation. Considering it takes 328 days on average to detect and contain a breach that began with credential compromise, this automated detection and remediation significantly reduces the potential damage. In fact, Abnormal saves organizations $50k per instance of account takeover, on average, and remediates compromised accounts in less than six seconds.
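
The containment steps themselves generalize beyond any one product. As a rough sketch of the sequence, written against a hypothetical identity-provider client rather than Abnormal’s actual implementation:

    # Hypothetical sketch of an automated containment sequence for a
    # compromised account. The idp_client object and its methods are
    # placeholders for whatever identity-provider API is in use; this is
    # not Abnormal's implementation.
    def contain_account(idp_client, user_id, case_log):
        # 1. Cut off new access attempts immediately.
        idp_client.suspend_user(user_id)
        case_log.append(f"suspended {user_id}")

        # 2. Invalidate anything the attacker already holds: active sessions
        #    and refresh tokens.
        idp_client.revoke_all_sessions(user_id)
        case_log.append(f"revoked sessions for {user_id}")

        # 3. Force a credential reset so the stolen password is useless.
        idp_client.expire_password(user_id)
        case_log.append(f"expired password for {user_id}")

Ordering matters here: access is cut and existing sessions are revoked before the password reset, so a stolen token cannot be used to ride through the reset.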

MFA may no longer be 99.9% effective, but with AI-based account takeover protection from Abnormal, security teams can feel confident that when compromise does occur, threat actors won’t get far.

Interested in learning more about MFA bypass and how Abnormal keeps your organization safe? Schedule a demo today!
