Account Compromise Arms Race: Behavioural AI as the Key to Phishing Defense
Cybercriminals are continuously innovating, developing sophisticated techniques to bypass traditional security controls. From QR code phishing to social engineering scams and non-email-based credential theft, attackers are diversifying their methods in the ongoing arms race of account compromise.
Despite advancements like multi-factor authentication (MFA) and passkeys, threat actors continue to exploit human behavior, technical vulnerabilities, and security gaps. This is where behavioural AI comes into play.
In the third edition of the Account Compromise Arms Race blog series, we explore how behavioural AI proactively detects phishing attempts and account compromises before they reach your users.
URL-Based Credential Phishing
You may be wondering: what does behavioural AI have to do with phishing resistance? Quite simply, it identifies phishing attacks and account compromise well before damage can occur.
Consider a common phishing email claiming to have documents "ready for your review" related to a 2024 bonus.

Clicking on “Review” takes you to a “Let’s confirm you are human” link, hosted on a compromised website belonging to a building company.

You are then asked to complete a challenge and finally presented with an exact replica of the Microsoft login panel, complete with company logo in the foreground and background.

Yet, despite all the layers of obfuscation and the use of a compromised builders[.]com domain (which likely has a good reputation) to host the initial challenge, behavioural AI is not fooled. The original email that delivered the phishing link contains a range of signals that are “abnormal”:
A suspicious URL is identified based on factors like an email address embedded or encoded in the URL, or a URL domain that doesn’t match the sender domain.
A sender who does not normally send to (and usually does not receive from) the organization.
A financially related request from a sender the recipient doesn’t normally correspond with and/or a recipient who doesn’t normally receive this type of request.

Abnormal Security's analysis of the threat reveals multiple indicators of malicious intent.
Combine all of these signals, and you get a clear picture of the attack. Without even crawling the link, behavioural AI identifies the threat, ensuring the malicious email never reaches the recipient.
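To make the idea of combining signals concrete, here is a minimal, purely illustrative Python sketch of how a few weak signals like these might be weighted and summed into a single verdict. The signal names, weights, threshold, and example data are assumptions made for illustration; they are not Abnormal Security’s actual model.

```python
import re
from urllib.parse import urlparse

# Hypothetical weights for a handful of the weak signals described above.
# A real behavioural AI model learns thousands of signals; this only shows
# how several individually weak indicators can add up to a confident verdict.
SIGNAL_WEIGHTS = {
    "url_embeds_email": 0.35,     # recipient address encoded inside the link
    "url_sender_mismatch": 0.30,  # link domain differs from the sender domain
    "unknown_sender": 0.20,       # no prior conversation history with this sender
    "financial_request": 0.25,    # bonus/invoice/payment language in the body
}

EMAIL_IN_URL = re.compile(r"[\w.+-]+(%40|@)[\w-]+\.[a-z]{2,}", re.I)
FINANCIAL_TERMS = re.compile(r"\b(bonus|invoice|payment|wire|gift card)\b", re.I)


def score_email(sender_domain: str, body: str, urls: list[str],
                known_senders: set[str]) -> float:
    """Sum the weights of every signal that fires; higher means more suspicious."""
    score = 0.0
    for url in urls:
        host = urlparse(url).hostname or ""
        if EMAIL_IN_URL.search(url):
            score += SIGNAL_WEIGHTS["url_embeds_email"]
        if sender_domain not in host:  # crude mismatch check, for illustration only
            score += SIGNAL_WEIGHTS["url_sender_mismatch"]
    if sender_domain not in known_senders:
        score += SIGNAL_WEIGHTS["unknown_sender"]
    if FINANCIAL_TERMS.search(body):
        score += SIGNAL_WEIGHTS["financial_request"]
    return score


# Hypothetical data resembling the "2024 bonus" email above: an unfamiliar sender,
# a link hosted on builders[.]com, and the recipient's address encoded in the URL.
score = score_email(
    sender_domain="hr-payroll-notices.com",
    body="Documents relating to your 2024 bonus are ready for your review.",
    urls=["https://builders.com/verify?user=jane.doe%40example.com"],
    known_senders={"supplier.example", "partner.example"},
)
print(f"suspicion score: {score:.2f}")  # 1.10, far above an illustrative 0.7 block threshold
```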
QR Code-Based Phishing Attacks
QR code phishing is another clear-cut case where behavioural AI detects an attack based on multiple abnormal signals.

Email pretending to be from Microsoft with a QR code to move the attack to a mobile phone.
The email comes from an abnormal sender, fails DKIM, and contains hidden characters. It also pretends to be from an automated system (based on the sender display name) and carries Microsoft branding, yet does not come from a Microsoft domain.

Abnormal Security's analysis of the threat reveals multiple indicators of malicious intent.
Without even decoding the QR code, behavioural AI rapidly removes the email based on multiple abnormal signals, which drive the attack probability well beyond normal thresholds.
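For readers who want a feel for how such header-level checks could look in code, here is a small sketch using Python’s standard email module. The signal names, the brand-to-domain mapping, and the hidden-character list are assumptions made for illustration, not Abnormal Security’s implementation.

```python
from email import message_from_string
from email.utils import parseaddr

# Zero-width and other invisible characters commonly inserted to evade keyword filters.
HIDDEN_CHARS = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

# Illustrative mapping: if a brand name appears in the display name, which sending
# domains would be legitimate for it.
BRAND_DOMAINS = {"microsoft": ("microsoft.com", "microsoftonline.com")}


def qr_phish_signals(raw_email: str) -> list[str]:
    """Return the abnormal header/body signals found in a raw RFC 5322 message."""
    msg = message_from_string(raw_email)
    signals = []

    # 1. DKIM failure reported by the receiving server in Authentication-Results.
    auth_results = msg.get("Authentication-Results", "").lower()
    if "dkim=fail" in auth_results or "dkim=none" in auth_results:
        signals.append("dkim_failure")

    # 2. Display name impersonates a brand that the sending domain does not belong to.
    display_name, address = parseaddr(msg.get("From", ""))
    sender_domain = address.rsplit("@", 1)[-1].lower()
    for brand, legit_domains in BRAND_DOMAINS.items():
        if brand in display_name.lower() and not sender_domain.endswith(legit_domains):
            signals.append("brand_impersonation")

    # 3. Hidden characters sprinkled into the body to defeat keyword matching
    #    (simplified: assumes a plain-text, non-multipart body).
    body = msg.get_payload() if not msg.is_multipart() else ""
    if any(ch in body for ch in HIDDEN_CHARS):
        signals.append("hidden_characters")

    return signals
```

In a real pipeline, checks like these would run alongside sender-history and behavioural features rather than in isolation.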
Non-Email-Based Credential Phishing
What about an attack that doesn’t arrive via email at all, for example one delivered via WhatsApp or SMS, where the target falls victim to session hijacking or info-stealer malware?
In this example, the threat actor gained access to the victim’s account and immediately registered a new MFA device.

Abnormal Security's Account Takeover (ATO) timeline reveals new MFA device registered.
A day later, the threat actor logged in from an abnormal location, using an operating system (macOS), ISP, and location that had never been seen before for this user.

Abnormal Security's Account Takeover (ATO) timeline reveals a foreign login with abnormal ISP, Location, and OS.
Two days later, a suspicious mail filter was created matching on the word “Donation”. In all likelihood, this account will be used for gift card fraud.

Abnormal Security's Account Takeover (ATO) timeline reveals a suspicious mail filter being created.
These signals, when combined, make a high-confidence case that the account has been compromised. Abnormal Security can then automatically remediate the account (a rough sketch of what these steps look like against the Microsoft Graph API follows the screenshot below):
All sessions are terminated for this user (booting the threat actor out of the account).
The user’s password is reset.
The user’s account access is blocked.

Abnormal Security's Account Takeover (ATO) detection combines multiple signals.
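Abnormal performs this remediation automatically. For readers curious what the equivalent steps look like when scripted by hand against a Microsoft 365 tenant, here is a rough, hypothetical sketch using the Microsoft Graph API; it assumes an access token with sufficient admin permissions and is not Abnormal Security’s code.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"


def remediate_account(user_id: str, token: str, temp_password: str) -> None:
    """Kick the attacker out, reset credentials, and block sign-in for one user."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

    # 1. Terminate all sessions: invalidates refresh tokens, signing the attacker out.
    requests.post(f"{GRAPH}/users/{user_id}/revokeSignInSessions",
                  headers=headers, timeout=30).raise_for_status()

    # 2. Reset the password and force a change at next sign-in.
    requests.patch(f"{GRAPH}/users/{user_id}", headers=headers, timeout=30, json={
        "passwordProfile": {
            "password": temp_password,
            "forceChangePasswordNextSignIn": True,
        }
    }).raise_for_status()

    # 3. Block further sign-ins until the incident has been investigated.
    requests.patch(f"{GRAPH}/users/{user_id}", headers=headers, timeout=30,
                   json={"accountEnabled": False}).raise_for_status()
```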
Socially-Engineered Attacks
The good old socially engineered attack is not stopped by any form of account control like MFA or passkeys. Fortunately, this is a specialty of behavioural AI, as the intent, sender frequency, sender domain, and many other signals all stand out in this type of attack.
For example, we see an attempt to defraud the organisation by pretending to be a customer and asking when an invoice is due for payment. The real customer is then issued a duplicate invoice with new payment details that send the money straight to the threat actor’s account.

Threat actor's email from a look-alike domain requesting payment information.
The email is very obviously an attack once multiple signals are identified, such as the following (a brief sketch of the domain checks appears after the screenshot below):
The sender domain is young and doesn’t match the one in the signature.
The sender is not normal for this organisation.
The recipient shouldn’t be seeing a financial request from this abnormal sender.

Abnormal Security's analysis of the threat reveals an invoice fraud attempt.
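As a rough illustration of the domain-related checks, the sketch below flags a sender/signature domain mismatch and look-alike domains using only the Python standard library. The regular expression, similarity threshold, and example domains are assumptions made for illustration; domain age, which would come from WHOIS registration data, is deliberately omitted here.

```python
import re
from difflib import SequenceMatcher

# Pull domains out of email addresses that appear in the message body/signature.
DOMAIN_IN_TEXT = re.compile(r"[\w-]+@([\w.-]+\.[a-z]{2,})", re.I)


def invoice_fraud_signals(sender_domain: str, body: str,
                          known_customer_domains: set[str]) -> list[str]:
    """Flag sender/signature mismatches and look-alike sender domains."""
    signals = []

    # 1. The domain in the email signature differs from the actual sender domain.
    signature_domains = {d.lower() for d in DOMAIN_IN_TEXT.findall(body)}
    if signature_domains and sender_domain.lower() not in signature_domains:
        signals.append("signature_domain_mismatch")

    # 2. The sender domain closely resembles a known customer domain (look-alike),
    #    e.g. "acme-corp.net" vs "acme-corp.com". A young registration date would
    #    be another signal, checked against WHOIS data (omitted in this sketch).
    for known in known_customer_domains:
        similarity = SequenceMatcher(None, sender_domain.lower(), known.lower()).ratio()
        if 0.7 <= similarity < 1.0:
            signals.append(f"lookalike_of_{known}")

    return signals


# Hypothetical example: a look-alike of a real customer domain.
print(invoice_fraud_signals(
    sender_domain="acme-corp.net",
    body="Please advise when invoice 4711 is due. -- Accounts, jane@acme-corp.com",
    known_customer_domains={"acme-corp.com"},
))  # ['signature_domain_mismatch', 'lookalike_of_acme-corp.com']
```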
Abnormal Security is a modern cloud email security platform that takes a fundamentally different approach to detecting and protecting against the widest range of advanced attacks, including new and targeted credential phishing, socially engineered attacks, payment fraud attempts, and many other email-borne attacks that legacy secure email gateway solutions can’t catch.
Behavioural science and machine learning are used to baseline normal conversation patterns, user activity, and the language used in email. This allows Abnormal Security to learn the “known good” behaviour for your organisation, making thousands of context signals available for every email and identifying anomalies, i.e. what is “abnormal”, with very high precision.
In addition, Abnormal Security tracks login telemetry for M365, Google Workspace, and many mainstream SaaS apps to baseline normal login behaviour, detect abnormal account activity that indicates account compromise, and take appropriate remediation action.
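As a simplified illustration of what such a login baseline might look like, here is a short Python sketch that records previously seen sign-in attributes per user and flags never-before-seen values. The attributes, example data, and simple set-membership logic are assumptions made for illustration; a production system would weigh many more factors (device, time of day, impossible travel, and so on).

```python
from dataclasses import dataclass, field


@dataclass
class LoginBaseline:
    """Per-user baseline of previously observed sign-in attributes."""
    countries: set[str] = field(default_factory=set)
    isps: set[str] = field(default_factory=set)
    operating_systems: set[str] = field(default_factory=set)

    def observe(self, country: str, isp: str, os_name: str) -> None:
        """Fold a known-good sign-in into the baseline."""
        self.countries.add(country)
        self.isps.add(isp)
        self.operating_systems.add(os_name)

    def anomalies(self, country: str, isp: str, os_name: str) -> list[str]:
        """Return which attributes of a new sign-in have never been seen before."""
        flagged = []
        if country not in self.countries:
            flagged.append("new_country")
        if isp not in self.isps:
            flagged.append("new_isp")
        if os_name not in self.operating_systems:
            flagged.append("new_os")
        return flagged


# Hypothetical example echoing the ATO timeline above: months of normal sign-ins,
# then a login with a never-before-seen country, ISP, and operating system.
baseline = LoginBaseline()
baseline.observe("AU", "ExampleISP", "Windows")
print(baseline.anomalies("NG", "OtherISP", "macOS"))  # ['new_country', 'new_isp', 'new_os']
```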
The Future of Phishing Defense: Why Behavioural AI is Essential
As cybercriminals evolve their tactics—using QR codes, social engineering, and non-email-based attacks—traditional defenses like MFA and secure email gateways are no longer enough.
Behavioural AI provides a proactive, adaptive security layer by analyzing thousands of signals to detect anomalies before threats reach users. By continuously learning and adapting, Abnormal Security ensures precise detection and prevention of phishing and account compromise. In the ongoing arms race against attackers, organizations must rely on intelligent, behaviour-based security to stay ahead.
Interested in learning more about how Abnormal can protect your organization? Schedule a demo today!
