New Research: AI’s Role in the Escalating Email Attack Landscape

Explore new research on how AI is amplifying the impact of BEC and VEC attacks and learn how to defend against these evolving email security threats.
February 12, 2025

Email was never designed with security in mind, yet it has become the foundation of modern business communication—leaving organizations to address its inherent vulnerabilities. Over the decades, additional security layers have been implemented to mitigate email’s risks, but attackers have consistently adapted, finding new ways to exploit both technological weaknesses and human vulnerabilities.

Now, with the widespread availability of AI tools, protecting the inbox is even more challenging, as cybercriminals can launch highly sophisticated and scalable attacks with unprecedented ease.

In our latest email threat report, released today, we examine recent trends in advanced attacks—including how malicious AI has escalated the ongoing battle between threat actors and security teams.

Median Monthly BEC Attacks Grow by More Than Half

In business email compromise (BEC) attacks, threat actors meticulously research their targets and employ advanced social engineering tactics to impersonate a colleague or superior, manipulating employees into providing confidential information or completing fraudulent financial transactions.

While BEC only accounts for a small percentage of overall advanced email attacks, it’s one of the most financially costly cybercrimes, causing $2.9 billion in losses in 2023 alone. It’s also becoming more prevalent as threat actors increasingly turn to emerging tools and dark web resources to streamline and scale their attacks.

Between 2023 and 2024, median monthly BEC attacks grew by more than 54%, topping out at nearly 20 attacks per 1,000 mailboxes in June 2024—roughly triple the number of attacks organizations saw in June 2023.

[Figure: Median monthly BEC attacks (H1 2025 Email Threat Report)]

BEC had already established itself as a leading cyber threat, but the proliferation and democratization of AI have complicated matters, to say the least. AI-powered tools can analyze extensive datasets from social media, online activity, and previous communications to craft hyper-personalized messages that convincingly mimic the writing style of the impersonated individual. These advanced techniques not only raise the likelihood of evading traditional security measures but also increase the odds of deceiving recipients, heightening the risk posed by BEC campaigns.

Additionally, while legitimate tools like ChatGPT have built-in measures to prevent malicious use, these can be circumvented with the right prompts. Plus, in the past two years, multiple uncensored AI chatbots and even a large language model (LLM) designed specifically for cybercriminals have surfaced, empowering novice attackers and helping experienced threat actors enhance their campaigns.

Vendor Email Compromise Endures as a Persistent Threat

A subset of BEC, vendor email compromise (VEC) involves the impersonation of trusted vendors to manipulate targets into paying bogus invoices, updating banking details to divert funds from legitimate accounts, or completing fraudulent wire transfers. In some cases, attackers leverage compromised vendor email accounts and even hijack existing threads to deceive targets.

Consistent with previous years, VEC attacks showed no signs of slowing. During any given week in 2024, organizations had, on average, a 70% chance of receiving at least one VEC attack, an increase of more than 10% over 2023.

[Figure: Organizations targeted by VEC (H1 2025 Email Threat Report)]
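To put that weekly figure in perspective, here is a back-of-the-envelope calculation. It assumes the 70% weekly probability holds every week and that weeks are independent, neither of which the report itself claims:

```python
# Rough implications of a 70% weekly chance of receiving a VEC attack.
# Assumption (not from the report): weeks are independent and identically likely.
p_week = 0.70                                   # P(>=1 VEC attack in a given week)
weeks = 52

expected_weeks_hit = p_week * weeks             # expected weeks per year with an attack
p_at_least_one_yearly = 1 - (1 - p_week) ** weeks

print(f"Expected weeks with a VEC attack: {expected_weeks_hit:.1f}")
print(f"P(at least one attack all year):  {p_at_least_one_yearly:.10f}")
```

Under those assumptions, a typical organization would see VEC attempts in roughly 36 weeks of the year, and the chance of going a full year untouched is effectively zero.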

One of the biggest challenges in detecting VEC attacks is that the emails often appear routine. Employees regularly receive invoice reminders or payment update requests, making fraudulent messages harder to spot. Additionally, the scale of some vendor ecosystems means employees often lack visibility into individual relationships, leaving them unsure whether a request is unusual. This is especially true for new hires or employees reassigned to a different role after restructuring.

Threat actors exploit this, and with AI-powered tools, they can generate remarkably believable messages that mirror real vendor communications, complete with realistic language, formatting, and urgency cues.

In a tight job market, with economic uncertainty and persistent layoff concerns, employees may also rush to resolve an apparent oversight—like a missing payment—without verifying the request. AI further amplifies this risk by helping cybercriminals make fraudulent invoices and follow-up messages more persuasive, increasing the likelihood of success.

This proliferation of weaponized generative AI—combined with the wealth of personal data easily found online and the treasure trove of hacking tools available on dark web forums—not only empowers veteran cybercriminals to up-level their vendor fraud attacks but also enables less experienced attackers to begin engaging in VEC.

Securing Your Organization Against Evolving Email Threats

The modern email threat landscape is defined by constant evolution, with cybercriminals continually responding to heightened awareness and improved defenses by devising new attack strategies designed to outmaneuver them.

Generative AI has further amplified the problem, allowing threat actors to create highly sophisticated business email compromise and vendor email compromise attacks that appear indistinguishable from legitimate communications.

However, these attacks can be effectively neutralized with the right solution—one that leverages AI to analyze identity, context, and content and build behavioral baselines for every identity in your cloud environment. Understanding an organization’s unique communication patterns enables an AI-native email security platform to precisely detect and then automatically remediate anomalous messages before they ever reach employee inboxes.
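As a rough illustration of the behavioral-baseline idea, the toy sketch below learns "normal" behavior per sender from historical mail and scores new messages by how far they deviate. This is not Abnormal's implementation; the `Email` fields, the two tracked signals (known recipients and typical sending hours), and the scoring scheme are all simplified assumptions for demonstration:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Email:
    sender: str     # claimed sender address
    recipient: str
    hour: int       # local hour of day the message was sent (0-23)


class BehavioralBaseline:
    """Toy per-sender baseline built from historical mail flow.

    A production platform would model many more signals (tone, routing,
    invoice details, login context, etc.); this tracks only known
    recipients and typical sending hours for each sender.
    """

    def __init__(self) -> None:
        self._recipients = defaultdict(set)
        self._hours = defaultdict(set)

    def observe(self, email: Email) -> None:
        # Learn what "normal" looks like from past messages.
        self._recipients[email.sender].add(email.recipient)
        self._hours[email.sender].add(email.hour)

    def anomaly_score(self, email: Email) -> float:
        # 0.0 = fully consistent with history; 1.0 = every tracked
        # attribute deviates from this sender's baseline.
        deviations = 0
        if email.recipient not in self._recipients[email.sender]:
            deviations += 1
        if email.hour not in self._hours[email.sender]:
            deviations += 1
        return deviations / 2
```

A message from a vendor's billing address to a brand-new recipient at 3 a.m. would score 1.0, while one matching the sender's established patterns would score 0.0; a real system would then remediate high-scoring messages before delivery.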

Investing in AI-native, API-based email security is no longer optional—it’s essential. By proactively blocking these attacks, organizations can protect their employees and mitigate the risk of costly mistakes, securing their operations against both existing and emerging threats.

For additional insights into the attack landscape and recent threat trends, download the H1 2025 Email Threat Report.
