
BEC in the Age of AI: The Growing Threat

Business email compromise (BEC) is growing as criminals adopt AI tools. See the trends and discover how to protect your business from cybercriminals.
February 27, 2025

Business email compromise (BEC) is one of the most financially damaging cyber threats today. According to the FBI’s latest Internet Crime Report, BEC resulted in over $2.7 billion in reported losses in 2023 alone—and the soon-to-be-released 2024 numbers are likely to be even worse.

While BEC had already solidified its position as a top threat, the rise of AI has made these attacks even more sophisticated and thus more difficult to detect. Cybercriminals are leveraging AI-driven tools to create highly personalized, convincing malicious emails that bypass traditional security measures—leading to greater financial losses for organizations worldwide.

As AI adoption among cybercriminals grows, the damage will only escalate. In fact, in a recent survey by Abnormal, 91% of security professionals reported experiencing AI-enabled cyberattacks in the previous six months—showcasing how quickly this threat is increasing.

In this blog, we’ll explore how AI is fueling the evolution of business email compromise attacks, the tactics threat actors are using, and—most importantly—how organizations can defend themselves.

BEC Is Evolving Faster Thanks to AI

Business email compromise relies on social engineering tactics and impersonation, with attackers posing as executives, vendors, or trusted contacts to manipulate employees into transferring funds or divulging sensitive information. Initially recognized as an emerging threat in 2018, BEC is quickly becoming even more effective today with the rise of AI.

Threat Actors Can Scale Email Threats Like Never Before

Generative AI tools have given cybercriminals the ability to craft highly convincing and personalized emails at scale.

Legitimate tools like ChatGPT have built-in safeguards to prevent malicious use, but the right prompts can bypass these protections. Additionally, over the past two years, multiple uncensored AI chatbots—and even a large language model (LLM) built explicitly for threat actors—have surfaced. These empower attackers to mimic real business communications and avoid the telltale signs of malicious emails, such as poor grammar and spelling errors.

This increased automation means bad actors can easily:

  • Create personalized email messages that closely match a company's (or an individual’s) communication style.

  • Impersonate vendors and employees with near-perfect accuracy.

  • Generate fraudulent invoices that appear authentic.

  • Scale their campaigns to target multiple employees simultaneously.

The Rise of Cross-Platform AI Attacks

Unfortunately, AI-powered cybercrime isn’t limited to email. Attackers now work across channels, combining email, voice, and even deepfake video to trick employees into approving fraudulent transactions.

For example, an attacker may send a BEC email impersonating a CEO, then follow up with a deepfake audio or video message reinforcing the urgency of the request. With AI-generated deepfakes becoming more realistic, it’s easier than ever for cybercriminals to convince employees to act without hesitation.

Cybercriminals Are Coding Smarter with AI

Beyond business email compromise, AI is also being used to:

  • Build sophisticated malware that evades detection.

  • Create malicious websites that look nearly identical to legitimate login pages.

  • Design cybercrime kits that automate attacks with tools like WormGPT and FraudGPT—malicious alternatives to mainstream AI models.

With these advancements, even low-skilled threat actors can execute high-impact attacks, making traditional cybersecurity measures less effective.

Security Teams Struggle to Keep Up

One of the biggest challenges for organizations is that BEC attacks don’t have the usual red flags. Legacy defenses rely on rule-based detections that look for known indicators—such as misspellings, suspicious sender addresses, and malicious payloads. But with AI, attackers can generate brand-new, unique attempts that slip past security filters.
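To make that gap concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical keywords, domains, and scoring—not any vendor’s actual detection logic) of how a legacy rule-based filter evaluates email. A fluent, payload-free, AI-written BEC message from an unfamiliar look-alike domain trips none of the rules:

```python
# Minimal sketch of a legacy, rule-based email filter. Everything here is
# hypothetical and simplified for illustration only.

SUSPICIOUS_KEYWORDS = {"kindly do the needful", "urgnet wire", "transfer immediatly"}
KNOWN_BAD_DOMAINS = {"paypa1.com", "micros0ft-support.net"}

def rule_based_score(sender_domain: str, body: str, has_attachment: bool) -> int:
    """Count known indicators; a higher score means a more suspicious email."""
    score = 0
    if sender_domain in KNOWN_BAD_DOMAINS:                     # known-bad sender list
        score += 2
    if any(k in body.lower() for k in SUSPICIOUS_KEYWORDS):    # misspellings, scam phrases
        score += 1
    if has_attachment:                                         # payload-based checks
        score += 1
    return score

# An AI-written BEC email: fluent, payload-free, and sent from a look-alike
# domain the filter has never seen. Every rule passes, so it scores 0.
body = ("Hi Dana, I'm about to step into a board meeting. Please update the "
        "wire details for today's vendor payment and confirm by 3 PM.")
print(rule_based_score("acme-corp-billing.com", body, has_attachment=False))  # -> 0
```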

How to Fight Back Against AI-Powered BEC

Defending against AI-driven business email compromise requires a proactive, multi-layered approach. Here’s how organizations can stay ahead:

  • Implement AI-Powered Security Solutions: Legacy security solutions struggle against modern AI threats. Organizations need advanced, AI-driven security platforms like Abnormal Security that can detect subtle behavioral anomalies in email communication (see the sketch after this list).

  • Train Users to Recognize the Signs: Encourage employees to verify unexpected requests for fund transfers or sensitive data, especially when urgency is emphasized.

  • Perform Social Engineering Penetration Testing: Regularly test employees with simulations to improve their ability to recognize and report BEC attempts.
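
For intuition only, here is a simplified sketch of what behavioral-anomaly-style detection can look like, using hypothetical identities and rules rather than Abnormal Security’s actual models: a message is flagged when its display name matches a known identity but arrives from an address never seen for that identity, or when the request pattern is unusual for the sender.

```python
# Simplified illustration of behavioral anomaly detection for BEC.
# All identities, addresses, and rules are hypothetical examples.

from dataclasses import dataclass

# Baseline of "known good" identities, e.g., learned from historical mail flow.
KNOWN_IDENTITIES = {
    "Jane Smith (CEO)": {"jane.smith@example.com"},
    "Acme Supplies Billing": {"billing@acmesupplies.com"},
}

@dataclass
class Email:
    display_name: str
    from_address: str
    body: str

def behavioral_flags(email: Email) -> list[str]:
    """Return anomalies relative to the learned baseline rather than static rules."""
    flags = []
    known_addresses = KNOWN_IDENTITIES.get(email.display_name)
    if known_addresses and email.from_address not in known_addresses:
        flags.append("display name matches a known identity, but the address is new")
    if "wire" in email.body.lower() and "urgent" in email.body.lower():
        flags.append("urgent financial request, unusual for this sender")
    return flags

msg = Email(
    display_name="Jane Smith (CEO)",
    from_address="jane.smith@secure-mail-login.com",
    body="Urgent: please wire $48,500 to the new vendor account before noon.",
)
print(behavioral_flags(msg))
```

Unlike static rules, these checks key off deviations from an organization’s learned baseline, which is why this style of detection can catch fluent, payload-free messages that look clean on their own.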

The Bottom Line: AI-Powered Attacks Need Powerful AI Solutions

AI is fundamentally changing the landscape of cybercrime, making business email compromise more scalable, sophisticated, and effective. Organizations that rely on traditional security measures are increasingly vulnerable, as AI-powered threats can bypass conventional detection methods.

Abnormal Security offers a next-generation approach to email security—leveraging AI to analyze identity, behavior, and context in real time to stop these attacks before they reach employee inboxes. In doing so, it can detect attacks that other solutions miss and keep humans protected from the attacks targeting them.

See how Abnormal Security can protect your organization from AI-driven BEC attacks. Request a demo today.
