AI vs. AI: What Attackers Know Could Hurt You

Uncover the dangers of AI-driven scams. Our ethical hacker demonstrates real-time social engineering attacks, highlighting essential cybersecurity strategies for 2025.
February 10, 2025

We hired an ethical hacker to stage real-time social engineering attacks with AI powering the engine. The results confirm that it's a golden age for scammers… and show you exactly where to put your cyber security budget in 2025.

Anyone who has worked at a major organisation has likely undergone training on how to spot business email compromise (BEC)—deceptive messages that impersonate senior executives or trusted external partners to trick employees into transferring funds, downloading malware-laden files, or sharing sensitive data. Unlike mass phishing campaigns, BEC attacks are highly personalised for the recipient. They often involve weeks of reconnaissance on the target, creating a significant time and cost barrier for scammers.

But what if I told you that generative AI can slash 40 hours of manual work down to just two minutes? And what if the spoofed emails it creates are more advanced, harder to spot, and significantly more dangerous than ever before?

Well, steel your nerves because that's exactly where scammers are at right now. To prove it, we brought an ethical hacker along to a recent roadshow event in London—which may be coming to a city near you soon! Before a live audience and using currently available GenAI tools, he showed how bad actors can build and automate social engineering attacks at scale with alarming ease.

1,200 Times the Speed of Human Hackers

Spear phishing has long been a thorn in the side of security teams. These attacks, which exploit human psychology, are escalating at an alarming rate—rising 300% year over year.

In fact, as many as 88% of data breaches are caused by employees inadvertently handing over passwords and other credentials. Unfortunately, today's scammers no longer need to break in—they log in.

Historically, the one limiting factor for spear phishing was time. It takes roughly 40 hours of research to compile an open source intelligence (OSINT) report, which is a detailed dossier about a single victim gleaned from various public (and sometimes harder-to-find) sources. Now, with AI embedded in the threat actor's arsenal, the research phase has become almost instant: our hacker was able to generate a highly targeted OSINT report in under two minutes.

Speed has major implications for the scale of BEC attacks. To date, they have been relatively low volume compared to other types of scams, since a bad actor could realistically only attack one or two targets at a time.

But a two-minute OSINT dossier changes the playing field.

Now, any scammer equipped with free tools and minimal effort can transform one-off social engineering attacks into mass-produced operations.

Imitation is the Sincerest Form of Hackery

While scalability is a major issue, AI also makes attacks harder to detect, because it makes them more convincing.

Take, for example, the tactics used by a BEC group that is known for impersonating big-name law firms to trick recipients into approving overdue invoice payments. The group uses typosquatting—fake domains resembling genuine law firm sites—to send out professional-looking emails via an address that appears legitimate at first glance. If recipients hesitate, the hackers follow up with a new email mimicking a company executive and “authorising” the employee to proceed with the payment.
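
To make the typosquatting risk concrete, here is a minimal sketch of the kind of check a defender can run: comparing a sender's domain against a small allowlist of known-good domains using edit distance. The allowlist, threshold, and domain names below are illustrative assumptions, not a production rule.

```python
# Minimal sketch: flag sender domains that are suspiciously close to,
# but not identical to, domains you actually do business with.
# The allowlist and threshold are illustrative assumptions.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

KNOWN_GOOD = {"bigfirmlaw.com", "acme-legal.co.uk"}  # hypothetical partners

def is_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """True if the domain nearly matches a trusted one without being it."""
    if sender_domain in KNOWN_GOOD:
        return False
    return any(levenshtein(sender_domain, good) <= max_distance
               for good in KNOWN_GOOD)

print(is_lookalike("bigf1rmlaw.com"))   # True:  likely typosquat
print(is_lookalike("bigfirmlaw.com"))   # False: exact trusted match
```

Real detection goes far beyond edit distance (homoglyphs, domain age, registration data), but the principle holds: a near-miss on a trusted name is a red flag, not a coincidence.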

This type of multi-layered scam is made much easier with AI.

In our hacker’s live phishing simulation, AI not only suggested the best type of attack for manipulating a particular victim, based on the information it had gathered about them, but also automated the creation of a highly convincing attack infrastructure. Lookalike domains and email environments were spun up in seconds. And they were so fine-tuned and realistic that they would be virtually undetectable using traditional security measures.

Deepfakes Are Here, and They're Terrifyingly Real

But maybe you’re thinking: of course this is all true, but I’ve trained my users… they know to verify requests through a second channel before approving payments or changing banking details. That may well be true. But deepfakes are on the rise.

Barely three years ago, scammers needed 20,000 images and substantial computing power to create a believable deepfake. Today, all it takes is 10 seconds of audio to clone a voice and a single image to generate a simulated video. Our ethical hacker demonstrated this using my likeness, creating a live deepfake so convincing that even I couldn’t tell it wasn’t real. Imagine the consequences if this were used during one of those ‘verifications’ you’ve trained your users to do.

Plus, deepfakes are becoming more convincing by the day. Generative adversarial networks (GANs) are trained on collected data to learn facial features, expressions, and movements, and can produce synthetic images that look exactly like the target person's face. Our hacker could even adjust lighting dynamically, making the videos appear natural and fooling the human eye. With this level of sophistication, organisations can no longer rely on traditional trust policies.
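
For readers curious about the mechanics, here is a toy PyTorch sketch of the adversarial loop that gives GANs their name: a generator learns to fool a discriminator while the discriminator learns to catch it. It trains on a one-dimensional Gaussian rather than faces, purely to illustrate the dynamic; this is nowhere near a deepfake pipeline, and every value in it is an assumption for demonstration.

```python
# Toy GAN: the generator tries to mimic samples from N(4, 1.25);
# the discriminator tries to tell real samples from generated ones.
# Illustrative only; real deepfake models are vastly larger.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data
    fake = G(torch.randn(64, 8))             # generated data

    # Discriminator: label real samples 1, generated samples 0
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call fakes "real"
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated mean should drift toward the real mean of 4.0
print(f"generated mean ~ {G(torch.randn(512, 8)).mean().item():.2f}")
```

Scale that same tug-of-war up to millions of parameters trained on faces, and you get output that can be tuned, lighting and all, until it fools the human eye.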

Where To Put Your 2025 Budget To Stop AI-Generated Attacks

BEC attacks are a multi-billion-dollar problem. According to the FBI, BEC fraud was responsible for $2.9 billion in losses in 2023—far surpassing ransomware, which gets far more media attention.

This significant financial damage stems from the Trojan-horse nature of these scams. The direct financial losses suffered when employees are tricked into making large wire transfers are just the tip of the iceberg. When attackers gain authenticated access to systems, they can move freely through your payroll, intellectual property, client data, and so on, for as long as the compromised credentials remain valid.

These numbers were collected as AI was just taking off. So, as costly as attacks are now, they're about to get much worse. For me, if there’s one key takeaway from our hacking experiment, it’s this: wherever a choice exists, hackers have an opportunity to exploit it.

Human users can choose to click on a link, scan a QR code, or reply to an email—and they can be manipulated into making the choice the attacker wants them to make. AI enables attackers to find those opportunities and exploit them relentlessly, at unprecedented speeds and scales.

In other words, if your cyber security defences kick in only after the employee has a chance to make a choice, you're already too late.

In response, the big trend we're seeing for 2025 is proactive defence systems—solutions that intercept and neutralise threats before employees are even presented with a choice. To help combat these AI-powered threats, focus your budget on BEC detection tools that deliver in these three key areas:

#1: Use "good" AI to battle "bad" AI.

AI might supercharge the scale and credibility of BEC attacks, but it can also help thwart them. “Good” AI analyses millions of data points to baseline the normal behaviour of your users, then flags even subtle deviations from that baseline, making it your best weapon against socially engineered emails.

It does this with as much speed and accuracy as “bad” AI. The faster the AI can neutralise threats—before employees even see them—the better. No choice = no opportunity for manipulation.
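
As a rough illustration of the "baseline, then flag deviations" idea (not Abnormal's actual model), here is a sketch using scikit-learn's IsolationForest on a few per-message features. The feature set and the synthetic sending history are assumptions made purely for demonstration.

```python
# Sketch: learn a user's "normal" email behaviour, then score new messages.
# Features (all hypothetical): hour sent, recipients, links, attachments.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history: this user mails mid-morning, few recipients, few links.
history = np.column_stack([
    rng.normal(10, 1.5, 500),   # hour of day
    rng.poisson(2, 500),        # recipient count
    rng.poisson(1, 500),        # links in body
    rng.integers(0, 2, 500),    # has attachment
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

typical = [[10, 2, 1, 0]]   # looks like every other message
odd     = [[3, 40, 12, 1]]  # 3 a.m., 40 recipients, 12 links
print("typical:", model.predict(typical))   # expected [ 1] -> normal
print("odd:    ", model.predict(odd))       # expected [-1] -> anomaly
```

The real value is in the breadth of the baseline: the more dimensions of normal behaviour a model learns, the smaller the irregularity it can catch.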

#2: Modern threats need modern tools.

Traditional secure email gateways (SEGs) might scan around 1,000 signals to decide whether to flag an email as malicious, making them ill-equipped to combat AI-augmented adversaries. Attackers now create fresh, dynamic signals with every delivery attempt, outpacing outdated tools—even those retrofitted with AI.

Your best bet is to look for tools that are built with AI at their core and can process tens of thousands of real-time signals. For instance, Abnormal’s email detection engine tracks 40,000+ signals per email to stop what SEGs miss. As an AI-native solution, it’s designed to learn as it goes. Put your budget into tools like this, and you get greater protection over time for no additional expenditure.
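
To give a feel for what "signals" means in practice, here is a small, hypothetical sketch that pulls three classic BEC tells out of raw message headers with Python's standard email library. This is not Abnormal's implementation, and three signals alone are not a detector; a real engine weighs tens of thousands of them together.

```python
# Sketch: extract a few classic BEC signals from raw message headers.
# Illustrative only; a real engine combines tens of thousands of signals.
from email import message_from_string
from email.utils import parseaddr

RAW = """\
From: "Jane Doe (CEO)" <jane.doe@gmail.com>
Reply-To: payments@bigf1rmlaw.com
To: accounts@example.com
Subject: Urgent wire transfer

Please process the attached invoice today.
"""

msg = message_from_string(RAW)
display_name, from_addr = parseaddr(msg["From"])
_, reply_addr = parseaddr(msg.get("Reply-To", ""))

signals = {
    # Executive-sounding display name on a free webmail address
    "exec_name_on_freemail": "ceo" in display_name.lower()
        and from_addr.split("@")[-1] in {"gmail.com", "outlook.com"},
    # Replies silently routed to a different domain than the sender's
    "reply_to_mismatch": bool(reply_addr)
        and reply_addr.split("@")[-1] != from_addr.split("@")[-1],
    # Pressure language in the subject line
    "urgency_keyword": any(w in msg["Subject"].lower()
                           for w in ("urgent", "immediately", "asap")),
}
print(signals)  # all three fire on this crafted example
```

Each signal is cheap on its own; the protection comes from evaluating thousands of them, in real time, on every message.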

#3: A self-managing system saves time and resources.

Currently, there's an upside-down relationship in email security: attack volume is skyrocketing while security resources are diminishing, and policy-based solutions are part of the problem. They require an army of analysts to keep them running, yet your ability to defend against an emerging email threat is limited by how fast you can operationalise a policy to address it. It's the equivalent of relying on a hand-operated stopwatch in a world that demands always-on sensors.

What CISOs need right now are low-maintenance, low-overhead solutions that "just work" for every attack AI can throw at them. Solutions that automatically detect threats as soon as they're identified, without the need for human intervention, save hundreds of hours per week in security team and end-user time.

Intervene Early, Spend Wisely

AI is pushing the envelope for hackers and other bad actors, but it's also giving security professionals a powerful new weapon. In 2025, the biggest threat will be to organisations that ignore the writing on the wall and continue to pour resources into solutions that are no match for AI-driven attacks.

The good news? You don’t need to anticipate every type of attack to protect your organisation. By investing in AI-native, self-learning, and self-managing tools, your defences can evolve as quickly as the threats.

In the battle of humans versus AI, fighting AI with AI is how you win.

Interested in learning more about Abnormal's AI platform? Schedule a demo today!
