Our Shield Against Bad AI Is Good AI… But Are Your Vendors AI-Native or AI-Hype?

Explore how AI-native security like Abnormal fights back against AI-powered cyberattacks, protecting your organization from human-targeted threats.
November 11, 2024

Artificial intelligence has opened a Pandora’s box of opportunity for cyber criminals. But far from being a gangster’s paradise, AI could usher in a new era of cyber defence and automated security operations, as long as it’s the right kind of AI.

You might not know it, but your company has a new adversary.

Not the typical adversary that poaches your talent or pips you to the post with a new release, but one that uses bots and hackers to slip through chinks in your cyber defences.

Their trump card? Artificial intelligence (AI).

The Modern Attacker Doesn’t Break In—They Log In

Humans are the primary attack vector for today’s cyber criminals: more than 90% of all successful cyber attacks start with a phishing email. And according to the UK government’s Cyber Security Breaches Survey, 83% of British businesses that experienced a cyber attack in 2022 said phishing was the cause.

The volume of attacks is exploding: between April 2023 and April 2024, phishing incidents targeting European enterprises shot up by 91.5%. And this isn't just hitting smaller players. In July 2024, someone impersonating a legitimate supplier defrauded a Singapore-based commodity firm out of over $40 million via a classic business email compromise scam.

Email-based social engineering attacks are tricky to defend against because we’re wired to trust one another. You can have the best software, filters, and patches, but they won’t help if a human is tricked into handing over the keys to your castle.

AI Is the Ultimate Tool—for Both Sides

Until now, social engineering hasn’t been the easiest cyber crime to commit. It took time and meticulous research to profile a target, impersonate someone they trust, and craft a convincing message that tricked them into acting.

But times have changed.

Now the world’s awash with generative AI, which can research anything you like in the blink of an eye and deliver detailed insights at scale. That’s great for open access to knowledge. But in the wrong hands, stripped of its ethical safeguards, it’s also a potential weapon.

In the age of AI, threat actors don’t need to be skilled to launch hyper-targeted attacks. They just get generative AI to profile the target and write a detailed, convincing email. For any hacker, an attack is worth a try when it takes minutes, not weeks.

Sounding the alarm on GenAI, the National Cyber Security Centre has made it clear that the more intelligent AI becomes, the harder scams will be to detect. Their research shows AI-generated phishing emails have higher open rates than manually crafted phishing emails. They also warn that it's a self-learning system: AI can analyse exfiltrated data and use it to train AI models. So it's not just the rising volume of attacks we should have eyes on, but also their precision and impact.

So, how do we counteract this fresh challenge?

The answer lies in AI itself. Only good AI is powerful enough to match the speed and agility of this next-gen adversarial AI. And cyber security analysts who don’t use it defensively can quickly find themselves outsmarted by AI-aided attackers.

Don't just take my word for it: in the last three months, some 80% of cyber security companies have become “AI companies” because they’ve had the same realisation.

But there's a catch.

Not all AI is created equal, and there are more variables at play here than you might think.

No One Wants Security Tech They Have to Check Up On

Let’s zoom out for a moment and talk about a broader issue in cyber security: problems need solutions that take action, not ones that merely alert someone to follow up.

I learned this in 2007, when a little-known company called Palo Alto Networks came to the managed service provider I worked for and showed us a new way of providing firewalls: application-layer visibility, a radically different approach from the IP-table configurations we used. I remember our head of R&D saying, “If I built a firewall from scratch, that’s how I’d do it.”

What was special about the solution? Honestly, it just worked. Customers got better visibility into threats without increasing procurement costs or the number of employees needed to make sense of it all. The product was less about ones and zeros, and more about users and applications.

A couple of years later, CrowdStrike did the same thing in the endpoint space. Their lightweight agent architecture took endpoint security beyond heavyweight software that burdened end-user devices: an invisible protective layer, working in real time in the background.

This is the kind of approach we take today at Abnormal, with email security. Because for technology to be valuable, it must be consumable. Time-stretched CISOs don’t need alerts and flashing lights; they need technology that takes action and solves the problem it’s designed to solve.

How Do We Fight What We Can’t Predict?

Still, cyber security’s biggest challenge is that you can’t fight what you don’t know, and no one knows what the next big threat will be. Last year, QR code phishing (quishing) emerged as a major headache for security teams. The year before, it was payloadless attacks.

Manipulation-based exploits like these are unpredictable by nature. So you rely on your system to flag irregularities (there’s that flashing light). Then your teams have to get to work, investigating whether each one is a false positive or a legitimate risk.

There are a few issues with that approach. First, manually filtering the signal from the noise can take weeks. There could be thousands of potential deviations to investigate, so how do you know which reports to prioritise? (One answer, risk-based ranking, is sketched after this list.)

Second, traditional user education ironically adds to the burden. Training your employees to spot and report suspicious emails is better than nothing, but an analyst still has to triage and investigate every report. What if that comes at the expense of more serious events, or more valuable work?

Third, analysts often close tickets by sending users a templated response—e.g., “This was a phishing email,” or “No need to worry.” They have to work like this to manage volume. But this doesn't really teach users anything or reduce their likelihood of falling for scams in the future. That’s a major missed opportunity.

But this is what happens when cyber security solutions don’t fully solve the problem they’re meant to. It creates inefficiencies that CISOs could really do without.
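To make that risk-ranking idea concrete, here’s a deliberately simple Python sketch. Everything in it is invented for illustration: the signal names, the weights, and the scoring rule. A real system would learn these from data rather than hard-code them.

```python
# A toy sketch of risk-based triage: score user-reported emails so analysts
# work the riskiest reports first, instead of wading through the queue in
# arrival order. Signal names and weights are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ReportedEmail:
    subject: str
    # Signals the detection stack observed for this message.
    signals: list[str] = field(default_factory=list)

# Hypothetical weights: how strongly each signal correlates with real attacks.
SIGNAL_WEIGHTS = {
    "new_sender": 0.3,
    "lookalike_domain": 0.9,
    "urgent_language": 0.4,
    "payment_request": 0.8,
    "reply_to_mismatch": 0.7,
}

def risk_score(report: ReportedEmail) -> float:
    """Sum the weights of observed signals, capped at 1.0."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in report.signals))

def triage_queue(reports: list[ReportedEmail]) -> list[ReportedEmail]:
    """Highest-risk reports first, so analysts never start with the noise."""
    return sorted(reports, key=risk_score, reverse=True)

if __name__ == "__main__":
    queue = triage_queue([
        ReportedEmail("Monthly newsletter", ["new_sender"]),
        ReportedEmail("Urgent: wire transfer", ["lookalike_domain", "payment_request"]),
    ])
    for r in queue:
        print(f"{risk_score(r):.2f}  {r.subject}")
```

The point isn’t the arithmetic; it’s that analysts start at the top of a risk-ordered queue, so the thousandth benign newsletter report never sits ahead of a live business email compromise attempt.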

Instead, We Need Good AI to Tackle Bad AI

In an age where bad actors use AI to bypass conventional behavioural detection techniques, organisations can use AI to sniff out abnormalities with just as much speed and accuracy, and beat these adversaries at their own game.

One of the standout features of Abnormal’s AI detection engine is its ability to analyse enormous amounts of behavioural signals and identify anomalies in no time. It operates at a scale far beyond the typical security operations environment and with the consistency teams need to trust it. It directs your analysts' attention to high-priority events based on intelligent risk assessment, so they can work smarter and faster.
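To give a feel for what analysing behavioural signals means, here’s a toy sketch of the underlying idea: baseline how a sender normally behaves, then score how far a new message deviates. The two features (send hour and recipient count) and the z-score model are assumptions chosen for brevity, not a description of Abnormal’s engine.

```python
# A minimal illustration of behavioural anomaly detection: learn a baseline
# of how a sender normally behaves, then score how unusual a new message is.
# This is a toy z-score model with two invented features, for illustration.
import statistics

class SenderBaseline:
    def __init__(self):
        self.send_hours: list[float] = []        # hour of day of past messages
        self.recipient_counts: list[float] = []  # recipients per past message

    def observe(self, hour: float, recipients: float) -> None:
        """Record one historical message for this sender."""
        self.send_hours.append(hour)
        self.recipient_counts.append(recipients)

    def anomaly_score(self, hour: float, recipients: float) -> float:
        """Average absolute z-score across features; higher = more unusual."""
        def z(history: list[float], value: float) -> float:
            if len(history) < 2:
                return 0.0  # not enough history to judge
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1.0  # guard against zero spread
            return abs(value - mean) / stdev
        return (z(self.send_hours, hour) + z(self.recipient_counts, recipients)) / 2

baseline = SenderBaseline()
for h, r in [(9, 3), (10, 2), (9, 4), (11, 3)]:  # typical weekday mail
    baseline.observe(h, r)

# A 3 a.m. blast to 250 recipients scores as a large deviation from baseline.
print(baseline.anomaly_score(hour=3, recipients=250))
```

In production, a system like this would track thousands of signals per identity and use far richer models, but the principle is the same: anomalies are deviations from a learned baseline, not matches against a static rule.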

But that's only one part of the story.

Like Palo Alto Networks and CrowdStrike before it, Abnormal has built its products from scratch to just work: specifically, to take instant action, hooking threats out of inboxes as they’re identified.

Your teams can investigate later if they want to, but the risk is gone. And clients tell us they've redeployed a big chunk of their team to proactive work, as the tool takes automatic action with zero negative impact on end users.

Our AI-native technology is also a self-improving, self-evolving system. As it learns users’ habits, your detection capabilities will improve without you having to program it. And I'm not just talking about the AI getting smarter—it helps your humans, too. The system trains end users on attack markers as they happen, so they can better retain these learnings, spot anomalies faster next time, and help reinforce your defences.

So, remember I mentioned most cyber security companies have recently become “AI companies”? Make sure it’s not just flashy marketing. An “AI-driven tool” that still requires policy configuration on the back end is not AI. It’s a policy-ware product. An “AI security company” that still needs an army of people sitting at the back end, monitoring user environments and making changes, is not an AI security company. It’s just an expensive service provider.

The quality of the engineering matters. The detection efficacy matters. The detail matters.

The volume, precision, and impact of cyber attacks are beyond anything we’ve seen before. Slapping AI onto 20-year-old gateway architectures carrying decades of technical debt cannot scale with the growing volume of attacks. And it won’t give you the power, intelligence, or “it just works” magic that AI-native technology brings.

Bottom line? The individual cyber security solutions you invest in matter a great deal. AI is a big umbrella, so look for tools that beat the bots at their own game.

Discover how Abnormal's AI-powered solution protects organisations from their biggest risk—humans. Learn more about the Abnormal Human Behavior AI Platform.
