Exploring the Power of Generative AI in Advanced Attacks

Learn how cybercriminals use AI for targeted social engineering attacks in this recap from Chapter 1 of our Convergence of AI + Cybersecurity series.
October 5, 2023

Artificial intelligence has been all over the news this past year as people discuss its positive and negative effects. Without a doubt, generative AI tools like ChatGPT can make us all more productive. Businesses in nearly every industry have been using this technology to speed up content creation, simplify coding, and more.

But just as generative AI can improve productivity and efficiency for legitimate professionals, it can do the same for cybercriminals. As a result, organizations are receiving a growing number of AI-generated attacks with a previously unseen level of sophistication.

To better understand the opportunities and risks created by this evolving technology, Abnormal has partnered with Fast Company, CrowdStrike, and GuidePoint Security to host a three-part webinar series titled The Convergence of AI and Cybersecurity.

Chapter one of this series, Facing Your Fears: How Attackers Can Use Generative AI, focused on what generative AI means for your organization and how hackers weaponize it to launch attacks. Here are a few key takeaways from the presentation.

Mixed Emotions Around Generative AI

Generative AI leverages machine learning to detect patterns within large bodies of data and produce new content based on prompts from the user. New use cases for these tools appear every day, but some of the more common examples include generating high-quality text copy, images, and code. Users love this technology because it’s efficient and easy to use.
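To make that prompt-in, text-out loop concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model. These are illustrative choices, not tools discussed in the webinar; commercial products use far larger models.

```python
# Minimal sketch of generative text: a prompt goes in, new content comes out.
# Assumes the Hugging Face "transformers" library and the small GPT-2
# checkpoint, chosen purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short product announcement for a new coffee maker:"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```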

But while generative AI has plenty of constructive uses, there’s potential for malicious use cases as well. “Generative AI is going to make everything easier for everybody in every way, and that's naturally going to include the bad guys,” said former black hat hacker Kevin Poulsen.

“I'm both excited and nervous about AI,” said threat researcher Ronnie Tokazowski. “We've seen AI do some really awesome things. But on the flip side, we also see cases where scammers can use it to generate really well-written emails that are difficult to detect.”

Our recent email security survey of 300 cybersecurity stakeholders found that 80% believe their organizations have been exposed to AI-generated email attacks.

Suspicious emails often get flagged by recipients and security tools due to poor spelling and grammar. But with a few prompts, generative AI creates error-free messages in seconds. Traditional security solutions are simply not up to the task.
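To see why, consider a toy version of that legacy approach: a hypothetical filter that scores an email by the share of words it can't find in a dictionary. This heuristic is invented for illustration, not how any particular gateway works. A typo-ridden scam scores high, but the same scam rewritten flawlessly by an LLM sails through:

```python
# Toy legacy-style heuristic: flag emails with many misspelled words.
# Hypothetical example only; real gateways combine many more signals.
KNOWN_WORDS = {"please", "wire", "the", "invoice", "payment", "to",
               "account", "by", "friday", "thanks"}

def misspelling_score(email_body: str) -> float:
    words = [w.strip(".,!?").lower() for w in email_body.split()]
    unknown = [w for w in words if w and w not in KNOWN_WORDS]
    return len(unknown) / max(len(words), 1)

# A clumsy, typo-ridden scam scores high...
print(misspelling_score("Plese wire the invoce paymnt to acount by friady"))
# ...but a polished, LLM-written version of the same scam scores zero.
print(misspelling_score("Please wire the invoice payment to account by Friday"))
```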

Thankfully, more sophisticated email security meets these challenges head-on by adopting machine learning and AI to increase efficiency and speed.

“Part of the struggle in cybersecurity is that there’s an unlimited amount of data that we have to sift through,” said Abnormal CISO Mike Britton. “And so if I have generative AI helping me piece together the various events and items that I see in a log, that's also helping me find problems faster.”
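As a rough illustration of the defender-side workflow Britton describes, here is a sketch that asks an LLM to correlate a handful of log events. It assumes the official OpenAI Python client (v1+) with an API key in the OPENAI_API_KEY environment variable; the model name and event format are illustrative, and this is not Abnormal's tooling.

```python
# Sketch: using an LLM to summarize and correlate security log events.
# Assumes the official "openai" Python client (v1+); the model name and
# event format are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_events = [
    "08:01 login success user=jdoe ip=203.0.113.7 country=US",
    "08:02 login success user=jdoe ip=198.51.100.9 country=RO",
    "08:03 mailbox rule created user=jdoe rule='delete all from finance'",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a SOC analyst assistant."},
        {"role": "user", "content": "Correlate these events and flag "
         "anything suspicious:\n" + "\n".join(log_events)},
    ],
)
print(response.choices[0].message.content)
```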

Using AI for Every Phase of an Attack

Generative AI excels at crafting convincing social engineering attacks because large language models are experts in writing. Additionally, with generative AI tools, threat actors can quickly and easily scale their attacks—rather than having to write these messages manually.

“The big change here is that attackers can automate very well-crafted, personalized attacks and scale them up,” said Poulsen. “They can let it run unattended and reach an unlimited number of victims and probably with a really high success rate.”

While some generative AI tools have built-in safeguards to prevent the malicious use of their products, these measures are fairly easy to circumvent by hosting the tools locally. Poulsen explains that hackers can download a version of Meta’s Llama 2, a powerful large language model, to their own computers.

“If you ask it outright to write a phishing email, it'll give you a long lecture about why phishing is bad and you should do something more constructive with your life,” said Poulsen. But by changing the prompts, it’s easy enough to get it to construct a phishing email anyway.

Even worse, hackers can use generative AI search tools, like Google Bard, to pull up-to-date information on their targets, which helps them create more convincing scams. For example, an attacker can identify additional individuals to target within an organization, or find personas inside or outside the organization to spoof.

“These chatbots are useful for basically every phase of social engineering attacks,” said Poulsen. “Starting with gathering information on your targets so you can incorporate that information into a targeted spear phishing email.”

From there, Poulsen explains, one could simply drop the information gleaned from Bard into Llama 2 to produce highly targeted phishing emails. The information regarding the target’s work experience, contacts, and current organization is accurate, making the scam all the more convincing.

“That didn’t take very long, and without me having to do any manual research,” said Poulsen. “This could be a one-click process, which means I could feed it a list of names and have it compose endless emails to people.”

Since social engineering attacks are largely a numbers game, the hacker can assume that some portion of targets will fall for the scam. This is especially scary when it comes to business email compromise (BEC) attacks, which attempt to convince targets to transfer money directly from a company’s bank account to the attacker.

“This would be pretty easy for even unskilled hackers,” said Poulsen. To make matters worse, threat actors can prompt the AI to generate messages in other languages, thus increasing their available pool of targets even more.

Fighting Back with Sophisticated Email Security

While generative AI tools like ChatGPT and Llama 2 can be misused, they are designed for legitimate use cases. Unfortunately, there’s a whole other class of generative AI tools built specifically for hackers, including FraudGPT and WormGPT.

“There’s a market for that and scammers want to purchase it,” said Tokazowski. “It's going to become a case of battle bots where you have the good versus the bad AI going back and forth at each other.”

The good news is that sophisticated email security solutions are evolving too. AI-based detection systems like Abnormal use machine learning to understand the signals of known-good behavior, creating a baseline for each user and each organization and then blocking emails that deviate from those norms.
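Abnormal's models are proprietary, but the core behavioral-baseline idea can be sketched in a few lines. In this deliberately simplified example, the "baseline" is just how often each sender has historically mailed each recipient, and anything outside that pattern gets flagged:

```python
# Highly simplified sketch of behavioral baselining; real detection
# systems model many more signals. Here the "baseline" is just the set
# of (sender, recipient) pairs observed in historical mail.
from collections import Counter

def build_baseline(historical_emails):
    """Count how often each sender has mailed each recipient."""
    return Counter((e["sender"], e["recipient"]) for e in historical_emails)

def is_anomalous(email, baseline, min_seen=3):
    """Flag mail from a sender the recipient has rarely or never seen."""
    return baseline[(email["sender"], email["recipient"])] < min_seen

history = [{"sender": "ceo@corp.com", "recipient": "cfo@corp.com"}] * 10
baseline = build_baseline(history)

print(is_anomalous({"sender": "ceo@corp.com", "recipient": "cfo@corp.com"}, baseline))   # False
print(is_anomalous({"sender": "ceo@c0rp.com", "recipient": "cfo@corp.com"}, baseline))   # True: lookalike domain
```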

“We are seeing real-world examples. And we can actually tell in post-detection if an email is likely to have been generated by generative AI,” said Britton.
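Britton didn’t detail how that post-detection analysis works, but one well-known heuristic from the research community is perplexity: text written by a language model tends to look unusually predictable to another language model. Here is a minimal sketch of that generic technique, not a description of Abnormal's detector:

```python
# One published heuristic for spotting machine-generated text: measure
# perplexity under a reference language model. LLM-written text tends to
# score lower (more predictable) than human prose. Generic illustration
# only; not Abnormal's method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("Per my last email, please process the attached invoice."))
```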

Britton says we are just at the cusp of what hackers can do with generative AI. Rather than let trepidation about this nascent technology fester, it’s time for organizations to face their fears and take steps to protect themselves against email-based threats.

For additional insights and to see a demo of what ChatGPT-generated threats can look like, watch the on-demand recording of Facing Your Fears: How Attackers Can Use Generative AI.
