Uncovering AI-Generated Email Attacks: Real-World Examples from 2023

See how attackers are using generative AI in their email attacks to bypass email security solutions and trick employees.
December 19, 2023

The past year witnessed revolutionary advancements in generative artificial intelligence, or generative AI. Shortly after launch, these platforms gained significant popularity, with ChatGPT reaching 1 million users in just five days. The unfortunate news, especially for those in the information security space, is that the accessibility of generative AI has also created opportunities for cybercriminals to exploit the technology to create sophisticated cyberthreats, often with email as the first attack vector.

To illustrate how AI is being weaponized, we’ve collected real-world examples of likely AI-generated malicious emails our customers have received in the last year. These examples point to a startling conclusion: threat actors have clearly embraced the malicious use of AI. This also means that organizations must respond in kind—by implementing AI-powered cybersecurity solutions to stop these attacks before they reach employee inboxes.

How and Why AI Is Being Weaponized for Email Attacks

Previously, many cybercriminals relied on formats or templates to launch their campaigns. As a result, a large percentage of attacks share common indicators of compromise, such as the same domain name or the same malicious link, that traditional security software can detect. Generative AI, however, allows scammers to craft unique content in milliseconds, making detection that relies on matching known malicious text strings far more difficult.
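To illustrate the weakness of string matching, here is a minimal, hypothetical sketch of signature-based filtering; the signature and message text are invented for the example:

```python
# Sketch of signature-based filtering: a blocklist of known malicious
# strings catches template reuse but misses uniquely worded rewrites.
KNOWN_BAD_STRINGS = [
    "your account has been suspended, click here",  # hypothetical signature
]

def is_flagged(email_body: str) -> bool:
    """Flag an email if it contains any known malicious string."""
    body = email_body.lower()
    return any(sig in body for sig in KNOWN_BAD_STRINGS)

# A reused template is caught...
template_attack = "URGENT: Your account has been suspended, click here to verify."
print(is_flagged(template_attack))  # True

# ...but an AI-generated rewrite of the same lure slips through.
ai_rewrite = "We noticed unusual activity and have paused access; please verify."
print(is_flagged(ai_rewrite))  # False
```

Every AI-generated variant of the lure requires a new signature, which is why defenses built on "known bad" fall behind.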

Generative AI can also be used to significantly increase the overall sophistication of social engineering attacks and other email threats. For instance, bad actors can abuse the ChatGPT API to create realistic phishing emails, polymorphic malware, and convincing fraudulent payment requests. And even as OpenAI has placed limits on what ChatGPT can produce, cybercriminals have responded by creating their own malicious forms of generative AI. WormGPT, for example, lacks the guardrails that prevent legitimate tools from being used unethically, and FraudGPT is a subscription-based platform that uses refined machine learning algorithms to generate deceptive content.

Real-World Attacks (Likely) Generated by AI

Over the past year, Abnormal detected a number of attacks that were likely generated by AI. As with any AI-generated content, it is nearly impossible to determine with 100% certainty whether an attack was created by AI. However, tools like CheckGPT provide strong indications of AI involvement. Below you’ll find several examples of attacks caught by Abnormal, with additional examples available in our latest research report.

Attacker Poses as Insurance Company to Attempt Malware Delivery

In this malware attack, the threat actor poses as an insurance representative and informs the recipient that the attached file contains benefits information, as well as an enrollment form that must be completed in its entirety and returned. If the recipient fails to do so, they are told they may lose coverage.

The perpetrator uses a seemingly genuine display name (“Customer Benefits Insurance Group”) and sender email (“alerts@pssalerts[.]info”), but replies are redirected to a Gmail account controlled by the attacker. Despite the professional facade, our platform determined that the attachment likely contains malware, putting the recipient's computer at risk of viruses and credential theft.
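The mismatch between the sender domain and the reply address in this example is the kind of signal that can be checked mechanically. Below is a minimal sketch using Python's standard email library; the headers are hypothetical, modeled on the attack described above:

```python
from email.message import EmailMessage
from email.utils import parseaddr

def reply_to_mismatch(msg: EmailMessage) -> bool:
    """Return True when the Reply-To domain differs from the From domain."""
    from_domain = parseaddr(msg["From"])[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", msg["From"]))[1].rpartition("@")[2].lower()
    return reply_domain != from_domain

msg = EmailMessage()
msg["From"] = "Customer Benefits Insurance Group <alerts@pssalerts.info>"
msg["Reply-To"] = "attacker.mailbox@gmail.com"  # hypothetical reply address

print(reply_to_mismatch(msg))  # True
```

A mismatched Reply-To is not proof of an attack on its own, but it is one of many signals a detection engine can weigh.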

[Image: the AI-generated insurance phishing email]

Analysis of the attack, powered by the Giant Language Model Test Room (GLTR), shows the likelihood of it being generated by AI. The model color-codes each word based on how likely it would be predicted given the context to its left. Green indicates a word is one of the top 10 predicted words, while yellow indicates a top 100 predicted word. Words in red rank among the top 1,000 predicted words, and all other words are shown in purple.

[Image: GLTR analysis of the insurance email]

As you can see, the majority of the text is highlighted green, indicating that it was likely generated by AI rather than created by a human. You’ll notice that there are also no typos or grammatical errors—signs that have historically been indicative of an attack.
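GLTR's rank-bucketing scheme can be sketched in a few lines. In this hypothetical example, the word ranks are assumed rather than computed; a real implementation would query a language model for the rank of each word given its left context:

```python
# Sketch of GLTR-style rank bucketing: each word is colored by its rank
# in a language model's predicted distribution for that position.
def bucket(rank: int) -> str:
    if rank < 10:
        return "green"   # one of the top 10 predicted words
    if rank < 100:
        return "yellow"  # top 100
    if rank < 1000:
        return "red"     # top 1,000
    return "purple"      # everything else

# Assumed ranks for illustration only; a real tool would derive these
# from a language model such as GPT-2.
assumed_ranks = {"please": 3, "review": 7, "the": 0, "attached": 42, "unusual": 25000}
colors = {word: bucket(rank) for word, rank in assumed_ranks.items()}
print(colors)
```

Text where most words land in the green bucket closely matches what a language model would predict, which is the basis for suspecting machine generation.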

Netflix Impersonator Compromises Legitimate Domain in Credential Phishing Attack

In this phishing attack, the threat actor poses as a customer service representative from Netflix and claims that the target’s subscription is expiring. To continue service, the recipient is told they need to renew their subscription using the provided link. However, the URL leads to a malicious site where sensitive information is at risk.

The attacker employs social engineering to create a sense of urgency. They also add sophistication to the scam by leveraging what appears to be an authentic helpdesk domain associated with Teeela, an online toy shopping app. The use of an email hosted on Zendesk, a trusted customer support platform, may deceive recipients into thinking the email is legitimate and thus increase the attack's effectiveness.

[Images: the Netflix impersonation email and its GLTR analysis]

Again, the majority of the text is highlighted green, indicating that it was likely generated by AI rather than written by a human. One point of interest: the analysis also flagged the phone number, which the attacker appears to have neglected to update to a legitimate-looking one.

Cosmetics Brand Impersonator Attempts Invoice Fraud

In this billing account update attempt, the attacker poses as a business development manager for cosmetics company LYCON and informs the recipient of irregularities in their balance sheet noticed during a mid-year audit. They explain that due to a crash during a system upgrade, they no longer have access to account statements and must now request all open or overdue invoices. The attacker also advises a halt to payments to previous accounts and promises new banking details for future transactions once the audit is complete.

The scam aims to extract sensitive financial information and reroute payments to the attacker’s bank account. No links or attachments are present, and the email is written in an official tone, utilizing several social engineering techniques to deceive the recipient.

[Images: the LYCON impersonation email and its GLTR analysis]

Once again, the majority of the text is highlighted green, indicating that it was likely generated by AI rather than written by a human. Admittedly, there is more red here than in the previous examples, likely because this attack uses more formal language than average.

Stopping AI-Generated Email Attacks

Because these emails are often sent from a legitimate email service provider, are text-based, and rely on social engineering to compel the recipient to take action, traditional email security solutions struggle to detect them as attacks. As a result, they land in employee inboxes, where employees are forced to decide whether or not to engage. And with AI eliminating the grammatical errors and typos that were historically telltale signs of an attack, humans are more likely than ever to fall victim.

Although generative AI has only been widely used for a year, the potential for widespread abuse is obvious. For security leaders, this is a wake-up call to prioritize cybersecurity measures to safeguard against these threats before it is too late. The attacks shown here are well-executed, but they are only the beginning of what is possible.

We’ve reached a point where only AI can stop AI, and where preventing these attacks and their next-generation counterparts requires AI-native defenses. To stay ahead of threat actors, organizations must look to email security platforms that rely on known good rather than known bad. By understanding the identity of the people within the organization and their normal behavior, the context of the communications, and the content of the email, AI-native solutions can detect attacks that bypass legacy solutions. In fact, this is the only way forward—it is still possible to win the AI arms race, but only if security leaders act now to prevent these threats.
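As a rough illustration of "known good" modeling, the sketch below tracks which senders normally email a user and flags messages from senders with no prior history. All addresses and thresholds are hypothetical, and a real platform would model far richer behavioral signals:

```python
from collections import Counter

class SenderBaseline:
    """Toy model of 'known good': learn which senders normally email a
    user, then flag messages from senders with little or no history."""

    def __init__(self):
        self.history = Counter()

    def observe(self, sender: str) -> None:
        """Record one legitimate message from a sender."""
        self.history[sender.lower()] += 1

    def is_anomalous(self, sender: str, min_prior_messages: int = 3) -> bool:
        """Flag senders seen fewer than min_prior_messages times."""
        return self.history[sender.lower()] < min_prior_messages

baseline = SenderBaseline()
for _ in range(10):
    baseline.observe("colleague@example.com")  # normal traffic

print(baseline.is_anomalous("colleague@example.com"))  # False
print(baseline.is_anomalous("alerts@pssalerts.info"))  # True: no history
```

Because this approach models normal behavior rather than matching known malicious content, it does not depend on having seen an attacker's exact wording before.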

To discover more, including how Abnormal stopped each individual attack, download the white paper: AI Unleashed: 5 Real-World Email Attacks (Likely) Generated by AI in 2023.
