Ethical Hacker Kevin Poulsen Demonstrates How Threat Actors Can Exploit Generative AI

See firsthand how generative AI empowers attackers to create convincing phishing emails and scale operations for widespread success.

By exploiting the ability of generative AI to create realistic content, attackers can now craft personalized messages that bypass traditional security measures and entice even the most cautious recipients to click on malicious links.

In Chapter 1 of our Convergence of AI + Cybersecurity web series, ethical hacker Kevin Poulsen demonstrated the potential dangers of generative AI in the wrong hands. Kevin showed how AI-powered chatbots like Bard can gather real-time information, giving attackers a deeper understanding of a target's background and interests. He also explained how large language models, such as Meta's Llama 2, can be harnessed to automate the creation of highly tailored phishing emails.

Watch the video to discover the game-changing impact of generative AI as it enables attackers to scale up their operations and reach a virtually unlimited number of potential victims with alarming success rates.

To learn even more about how hackers can weaponize generative AI, watch the full webinar on demand.

Watch the Webinar

To see how Abnormal can help your organization block modern threats, reduce spend, and prevent emerging attacks, schedule a demo.

Schedule a Demo
