Multi-Stage Phishing Attack Exploits Gamma, an AI-Powered Presentation Tool
AI-powered content generation platforms are reshaping how we work—and how threat actors launch attacks.
In this newly uncovered campaign, attackers weaponize Gamma, a relatively new AI-based presentation tool, to deliver a link to a fraudulent Microsoft SharePoint login portal. Capitalizing on the fact that employees may not be as familiar with the platform (and thus not aware of its potential for exploitation), threat actors create a phishing flow so polished it feels legitimate at every step.
This clever, multi-stage attack shows how today’s threat actors are taking advantage of the blind spots created by lesser-known tools to sidestep detection, deceive unsuspecting recipients, and compromise accounts.
Breaking Down the Gamma Phishing Attack
The attack starts with an innocuous-looking email. In the example analyzed in this blog post, the malicious email is sent from a legitimate, compromised email account belonging to the founder of a special education school.

The email includes a brief, generic message with an invitation to view the attachment. Our research found that filenames usually include the name of the company being impersonated. We also observed that the referenced document is always formatted to appear as a PDF attachment but is, in reality, just a hyperlink.
Should the recipient click on the purported PDF, they are redirected to a presentation hosted on Gamma, an AI-powered online presentation builder.
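For defenders, one simple way to surface this lure is to look for hyperlinks whose display text is dressed up as a document. The sketch below is illustrative only and assumes an HTML email body; the filename, URL, and heuristic are invented for the example and are not indicators from this campaign.

```python
# Minimal, illustrative sketch: flag hyperlinks whose visible text poses as a
# PDF attachment. The filename, URL, and heuristic are assumptions for
# demonstration, not a description of any production filter.
from html.parser import HTMLParser

class FakeAttachmentFinder(HTMLParser):
    """Collects <a> tags whose link text looks like a PDF filename."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspects = []  # (display_text, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # A link labeled like a document but pointing at a web page is a red flag.
            if text.lower().endswith(".pdf") and not self._href.lower().endswith(".pdf"):
                self.suspects.append((text, self._href))
            self._href = None

html_body = '<p>Please review <a href="https://gamma.app/docs/example">Quarterly-Statement.pdf</a></p>'
finder = FakeAttachmentFinder()
finder.feed(html_body)
for text, href in finder.suspects:
    print(f"Possible fake attachment: '{text}' -> {href}")
```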

The presentation features the impersonated organization’s logo, a message designed to appear as a notification regarding the shared file, and a prominent call-to-action button—typically labeled something like “View PDF” or “Review Secure Documents.” Hovering over the CTA reveals it is a link to a subdomain containing the impersonated company’s name.
Upon clicking the call-to-action, the target is sent to an intermediary splash page that contains impersonated Microsoft branding and a Cloudflare Turnstile, a CAPTCHA-free bot detection tool. This ensures that only real users—not basic automated security tools—can access the site.

Should the recipient complete the verification test, they are taken to a phishing page disguised as a Microsoft SharePoint sign-in portal. The design features a modal-style login window over a blurred background, imitating Microsoft's UI patterns and implying that the rest of the site is inaccessible until credentials are entered. While the branding on the background content is slightly outdated, the overall experience helps reinforce the semblance of authenticity.

Entering an email address and clicking “Next” redirects the target to a second fraudulent login portal with a prompt to enter their password.

Entering mismatched credentials triggers an “Incorrect password” error, which indicates the perpetrators are using some form of adversary-in-the-middle (AiTM) framework to validate credentials in real time.

What Makes This Attack Unique
This campaign is part of a growing trend of what are known as file-sharing phishing attacks or "living-off-trusted-sites" (LOTS) attacks, which exploit a legitimate service to host malicious content. Like previous attacks leveraging Canva, Lucidchart, and Figma, this technique helps make the initial email appear more credible and evade legacy security tools.
File-sharing phishing attacks already represent a more sophisticated approach to credential theft. What makes this campaign stand out, even among these attacks, are the subtle tweaks the threat actors have made to the formula.
First, Gamma is a relative newcomer to the scene, having been launched less than five years ago. Organizations are becoming increasingly familiar with file-sharing phishing attacks in general, and some may have even begun incorporating examples into their security awareness training. That being said, it’s highly likely that the percentage of companies that have updated their cybersecurity education to include this type of phishing is low—and the number that use examples of attacks other than those exploiting household brands like Docusign and Dropbox is even lower. Thus, this kind of attack may not set off alarm bells that encourage a higher level of scrutiny from employees the way an attack that exploits Canva or Google Drive might.
Additionally, rather than sending the malicious email via the platform itself, the perpetrators simply copy the link and embed it in a message sent from compromised or spoofed email accounts. Sharing via Gamma’s own system could trigger internal content scanning or abuse detection. Plus, some security tools treat automated sharing notifications from unfamiliar services as suspicious and automatically quarantine them. But if a phishing link is embedded in a regular-looking message sent from an account that passes all authentication checks, it’s far more likely to reach the inbox—and be trusted.
The use of a Cloudflare Turnstile also makes this attack stand out and serves a dual purpose. First, it prevents automated link crawling and URL analysis by basic security tools. Second, because Turnstile is a legitimate service associated with Cloudflare—a well-known provider of web infrastructure and security—its presence increases perceived legitimacy, as users are accustomed to seeing security checks before accessing sensitive documents.
The final contributor to the noteworthiness of this attack is the apparent utilization of an adversary-in-the-middle (AiTM) framework. In an AiTM attack, the threat actor positions themselves between the victim and the legitimate authentication server, acting as an invisible proxy. This setup allows them to relay the provided credentials to Microsoft’s real login portal and capture the responses.
By executing an AiTM attack, cybercriminals can validate credentials in real time. This not only confirms the accuracy of the stolen credentials but also enables the attacker to capture session cookies. With these session cookies, the threat actor can bypass multi-factor authentication (MFA) and gain full, unauthorized access to the target’s account as if they were the actual user.
Why Is This Attack Difficult to Detect?
This attack flow is built specifically to evade both traditional security tools and human intuition.
One of the primary reasons this attack is so difficult to detect is that it originates from a legitimate, compromised email account. Because the sender’s domain is authentic, the message passes standard authentication checks like SPF, DKIM, and DMARC—allowing it to slip past security filters that rely on sender reputation.
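To make that evasion concrete, the toy snippet below parses a fabricated Authentication-Results header of the kind a receiving mail server adds. Because the message really does originate from a mailbox in the sender’s domain, every check passes; the header values here are invented for illustration.

```python
# Illustrative only: a toy parse of an Authentication-Results header showing why
# mail from a compromised-but-legitimate account sails through SPF, DKIM, and
# DMARC. The header below is fabricated for the example; real headers vary by
# receiving platform.
import re

auth_results = (
    "Authentication-Results: mx.example.com;"
    " spf=pass smtp.mailfrom=compromised-sender.org;"
    " dkim=pass header.d=compromised-sender.org;"
    " dmarc=pass header.from=compromised-sender.org"
)

verdicts = dict(re.findall(r"(spf|dkim|dmarc)=(\w+)", auth_results))
print(verdicts)  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}

# Because the attacker controls a genuine mailbox in the sending domain, every
# check passes and sender-reputation filters see nothing unusual.
if all(v == "pass" for v in verdicts.values()):
    print("Message passes authentication; it must be judged on other signals.")
```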
From there, the attack leverages Gamma, a legitimate and growing AI-powered presentation platform. Security systems are less likely to flag content hosted on Gamma because the domain carries no history of malicious activity. And since the content doesn’t contain any overt malware or known phishing infrastructure, it appears benign to both automated tools and human recipients.
The phishing flow is also cleverly layered. Rather than linking directly to a credential-harvesting page, the attackers route the user through several intermediary steps: first to the Gamma-hosted presentation, then to a splash page protected by a Cloudflare Turnstile, and finally to a spoofed Microsoft login page. This multi-stage redirection hides the true destination and makes it difficult for static link analysis tools to trace the attack path.
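A rough sketch of what a static scanner sees when following the link chain is shown below. The URL is a placeholder rather than a real indicator from this campaign; the point is simply that automated redirect-following stalls before the credential-harvesting page.

```python
# Sketch of why static link-following stalls: an analyst script can walk HTTP
# redirects, but the chain ends at the Turnstile-protected splash page because
# the final hop requires a browser to solve the challenge. The URL is a
# placeholder, not a real indicator from this campaign.
import requests

def trace_redirects(url: str) -> None:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    hops = [r.url for r in resp.history] + [resp.url]
    for i, hop in enumerate(hops):
        print(f"hop {i}: {hop}")
    # Automated analysis stops here; the credential-harvesting page is only
    # served after the Turnstile challenge succeeds in a real browser.

trace_redirects("https://gamma.app/docs/placeholder-presentation")
```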
Further complicating detection is the use of the Cloudflare Turnstile. As a CAPTCHA-free bot mitigation tool, Turnstile prevents basic crawlers and automated scanners from reaching the final phishing page. This allows the malicious site to remain accessible to human users but invisible to most automated defenses.
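For reference, the snippet below shows how any site—legitimate or malicious—gates content behind Turnstile using Cloudflare’s documented siteverify API. The surrounding logic is an assumed reconstruction, not code recovered from the attacker’s page; a scanner that never executes the in-browser challenge has no token to submit and therefore never sees what sits behind the check.

```python
# How a Turnstile-protected page validates visitors server-side. The siteverify
# endpoint and parameters are Cloudflare's documented API; the gating logic is
# an assumption for illustration.
import requests

SITEVERIFY = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def is_human(turnstile_token: str, secret_key: str, remote_ip: str | None = None) -> bool:
    payload = {"secret": secret_key, "response": turnstile_token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(SITEVERIFY, data=payload, timeout=10).json()
    return result.get("success", False)

# A crawler that never runs the in-page challenge has no valid token to submit,
# so it fails this check and never reaches the content served behind it.
```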
Defending Against Phishing Attacks Leveraging Trusted Platforms
By embedding malicious content within a legitimate platform, impersonating trusted brands, and exploiting human behavior, attackers are creating phishing flows that evade even the most vigilant users—and legacy security tools.
Stopping these attacks requires more than traditional defenses. Organizations can no longer rely on static indicators like domain reputation, known phishing URLs, or rule-based filters. Instead, effective protection depends on understanding context: what’s normal for your employees, your vendors, and your organization as a whole.
Abnormal takes a fundamentally different approach to email security—one rooted in behavioral AI. By analyzing thousands of signals to establish a baseline of known-good behavior, Abnormal can detect even subtle deviations that signal an attack, identifying and remediating advanced phishing attempts before users have a chance to engage.
As phishing tactics continue to evolve, only AI-native solutions like Abnormal can stay ahead of the attackers—by understanding people, not just patterns.
For even more insights into the threat landscape and predictions for where it’s headed, download our report, Inbox Under Siege: 5 Email Attacks You Need to Know for 2025.