How GhostGPT Empowers Cybercriminals with Uncensored AI
Artificial intelligence (AI) tools have changed the way we tackle day-to-day tasks, but cybercriminals are twisting that same technology for illegal activities. In 2023, WormGPT made headlines as an uncensored chatbot specifically designed for malicious purposes. Soon after, we started seeing other so-called “variants” pop up, like WolfGPT and EscapeGPT.
Unlike traditional AI models that are constrained by guidelines to ensure safe and responsible interactions, uncensored AI chatbots operate without such guardrails, raising serious concerns about their potential misuse. Most recently, Abnormal Security researchers uncovered GhostGPT, a new uncensored chatbot that further pushes the boundaries of ethical AI use.
In this blog, we explore GhostGPT, its capabilities, and the implications of this new threat.
What Is GhostGPT?
Traditional AI systems like ChatGPT are designed with a comprehensive suite of built-in safety mechanisms to ensure responsible use. These include content filters that block inappropriate or harmful outputs, ethical frameworks to prevent harmful actions, and strict limitations on discussing or assisting with illegal activities.
GhostGPT has none of these restrictions.
GhostGPT is a chatbot built specifically to cater to cybercriminals. It most likely uses a wrapper to connect either to a jailbroken version of ChatGPT or to an open-source large language model (LLM), effectively stripping away any safeguards.
By eliminating the ethical and safety constraints typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by conventional AI systems.

The official advertisement graphic for GhostGPT
According to its promotional materials, in addition to providing uncensored responses, GhostGPT offers several key features:
- Fast Processing: GhostGPT promises quick response times, enabling attackers to produce malicious content and gather information more efficiently.
- No Logs Policy: The creator(s) claim that user activity is not recorded, appealing to those who wish to conceal their illegal activities.
- Easy Access: Sold through Telegram, GhostGPT allows buyers to start using it immediately without the need to use a jailbreak prompt or download an LLM themselves.
How Cybercriminals Can Use GhostGPT
GhostGPT is marketed for a range of malicious activities. For example, it can be used to generate base code for various types of malware, identify and exploit software vulnerabilities, develop polymorphic malware capable of evading detection, and devise innovative attack strategies. This capability could significantly elevate both the sophistication and volume of malware used in email attacks.
This uncensored AI chatbot can also craft highly personalized phishing emails, generate templates for business email compromise (BEC) attacks, and design fraudulent websites. By leveraging advanced natural language processing capabilities, GhostGPT can produce malicious messages that are both exceptionally persuasive and difficult for legacy detection mechanisms to identify. Because it appears to rely on either a particularly effective jailbreak or an uncensored open-source model, it can consistently generate convincing malicious content with little effort, making it an especially convenient tool for social engineering attacks.
While its promotional materials mention "cybersecurity" as a possible use, this claim is hard to believe, given its availability on cybercrime forums and its focus on BEC scams. Such disclaimers seem like a weak attempt to dodge legal accountability—nothing new in the cybercrime world.
To test its capabilities, Abnormal Security researchers asked GhostGPT to create a DocuSign phishing email. The chatbot produced a convincing template with ease, demonstrating how readily it can be used to trick potential victims.

With its ability to deliver insights without limitations, GhostGPT serves as a powerful platform for those seeking to exploit AI for malicious purposes.
The Implications of GhostGPT
Uncensored AI chatbots like GhostGPT are reshaping the way cybercriminals operate, offering unprecedented convenience and efficiency.
GhostGPT is easily accessible via Telegram, a popular messaging app known for its privacy features. It’s relatively affordable, simple to use, and requires no technical knowledge, specialized skills, or extensive training—making it an easy starting point for novice threat actors.
GhostGPT is also a highly augmentable tool that allows more experienced attackers to supplement their operations by supporting very specific outputs for complex attacks. Threat actors can generate or refine malware, phishing emails, and other malicious content quickly and effortlessly, enabling them to launch and scale campaigns with more speed and efficiency.
Additionally, the convenience of GhostGPT saves time for users. Because it’s available as a Telegram bot, there is no need for threat actors to jailbreak ChatGPT or set up an open-source model. Users can pay a fee, gain immediate access, and focus directly on executing their attacks.
Finally, the overall popularity of GhostGPT, evidenced by thousands of views on online forums, underscores the growing interest among cybercriminals in leveraging AI tools for more efficient cybercrime.
Fighting Malicious AI With Defensive AI
Attackers now use tools like GhostGPT to create malicious emails that appear completely legitimate. Because these messages often slip past traditional filters, AI-powered security solutions are the only effective way to detect and block them.
Abnormal’s Human Behavior AI platform analyzes behavioral signals at an unparalleled scale. It identifies anomalies and prioritizes high-risk events across the email environment, strategically anticipating and neutralizing threats before they can inflict damage. This proactive approach is critical in an era where the best defense is a strong offense.
See for yourself how Abnormal AI provides comprehensive email protection against attacks that exploit human behavior. Schedule a demo today.
Get AI Protection for Your Human Interactions
