How GhostGPT Empowers Cybercriminals with Uncensored AI
Artificial intelligence (AI) tools have changed the way we tackle day-to-day tasks, but cybercriminals are twisting that same technology for illegal activities. In 2023, WormGPT made headlines as an uncensored chatbot specifically designed for malicious purposes. Soon after, we started seeing other so-called “variants” pop up, like WolfGPT and EscapeGPT.
Unlike traditional AI models that are constrained by guidelines to ensure safe and responsible interactions, uncensored AI chatbots operate without such guardrails, raising serious concerns about their potential misuse. Most recently, Abnormal Security researchers uncovered GhostGPT, a new uncensored chatbot that further pushes the boundaries of ethical AI use.
In this blog, we explore GhostGPT, its capabilities, and the implications of this new threat.
What Is GhostGPT?
GhostGPT is a chatbot specifically designed to cater to cybercriminals. It likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open-source large language model (LLM). By stripping out the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that traditional AI systems would block or flag.
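GhostGPT's internals have not been published, so the sketch below is only an illustration of the general wrapper pattern described above: a thin layer that forwards user prompts to an underlying LLM API. The model name, system prompt, and moderation gate are placeholder assumptions for illustration, not GhostGPT's actual configuration; an uncensored variant would omit the gate and replace the safety prompt with a jailbreak prompt (not reproduced here).

```python
# Illustrative sketch only: GhostGPT's internals are not public. This shows
# the generic wrapper pattern described above, where a thin layer forwards
# user prompts to an underlying LLM API. Model name, system prompt, and
# moderation gate are placeholder assumptions, not GhostGPT's configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A guardrailed wrapper pins a safety-oriented system prompt. An "uncensored"
# wrapper swaps this for a jailbreak prompt (not reproduced here) and drops
# the moderation check below.
SYSTEM_PROMPT = "You are a helpful assistant. Refuse harmful or illegal requests."

def ask(user_prompt: str) -> str:
    # Input screening: mainstream services flag abusive prompts before the
    # model ever sees them. Removing this gate is part of what "uncensoring"
    # a wrapper means in practice.
    if client.moderations.create(input=user_prompt).results[0].flagged:
        return "This request was declined."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

The point of the sketch is how little separates a guardrailed wrapper from an uncensored one: the safeguards live in a few lines of configuration around the model, which is why a simple wrapper can strip them away.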
According to its promotional materials, in addition to providing uncensored responses, GhostGPT offers several key features:
- Fast Processing: GhostGPT promises quick response times, enabling attackers to produce malicious content and gather information more efficiently.
- No Logs Policy: The creator(s) claim that user activity is not recorded, appealing to those who wish to conceal their illegal activities.
- Easy Access: Sold through Telegram, GhostGPT allows buyers to start using it immediately without the need to use a jailbreak prompt or download an LLM themselves.
GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development. It can also be used to write convincing emails for business email compromise (BEC) scams, making it a convenient tool for committing cybercrime.
While its promotional materials mention "cybersecurity" as a possible use, this claim is hard to believe given its availability on cybercrime forums and its focus on BEC scams. Such disclaimers are a familiar and flimsy attempt to dodge legal accountability, nothing new in the cybercrime world.
To test its capabilities, Abnormal Security researchers asked GhostGPT to create a Docusign phishing email. The chatbot produced a convincing template with ease, demonstrating how readily it could be used to deceive potential victims.
With its ability to deliver unfiltered output on demand, GhostGPT serves as a powerful tool for those seeking to exploit AI for malicious purposes.
The Implications of GhostGPT
GhostGPT raises several concerns that extend beyond this specific bot to uncensored AI variants in general.
First, it lowers the barrier to entry for new cybercriminals: anyone can buy access via Telegram, no specialized skills or extensive training required, which makes it easier for less sophisticated attackers to engage in cybercrime.
Second, GhostGPT augments the capabilities of existing attackers, letting them generate or refine malware, phishing emails, and other malicious content quickly, so attacks can be launched with greater speed and efficiency.
Third, the convenience of GhostGPT saves users time. Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open-source model. Users can pay a fee, gain immediate access, and focus directly on executing their attacks.
Finally, the overall popularity of GhostGPT, evidenced by thousands of views on online forums, underscores the growing interest among cybercriminals in leveraging AI tools for more efficient cybercrime.
Fighting Malicious AI With Defensive AI
Attackers now use tools like GhostGPT to create malicious emails that appear completely legitimate. Because these messages often slip past traditional filters, AI-powered security solutions are the only effective way to detect and block them.
Abnormal’s Human Behavior AI platform analyzes behavioral signals at an unparalleled scale. It identifies anomalies and prioritizes high-risk events across the email environment, strategically anticipating and neutralizing threats before they can inflict damage. This proactive approach is critical in an era where the best defense is a strong offense.
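Abnormal's detection models are proprietary, but the general idea of behavioral anomaly detection can be sketched in a few lines. The following is a simplified, hypothetical illustration using scikit-learn's IsolationForest over toy per-message features; the feature set, data, and thresholds are assumptions for illustration, not Abnormal's actual signals.

```python
# Abnormal's detection models are proprietary; this is a generic, simplified
# sketch of behavioral anomaly detection over email signals, using toy data
# and an assumed feature set for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-message features:
# [emails from this sender in past 30 days, hour sent (0-23),
#  reply-to domain differs from sender domain (0/1),
#  contains a payment or credential request (0/1)]
baseline = np.array([
    [42, 10, 0, 0],
    [38,  9, 0, 0],
    [51, 11, 0, 0],
    [45, 14, 0, 0],
    [40, 10, 0, 0],
])

model = IsolationForest(random_state=0).fit(baseline)

# A first-time sender mailing at 3 a.m. with a mismatched reply-to address
# and a payment request deviates sharply from the learned baseline.
candidate = np.array([[0, 3, 1, 1]])
label = model.predict(candidate)[0]            # -1 = anomaly, 1 = normal
score = model.decision_function(candidate)[0]  # lower = more anomalous
print("flagged" if label == -1 else "passed", round(score, 3))
```

Production systems model thousands of identity and context signals per message; the sketch only illustrates the underlying principle that a message deviating sharply from a sender's established behavior can be flagged even when its content looks perfectly legitimate.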
See for yourself how Abnormal AI provides comprehensive email protection against attacks that exploit human behavior. Schedule a demo today.