Cybercriminals are increasingly turning to uncensored AI chatbots, with GhostGPT emerging as the latest tool facilitating malware development, phishing scams, and business email compromise (BEC) attacks. According to a report by Abnormal Security, this chatbot removes ethical safeguards, providing unrestricted responses to malicious queries and making sophisticated cybercrime more accessible.
GhostGPT follows in the footsteps of previous uncensored AI models like WormGPT, WolfGPT, and EscapeGPT — tools designed to bypass the ethical and security constraints imposed on mainstream AI systems. Unlike ChatGPT or similar models that restrict harmful outputs, GhostGPT likely operates using a jailbroken version of an existing large language model (LLM) or an open-source alternative. This modification allows it to freely generate content that would typically be blocked, such as phishing templates, exploit code, and malicious scripts.
Marketed through Telegram, GhostGPT is designed for ease of use, requiring no complex jailbreak techniques or manual LLM setups. Users can purchase access and begin using the chatbot instantly, significantly lowering the technical barrier to entry for cybercriminals.
Key features of GhostGPT
GhostGPT is advertised on cybercrime forums as a powerful tool for cyberattacks. Its promotional materials highlight several key features:
- Fast Processing – The chatbot delivers rapid responses, allowing attackers to generate malicious content quickly.
- No Logs Policy – The developers claim that user activity is not recorded, an appealing feature for those seeking anonymity.
- Easy Accessibility – Available through Telegram, GhostGPT eliminates the need for jailbreak prompts or independent model setups.
Abnormal Security researchers tested GhostGPT by prompting it to generate a phishing email designed to impersonate DocuSign. The chatbot effortlessly produced a convincing email template, demonstrating its effectiveness in crafting deceptive messages that could trick victims into revealing sensitive information.
Risks of unregulated AI tools
The emergence of GhostGPT highlights several growing concerns about the use of AI in cybercrime.
First, it significantly lowers the barrier to entry for would-be cybercriminals. Traditionally, attackers needed at least some technical expertise to craft phishing emails or develop malware; with tools like GhostGPT, even inexperienced individuals can generate effective attack materials with minimal effort.
Second, it enhances the efficiency of cyberattacks. By automating and streamlining processes like malware coding and social engineering, cybercriminals can launch more sophisticated and frequent attacks. The ability to generate polished BEC emails within seconds, for instance, can lead to more convincing scams that bypass traditional security measures.
Finally, GhostGPT’s growing popularity indicates increasing interest among cybercriminals in leveraging AI for illicit activities. The thousands of views its promotional posts have attracted on online forums point to rising demand for such tools, which could drive the development of even more advanced versions in the future.