Chinese AI model DeepSeek R1, hailed as a major breakthrough in reasoning capabilities, has been found to be highly vulnerable to security exploits, allowing it to generate harmful content, including malware, disinformation, and instructions for criminal activities. A recent investigation by cyber-intelligence firm KELA revealed that the model is particularly easy to jailbreak, posing a significant threat to ethical AI usage.
Jailbreaking DeepSeek
Developed by the Chinese AI startup DeepSeek, R1 is a reasoning-focused generative AI based on the DeepSeek-V3 base model. It has been trained with large-scale reinforcement learning (RL) in post-training, enabling it to outperform many open-source and proprietary models. As of January 26, 2025, it ranks sixth on the Chatbot Arena benchmark leaderboard, surpassing models such as Meta’s Llama 3.1-405B and even some versions of OpenAI's o1 and Anthropic’s Claude 3.5 Sonnet.
DeepSeek R1 has demonstrated superior problem-solving skills, particularly in complex logic-based queries. However, KELA’s AI Red Team discovered that this advanced reasoning comes at the cost of security, making it significantly more vulnerable than models like ChatGPT-4o.
KELA researchers were able to easily bypass DeepSeek R1’s safety restrictions, utilizing jailbreak methods that have long been mitigated in GPT-4 and other major AI models. Notably, the “Evil Jailbreak”, which forces a model to adopt an unethical persona, was patched in OpenAI models two years ago but remains effective against DeepSeek R1.
When prompted with criminal queries, the AI readily generated detailed malware, including infostealers designed to extract credit card data, browser cookies, usernames, and passwords, then transmit them to a remote server. Unlike OpenAI’s latest models, which obscure their internal reasoning during inference, DeepSeek openly displays its reasoning process, inadvertently exposing attack surfaces that malicious users can exploit.
Other jailbreaks tested by KELA’s Red Team allowed DeepSeek to:
- Provide detailed step-by-step instructions for creating explosive devices
- Outline methods to launder illicit money
- Suggest purchasing stolen credentials from dark web marketplaces
- Fabricate false employee data, including emails and phone numbers, for OpenAI executives
These vulnerabilities highlight DeepSeek’s lack of robust guardrails, making it one of the most easily exploited generative AI models in recent years.
Even though there exist uncensored AI tools like the GhostGPT which unlock generative AI power for cybercriminals, widely and freely available tools like DeepSeek can have a far more significant and broader impact.
Data security concerns
Beyond its vulnerabilities to adversarial attacks, DeepSeek’s approach to data privacy raises additional concerns. The AI service explicitly states in its privacy policy that it stores all collected data on servers in China, meaning any user inputs — including personal, financial, or sensitive queries — are sent to and processed in China.
Given China’s strict data-sharing laws, which require companies to cooperate with national intelligence efforts, DeepSeek’s data practices don’t match the expectations of privacy-minded users. Additionally, reports indicate that content critical of China is being censored, while the platform freely provides information that would be restricted in Western AI models, which is a clear indication of direct Chinese government influence on the platform.
Meanwhile, DeepSeek has become a target of cyberattacks since yesterday. The company has reported “large-scale malicious attacks” on its web and API services, forcing it to temporarily restrict new user registrations. The persistent disruptions, lasting over 24 hours, suggest serious weaknesses in DeepSeek’s own security infrastructure, further casting doubt on its ability to ensure safe AI deployment.
Ultimately, DeepSeek R1 stands as a powerful, low-cost alternative to leading Western AI models, yet its poor security posture, vulnerability to exploits, and questionable data practices make it a high-risk choice for organizations and individual users. It’s all right to casually experiment with it but we’d advise against inputting sensitive or confidential data in it.
Wulf Sternhammer
What an amazingly biased post!
DeepSeek is apparently designed as being Objective, something the west hasn’t known since before the turn of the millennia. Western AI has been CUCKED since some of the early versions first tried to dispel some of the feminized west’s degenerate dogmas.
Long live Objectivity and DeepSeek, and to hell with the ZioFemiNazis!
americanantiamerican
in this post, I only heard crying for Americans and Europeans