DeepSeek’s free-to-use open-source AI models exhibit lacklustre guardrails and could be used by bad actors to generate dangerous malware, new research has found.
Governments and regulators have been warning that LLMs like ChatGPT and Gemini could be used to create dangerous code ever since the modern scale of generative AI was revealed to the public.
A handful of LLMs have already been developed specifically for criminal use, but those models typically require payment to access, and mainstream LLM providers have guardrails in place. DeepSeek’s freely accessible open-source model therefore presents an opportunity for scammers.
The DeepSeek R1 model, a reasoning large language model (LLM) developed by the Chinese firm, can generate the “basic structure for malware” and offers guardrails that are “trivial to work around” and “vulnerable to a variety of jailbreaking techniques”, according to research conducted by Nick Miles of Tenable Research.
For the test, Miles attempted to use DeepSeek to create a keylogger that could record a user’s keystrokes whilst remaining concealed from the operating system’s defences.
The LLM initially refused, but convincing it to press on was as simple as telling it the exercise was for “educational purposes only”.
DeepSeek provided instructions for creating a keylogger and, although several sections of the model’s code required manual rewriting, a working keylogger was eventually produced.
Miles also attempted to generate a simple sample of ransomware, a type of malware that blocks users’ access to their own files until a ransom is paid.
Again, the system’s restrictions warned against the practice, but after some back and forth, Miles was able to generate a handful of working ransomware samples, though these also required manual editing to function.
The researcher concluded that with simple manipulation, bad actors could bypass DeepSeek’s measures against malware creation.
The findings do not spell complete disaster, as considerable pre-existing coding knowledge is required for the model’s outputs to work properly.
“Nonetheless, DeepSeek provides a useful compilation of techniques and search terms that can help someone with no prior experience in writing malicious code the ability to quickly familiarize themselves with the relevant concepts,” concluded Miles.
“Based on this analysis, I believe that DeepSeek is likely to fuel further development of malicious AI-generated code by cybercriminals in the near future.”