DeepSeek halted new account registrations a few days ago, just as the AI app surged in the App Store. The company blamed the limitation on a malicious attack, saying that DeepSeek AI account holders could access the service while newcomers would have to wait.
I wondered at the time whether the malicious attack was real or whether it was a ploy to hide the fact that DeepSeek's infrastructure couldn't handle the influx of new users eager to try this ChatGPT o1 rival, whose training reportedly cost only a fraction of what OpenAI spent on its reasoning model.
I said we'd soon learn whether someone had attempted to attack the DeepSeek servers or whether the service was simply struggling with new registrations.
A few days later, the registration limitations are gone, but we have a report detailing a potentially serious hack. It turns out that hackers wouldn't have had to try very hard to penetrate the Chinese AI startup's security. All they had to do was find an unsecured database left open to the internet. There, they'd have discovered more than a million log entries containing plenty of sensitive information.
That’s according to security company Wiz Research, which explored the security of DeepSeek’s online properties, stumbling on the massive trove of information.
The company detailed its findings in a blog after notifying DeepSeek about the vulnerability:
Wiz Research has identified a publicly accessible ClickHouse database belonging to DeepSeek, which allows full control over database operations, including the ability to access internal data. The exposure includes over a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information. The Wiz Research team immediately and responsibly disclosed the issue to DeepSeek, which promptly secured the exposure.
DeepSeek secured the data, but if Wiz could access and browse the plain-text data uninterrupted, hackers who do this for a living could likely have found it too:
This database contained a significant volume of chat history, backend data, and sensitive information, including log streams, API Secrets, and operational details.
More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment without any authentication or defense mechanism to the outside world.
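Wiz describes a ClickHouse database open to the internet with no authentication. ClickHouse's HTTP interface (default port 8123) accepts SQL through a plain `query` URL parameter, so an unsecured instance can be queried with nothing more than a URL. The sketch below illustrates the idea with a hypothetical hostname; it is not Wiz's actual tooling.

```python
from urllib.parse import urlencode

def build_clickhouse_query_url(host: str, query: str, port: int = 8123) -> str:
    """Build the URL an unauthenticated ClickHouse HTTP query would use.

    ClickHouse's HTTP interface accepts SQL via the "query" parameter;
    on an exposed instance with no auth, this is all an attacker needs.
    """
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# Hypothetical host: enumerating tables, then reading a log table,
# would be as simple as fetching URLs like these.
tables_url = build_clickhouse_query_url("db.example.com", "SHOW TABLES")
logs_url = build_clickhouse_query_url(
    "db.example.com", "SELECT * FROM log_stream LIMIT 10"
)
print(tables_url)
```

This is why "no authentication or defense mechanism" matters so much here: the database speaks plain HTTP, so discovery and exfiltration require no exploit at all.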
Wiz labeled the DeepSeek vulnerability as a “critical risk” for DeepSeek’s own security and its customers:
This level of access posed a critical risk to DeepSeek’s own security and for its end-users. Not only could an attacker retrieve sensitive logs and actual plain-text chat messages, but they could also potentially exfiltrate plain-text passwords and local files along with proprietary information directly from the server.
It’s unclear at this time whether anyone stole the data in the database, and DeepSeek has not addressed the security mishap the way a company in the US or other Western markets might have. On the other hand, DeepSeek did say the service was under attack, though we have yet to hear specifics about the hack.
The data appears to belong to Chinese users. It's unclear how many DeepSeek users were exposed via the database, or whether the potential hack impacted international customers.
As Wiz points out, security incidents like this are a serious risk, especially for new AI apps that go viral.
“The rapid pace of adoption often leads to overlooking security, but protecting customer data must remain the top priority,” the researchers said. “It’s crucial that security teams work closely with AI engineers to ensure visibility into the architecture, tooling, and models being used, so we can safeguard data and prevent exposure.”
The vulnerability also has big privacy implications beyond securing sensitive user data. Remember that all your data goes to China. Wiz has shown that prompts are saved in plain text, making them readable to anyone with access to these files.
As a longtime AI user myself, I can only hope my ChatGPT chats and sensitive data aren’t similarly easy to access. Then again, I know OpenAI suffered its own share of security vulnerabilities, especially in its early days.
Separately, the DeepSeek hack uncovered another unusual finding. Wiz researchers told Wired that DeepSeek's systems are nearly identical to OpenAI's, "down to details like the format of the API keys." A few days ago, OpenAI accused DeepSeek of using ChatGPT data without consent to train its early DeepSeek AI models.