The White House is reportedly weighing a new rule that would prohibit the installation of DeepSeek’s app on government devices.
The Wall Street Journal today cited sources as saying that the regulation is “likely” to be implemented. According to the report, the move is motivated by concerns about how DeepSeek processes users’ data. The Chinese artificial intelligence lab doesn’t disclose details such as who has access to the information it collects.
DeepSeek rose to prominence earlier this year with the release of DeepSeek-R1, an open-source large language model optimized for reasoning. It can outperform OpenAI’s competing o1 reasoning algorithm across a range of tasks. Moreover, DeepSeek claims that R1 cost less to train than many earlier LLMs.
Alongside R1, DeepSeek provides a ChatGPT-like chatbot app for consumers. That service is the focus of the ban reportedly being considered by the Trump administration. At one point, DeepSeek was the most downloaded app on both the App Store and Google Play.
DeepSeek’s mobile client is based not on R1 but on DeepSeek-V3, an LLM the AI lab open-sourced in December. The latter model has more limited reasoning capabilities. At the architecture level, however, the two models share many similarities because R1 is built on top of V3.
Both models include 671 billion parameters. Those parameters are organized into subnetworks, often called experts, that each specialize in a different set of tasks. When a user enters a prompt, only a small fraction of those experts activates to generate the answer, which reduces hardware usage.
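The routing idea behind this mixture-of-experts design can be sketched in a few lines. This is a toy illustration, not DeepSeek’s implementation; the gating weights, the tiny scaling “experts” and the choice of two active experts are all hypothetical:

```python
import math

def softmax(scores):
    # Numerically stable softmax over the gating scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_weights, top_k=2):
    # Score every expert for this token, then run only the top_k best.
    scores = [sum(w * x for w, x in zip(gw, token)) for gw in gate_weights]
    probs = softmax(scores)
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Renormalize the gate probabilities over the chosen experts only.
    norm = sum(probs[i] for i in chosen)
    out = [0.0] * len(token)
    for i in chosen:
        expert_out = experts[i](token)
        out = [o + (probs[i] / norm) * e for o, e in zip(out, expert_out)]
    return out, chosen

# Hypothetical setup: four tiny "experts" that just scale their input.
experts = [lambda t, s=s: [x * s for x in t] for s in (0.5, 1.0, 2.0, 3.0)]
gate_weights = [[0.1, 0.2], [0.3, 0.1], [0.9, 0.4], [0.2, 0.8]]
output, active = moe_forward([1.0, 2.0], experts, gate_weights, top_k=2)
```

Only the two highest-scoring experts run for this token; the rest stay idle, which is how a model’s per-query compute can stay far below what its total parameter count suggests.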
DeepSeek trained V3 on 14.8 trillion tokens’ worth of data. One token corresponds to a few letters or numbers. LLMs usually generate output one token at a time, but the AI lab took a different approach with V3: during training, the model was configured to predict multiple tokens at once. DeepSeek says this configuration helped boost V3’s performance.
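One way to picture that training objective is by how the training pairs change: each context is asked to predict the next several tokens rather than just one. A minimal sketch with hypothetical names and token values (the real technique operates on model outputs, not raw lists):

```python
def multi_token_targets(tokens, context_len, predict_n):
    # Build (context, targets) pairs where each context must predict
    # the next predict_n tokens instead of a single next token.
    pairs = []
    for i in range(context_len, len(tokens) - predict_n + 1):
        pairs.append((tokens[i - context_len:i], tokens[i:i + predict_n]))
    return pairs

# With predict_n=1 this reduces to standard next-token prediction.
pairs = multi_token_targets([1, 2, 3, 4, 5, 6], context_len=2, predict_n=2)
```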
R1, the company’s best reasoning model, is a version of V3 that has been trained more extensively. The extra training consisted partly of supervised fine-tuning, which involves supplying an LLM with examples of how it should perform tasks. DeepSeek used reinforcement learning to further hone R1’s capabilities.
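Supervised fine-tuning boils down to formatting demonstrations so the model is graded only on imitating the response. A minimal sketch of that data preparation, with hypothetical token IDs:

```python
def build_sft_example(prompt_tokens, response_tokens):
    # Concatenate prompt and response, and mask the loss so only the
    # response tokens contribute: the model learns to reproduce the
    # demonstrated answer, not to predict the prompt.
    input_ids = prompt_tokens + response_tokens
    loss_mask = [0] * len(prompt_tokens) + [1] * len(response_tokens)
    return input_ids, loss_mask

ids, mask = build_sft_example([10, 11], [12, 13, 14])
```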
According to the Journal, the DeepSeek ban the White House is weighing could extend beyond government devices. Officials are also considering prohibiting app store operators from distributing the chatbot service. Another step under consideration is “putting limits” on how U.S. cloud providers can offer DeepSeek models to customers. Discussions about the latter two measures are said to be at an early stage.
It’s unclear if the curbs on cloud providers would only cover R1 and V3 or also extend to DeepSeek’s other, less capable LLMs.
In January, the company released a reasoning model called R1-Zero that was trained entirely using reinforcement learning. Typically, LLM developers also use supervised fine-tuning. DeepSeek says R1-Zero is the first open-source model to “validate that reasoning capabilities of LLMs can be incentivized purely through RL.”
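Reinforcement learning of this kind typically relies on automatically checkable rewards rather than human-written demonstrations. Below is a toy sketch of such a reward function; the `<answer>` tag format and the point values are assumptions for illustration, not DeepSeek’s actual reward design:

```python
def rule_based_reward(model_output, reference_answer):
    # Reward format compliance first, then reward a correct final answer.
    reward = 0.0
    if "<answer>" in model_output and "</answer>" in model_output:
        reward += 0.5  # the output follows the expected format
        extracted = model_output.split("<answer>")[1].split("</answer>")[0].strip()
        if extracted == reference_answer:
            reward += 1.0  # the final answer matches the reference
    return reward

score = rule_based_reward("The result is <answer>42</answer>", "42")
```

Because rewards like this can be computed automatically, the model can improve through trial and error without a human labeling every training example.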
The company has also open-sourced a number of so-called distilled models based on R1. Their training datasets incorporate some of R1’s knowledge. The distilled models, which are based on the Llama and Qwen open-source LLM families, range in size from 1.5 billion to 70 billion parameters.
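Distillation of this sort can be as simple as having the large model label a prompt set and fine-tuning the smaller model on the results. A minimal sketch; `teacher_generate` stands in for querying the larger model and is purely hypothetical:

```python
def build_distillation_dataset(prompts, teacher_generate):
    # Label each prompt with the teacher model's output; the smaller
    # student model is then fine-tuned on these pairs, inheriting some
    # of the teacher's behavior.
    return [(p, teacher_generate(p)) for p in prompts]

# Stand-in teacher for illustration only.
dataset = build_distillation_dataset(["q1", "q2"], lambda p: p.upper())
```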
The U.S. Navy and NASA already prohibit personnel from installing DeepSeek’s app on work devices. Texas, New York and Virginia rolled out similar rules for state employees in recent weeks. In South Korea and Italy, meanwhile, privacy regulators have blocked app stores from offering DeepSeek to consumers.