April 1, 2025 • 4:40 pm ET
DeepSeek shows the US and EU the costs of failing to govern AI
DeepSeek’s breakthrough has made the West reflect on its artificial intelligence (AI) strategies, specifically regarding cost and efficiency. But the West must also urgently consider what DeepSeek’s R1 model means for the future of democracy in the AI era.
That is because the R1 model shows how China has taken the lead in open-source AI: systems whose components, from source code to datasets, users are free to use, study, modify, and share, at least according to the Open Source Initiative (OSI), a California-based nonprofit. While definitions of open source vary, its application to AI has immense potential: it can encourage greater innovation among developers and empower individuals and communities to create AI-driven solutions in sectors such as education, healthcare, and finance. The technology, ultimately, accelerates economic growth.
However, according to reports, R1 appears to censor and withhold information from users. Thus, democracies risk not only losing the AI technology battle; they also risk falling behind in the race to govern AI and could fail to ensure that democratic AI proliferates more widely than systems championed by authoritarians.
Therefore, the United States must work with its democratic allies, particularly the European Union (EU), to set global standards for open-source AI. Both powers should leverage existing legislative tools to initiate an open-source governance framework. Such an effort would require officially adopting a definition of open-source AI (such as OSI’s) to increase governance effectiveness. After that, the United States and EU should accelerate efforts to ensure democratic values are embedded in open-source AI models, paving the way for an AI future that is more open, transparent, and empowering.
How China took the lead
Part of DeepSeek’s success can be traced to the Chinese Communist Party’s (CCP’s) moves to incorporate open-source AI norm-building into its legal framework. In April 2024, the Model AI Law—a multi-year expert draft led by the Chinese Academy of Social Sciences, which is influential in the country’s lawmaking process—laid out China’s support for an open-source AI ecosystem. Article 19 states that the CCP “promotes construction of the open source ecosystem” and “supports relevant entities in building or operating open source platforms, open source communities, and open source projects.” It encourages companies to make “software source code, hardware designs, and application services publicly available” to foster industry sharing and collaboration. The draft also highlights reducing or removing legal liability for the provision of open-source AI models, provided that individuals and organizations have established a governance system compliant with “national standards” and “have taken corresponding safety measures.” This liability carve-out is a notable departure from China’s past laws governing AI, which explicitly stated the goal of protecting individual rights. The specific provisions in the Model AI Law, albeit a draft, shouldn’t be overlooked: they essentially serve as a blueprint for how open-source AI will be deployed in the country and what China’s globally exported models would look like.
Furthermore, the AI Safety Governance Framework, a document that China aims to use as a guide to “promote international collaboration on AI safety governance at a global level,” echoes the country’s assertiveness on open-source AI. The document was drafted by China’s National Technical Committee 260 on Cybersecurity, a body working with the Cyberspace Administration of China, whose cybersecurity standard practice guidelines were adopted by the CCP in September 2024. The framework reads, “We should promote knowledge sharing in AI, make AI technologies available to the public under open-source terms, and jointly develop AI chips, frameworks, and software.” Appearing in a document meant for global stakeholders, the statement reflects China’s ambition to lead in this area as an advocate.
What about the United States and EU?
In the United States, advocates have touted the benefits of open source for some time, and AI industry leaders have called for the United States to focus more on open-source AI. For example, Mark Zuckerberg launched the open-source model Llama 3.1 last year, and in doing so, he argued that open-source “represents the world’s best shot” at creating “economic opportunity and security for everyone.”
Despite this advocacy, the United States has not established any law to promote open-source AI. A US senator introduced a bill in 2023 calling for a framework for open-source software security, but the bill has not progressed since. Last year, the National Telecommunications and Information Administration published a report on dual-use AI foundation models with open weights (meaning the models are available for use, but are not fully open source). It advised the government to monitor the risks of open-weight foundation models more closely in order to determine appropriate restrictions for them. The Biden administration’s final AI regulatory framework was friendlier to open models: It set restrictions for the most advanced closed-weight models while excluding open-weight ones.
The future of open-source models remains unclear. US President Donald Trump has not yet created any guidance for open-source AI. So far, he has repealed Biden’s AI executive order, but the executive order that replaced it has not outlined any initiative that guides the development of open-source AI. Overall, the United States has been overly focused on playing defense by developing highly capable models while working to prevent adversaries from accessing them, without considering the wider global reach of those models.
Since unveiling the General Data Protection Regulation (GDPR), the EU has established itself as a regulatory powerhouse in the global digital economy. Across the board, countries and global companies have adopted EU compliance frameworks for the digital economy, including the AI Act. However, the EU’s effort on open-source AI is lacking. Although Article 2 of the AI Act briefly exempts open-source AI from regulation, the actual impact appears minor: the exemption does not even apply to models deployed for commercial purposes.
In other EU guidance documents, the same paradox can be found. The latest General-Purpose AI Code of Practice published in March 2025 acknowledged how open-source models have a positive impact on the development of safe, human-centric, and trustworthy AI. However, there is no meaningful elaboration promoting the development and use of open-source AI models. Even in the EU Competitiveness Compass—a framework targeting overregulation, regulatory complexity, and strategic competitiveness in AI—“open source” is absent.
The EU’s cautious approach to regulating open-source AI stems from the challenge of defining it. Open-source AI differs from traditional open-source software in that it includes pre-trained AI models rather than simply source code. And, of course, the definition from OSI has not yet been acknowledged in the international legal community. The debate over what constitutes open-source AI creates legal uncertainty that the EU is likely uncomfortable accepting. Yet the real driver of inactivity lies deeper: the EU’s regulatory successes, like the GDPR, make the Commission wary of carving out exemptions that could weaken its global influence over a technology that remains so poorly defined. This is a gamble Brussels has, so far, had no incentive to take.
The new power imbalance in AI geopolitics
China’s push to become technologically self-sufficient, which has included solidifying open-source AI strategies, is partly motivated by US export controls on advanced computing and semiconductors dating back at least to 2018. These measures stemmed from US concerns about national security, economic security, and intellectual property, while China’s countermeasures also reflect the broader strategic competition between the two countries for technological superiority. The EU, on the other hand, asserts itself in the race by setting global norms protecting fundamental rights and democratic values such as fairness and redistribution, norms that have ultimately shaped the policies of leading global technology companies.
By positioning itself as a leader in open-source AI, China has turned the export and policy challenge into an opportunity to sell its version of AI to the world. The rise of DeepSeek, along with other domestic rivals such as Alibaba, is shifting the balance by reducing the world’s appetite for closed AI models. DeepSeek has released smaller models with fewer parameters for less powerful devices. AI development platform Hugging Face has started replicating DeepSeek-R1’s training process to enhance its models’ performance in reinforcement learning. Microsoft, OpenAI, and Meta have embraced model distillation, a technique that drew much attention with the DeepSeek breakthrough. China has advanced the conversation around openness, with the United States adapting to the discourse for the first time and the EU trapped in legal inertia, leaving a power imbalance in open-source AI.
China is offering a concerning version of open-source AI. The CCP strategically deploys a “two-track” system that allows greater openness for AI firms while limiting information and expression for public-facing models. Its openness is bounded by the country’s historical pattern of constraining model behavior, such as requiring inputs and outputs to align with China’s values and project a positive national image. Even in its global-facing AI Safety Governance Framework (in which Chinese authorities embrace open-source AI), the CCP says that AI-generated content poses threats to ideological security, hinting at the CCP’s limited acceptance of freedom of speech and thought.
Without a comprehensive framework based on the protection of democracy and fundamental rights, the world could see China’s more restrictive open-source AI models reproduced widely. Autocrats and nonstate entities worldwide can build on them to censor information and expression while claiming to promote accessibility. Simply tracking China’s technological performance is not sufficient. Instead, democracies should respond by leading with democratic governance.
Transatlantic cooperation is the next step
The United States and EU should consider open-source diplomacy, advancing the sharing of capable AI models across the globe. In doing so, they should create a unified governance framework and work toward shaping a democratic AI future by forming a transatlantic working group on open-source AI. Existing structures, including the Global Partnership on Artificial Intelligence (GPAI), can serve as a vehicle. But it’s essential that technology companies and experts from both sides of the Atlantic are included in the framework development process.
Second, the United States and EU should, through funding academic institutions and supporting startups, promote the development of open-source AI models that align with democratic values. Such models, free from censorship and security threats, would offer a powerful contrast to the Chinese models. To promote them, the United States and EU will need to recognize that the benefits of such models outweigh the risks in the broader picture. The EU, meanwhile, must continue leveraging its regulatory advantage while becoming more decisive about governing open-source AI, even if that means embracing some uncertainty about its legal definition, in order to outpace China’s momentum.
The United States and EU may currently have a rocky relationship. But with China’s ascendance in open-source AI, US-EU collaboration rather than competition is crucial. To take back leadership in this pivotal arena, the United States and European Union must launch a transatlantic initiative on open-source AI that employs forward-thinking policy, research, and innovation in setting the global standard for a rights-respecting, transparent, and creative AI future.
Ryan Pan is a project assistant at the GeoTech Center.
Kolja Verhage is a senior manager of AI governance and digital regulations at Deloitte.
The views reflected in this article are the authors’ own and do not necessarily reflect the views of their employers.
The GeoTech Center champions positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.
Image: A smartphone screen displays the DeepSeek AI logo, with a background featuring the Chinese flag and the word “AI” surrounded by blue flames on March 17, 2025. Photo by Matteo Della Torre/NurPhoto via Reuters Connect.