The United States is shifting from an unregulated cyberspace toward cross-border controls.
With a regulation issued by the Biden administration at the beginning of this year, the United States became one of the first countries to regulate artificial intelligence (AI) through export controls.
The move is consistent with similar efforts by other countries, including Canada, the European Union, the United Kingdom, and Australia, to set limits on where their most advanced technology can be exported. But the American rule is novel in its specific focus on AI. The Carnegie Endowment for International Peace, a foreign policy think tank, described the rule as an “ambitious act of economic and technological policy.”
The U.S. Department of Commerce published the final rule in January 2025, with the aim of enabling American companies to safely export “AI technology abroad,” as described by Michael C. Horowitz at the Council on Foreign Relations.
Broadly speaking, the rule allows American companies to export AI chips and capabilities abroad and exempts U.S. allies from restrictions limiting the number of chips that can be exported. Under the rule, however, the United States would not export AI chips to “countries of concern,” meaning countries against which the United States maintains arms embargoes. The rule was reinforced by the current administration when President Donald J. Trump issued an executive order intended to “identify and eliminate loopholes in existing export controls.”
Historically, AI regulation in the United States has been limited by the fear that excessive regulation could, as industry representatives claim, stifle AI just as it is getting started.
But in recent years, the United States has shifted toward controls on emerging technology. At the end of last year, the Biden administration issued a final rule changing export controls for semiconductors. Gregory C. Allen of the Center for Strategic and International Studies writes that the 2024 rule aims to deny China access to advanced AI chips and to limit its ability to obtain alternatives or produce them domestically.
This year’s Commerce Department rule follows comparable measures the U.S. government has taken to limit China’s access to artificial intelligence and advanced semiconductor technologies, a policy focus since 2022.
Digital sovereignty, or “the ability to have control over your own digital destiny,” as Sean Fleming of the World Economic Forum puts it, is a matter of growing importance. As scholars argue, the competition for control over infrastructure, data, and technology design reflects broader debates about sovereignty among countries that want to control their own affairs.
In August of last year, the European Union adopted a regulation to “harmonize” the development, commercialization, and use of AI technologies. According to the European Commission, the rules are intended to promote “trustworthy AI” in Europe. The regulation classifies AI applications into risk categories based on their potential impact on individuals and society.
In contrast to the US rule issued this year, the European approach does not target particular countries; instead, it regulates AI by risk category. Scholars argue that the European regulation has the potential to become a global benchmark for AI governance and regulation.
China, on the other hand, takes a state-led approach, with a top priority of “retaining control of information.” China also encourages domestic innovation in generative AI. In a report to Congress, the U.S.-China Economic and Security Review Commission reported that China invests in non-state actors, including companies, to advance its technology development and policy objectives. The commission was established in 2000 to report to Congress on the national security implications of the U.S.-China relationship.
China’s approach to digital sovereignty and AI regulation has long contrasted with the traditional approach in the United States, where advocates of an unregulated cyberspace, especially within Big Tech, play a dominant role.
But the new US rule limiting AI exports reflects a fundamental shift in how the United States approaches internet and technology regulation, moving it closer to a government-led approach.
Google has stated that it builds regulatory compliance into product development and prefers regulations that are stable and predictable. Whatever approach major players in AI technology take in the future, companies such as Google prefer that it be coordinated, as the Digital Watch Observatory has noted. The Center for International Governance Innovation, a Canadian think tank, states that companies, as non-state actors, can play a role in new multilateral cooperation or standard-setting efforts.
Ultimately, unilateral US regulation may encourage other countries to take comparable measures, bringing the world closer to coordinated AI regulation, as the Brookings Institution notes. But, as scholars argue, universal cooperation on AI regulation is unlikely.