Report

June 30, 2025 • 10:00 am ET


Second-order impacts of civil artificial intelligence regulation on defense: Why the national security community must engage

By Deborah Cheverton


Executive summary

Civil regulation of artificial intelligence (AI) is hugely complex and evolving quickly, with even otherwise well-aligned countries taking significantly different approaches. At first glance, little in the content of these regulations is directly applicable to the defense and national security community. The most wide-ranging and robust regulatory frameworks have specific carve-outs that exclude military and related use cases. And while governments are not blind to the need for regulations on AI used in national security and defense, these are largely detached from the wider civil AI regulation debate. However, when potential second-order or unintended consequences on defense from civil AI regulation are considered, it becomes clear that the defense and security community cannot afford to think itself special. Carve-out boundaries can, at best, be porous when the technology is inherently dual use in nature. This paper identifies three broad areas in which this porosity might have a negative impact, including 

  • market-shaping civil regulation that could affect the tools available to the defense and national security community; 
  • judicial interpretation of civil regulations that could impact the defense and national security community’s license to operate; and 
  • regulations that could add additional cost or risk to developing and deploying AI systems for defense and national security. 

This paper employs these areas as lenses through which to assess civil regulatory frameworks for AI to identify which initiatives should concern the defense and national security community. These areas are grouped by the level of resources and attention that should be applied while the civil regulatory landscape continues to develop. Private-sector AI firms with dual-use products, industry groups, government offices with national security responsibility for AI, and legislative staff should use this paper as a roadmap to understand the impact of civil AI regulation on their equities and plan to inject their perspectives into the debate. 

Introduction

Whichever side of the debate over AI's promise and perils one tends toward, or however firmly one sits in the gray and murky middle ground, it is clear that artificial intelligence (AI) is an enormously consequential technology in at least two ways. First, the AI revolution will change the way people work, live, and play. Second, the development and adoption of AI will transform the way future wars are fought, particularly in the context of US strategic competition with China. These conclusions, brought to the fore by the seemingly revolutionary advances in generative AI typified by ChatGPT and other large multimodal models, are the natural result of decades of incremental advances in basic science and digital technologies. As public interest in AI and fears of its misuse rise, governments have started to regulate it.

Much like AI itself, the global discussion on how best to regulate AI is complex and fast-changing, with big differences in approach seen even between otherwise well-aligned countries. Since the Organisation for Economic Co-operation and Development (OECD) published the first internationally agreed-upon set of principles for the responsible and trustworthy development of AI in 2019, the organization has identified more than 930 AI-related policy initiatives across 70 jurisdictions. The comparative analysis presented here reveals huge variation across these initiatives, which range from comprehensive legislation like the European Union (EU) AI Act to loosely managed voluntary codes of conduct, like that agreed to between the Biden administration and US technology companies. Most of the initiatives aim to improve the ability of their respective countries to thrive in the AI age; some aim to reduce the capacity of their competitors to do the same. Some take a horizontal approach focusing on specific sectors, use cases, or risk profiles, while others look vertically at specific kinds of AI systems, and some try to do bits of both. Issues around skills, supply chains, training data, and algorithm development receive varying degrees of emphasis. Almost all place some degree of responsibility on developers of AI systems, albeit voluntarily in the loosest arrangements, but knotty problems around accountability and enforcement remain.

The defense and national security community has largely kept itself separate from the ongoing debates around civil AI regulation, focusing instead on internally directed standards and processes. The unspoken assumption seems to be that regulatory carve-outs or special considerations will insulate the community, but that view fails to consider the potential second-order implications of civil regulation, which will be market shaping and will affect a whole swath of areas in which defense has significant equity. Furthermore, the race to develop AI tools is itself now an arena of geopolitical competition with strategic consequences for defense and security, with the ability to intensify rivalries, shift economic and technological advantage, and shape new global norms. Relying on regulatory carve-outs for the development and use of AI in defense is likely to prove ineffective at best, and could seriously limit the ability of the United States and its allies to reap the rewards that AI offers as an enhancement to military capabilities on and off the battlefield. 

This paper provides a comparative analysis of the national and international regulatory initiatives that will likely be important for defense and national security, including initiatives in the United States, United Kingdom (UK), European Union, China, and Singapore, as well as the United Nations (UN), OECD, and the Group of Seven (G7). The paper assesses the potential implications of civil AI regulation on the defense and national security community by grouping them into three buckets. 

  • Be supportive: Areas or initiatives that the community should get behind and support in the short term. 
  • Be proactive: Areas that are still maturing but in which greater input is needed and the impact on the community could be significant in the medium term.  
  • Be watchful: Areas that are still maturing but in which uncertain future impacts could require the community’s input.  

Definitions

To properly survey the international landscape, this paper takes a relatively expansive view of regulation and what constitutes an AI system. 

The former is usually understood by legal professionals to mean government intervention in the private domain or a legal rule that implements such intervention. In this context, that definition would limit consideration to so-called “hard regulation,” largely comprising legislation and rules enforced by some kind of government organization, and would exclude softer forms of regulation such as voluntary codes of conduct and non-enforceable frameworks for risk assessment and classification. For this reason, this paper interprets regulation more loosely to mean the controlling of an activity or process, usually by means of rules, but not necessarily deriving from government action or subject to formal enforcement mechanisms. When in doubt, if a policy or regulation says it is aimed at controlling the development of AI, this paper takes it at its word. 

To define AI, this paper follows the National Artificial Intelligence Initiative Act of 2020, as enacted via the 2021 National Defense Authorization Act, which defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” This definition neatly encompasses the current cutting edge of narrow AI systems based on machine learning. At a later date, it might also be expected to include theorized, but not yet realized, artificial general intelligence or artificial superintelligence systems. This paper deliberately excludes efforts to control the production of advanced microchips as a precursor technology to AI, as there is already significant research and commentary on that issue.

National and supranational regulatory initiatives

United States

Thus far, the US approach to AI regulation can perhaps best be characterized as a patchwork attempting to balance public safety and civil rights concerns with a widespread assumption that US technology companies must be allowed to innovate for the country to succeed. There is consensus that government must play a regulatory role, but a wide range of opinions on what that role should look like.

Overview

Regulatory approach

Overall, the regulatory approach is technology agnostic and focused on specific use cases, especially those relating to civil liberties, data privacy, and consumer protection. 

It is supplemented in some jurisdictions by additional guidelines for models thought to present particularly severe or novel risks, including generative AI and dual-use foundation models.

Scope of regulation

The focus is on outcomes generated by AI systems, with limited consideration of individual models or algorithms, except for dual-use foundation models, which are defined by a compute-power threshold.

At the federal level, heads of government agencies are individually responsible for the use of AI within their organizations, including third-party products and services. This includes training data, with particular focus on the use of data that are safety, rights, or privacy impacting as defined in existing regulation. 

Type of regulation

At the federal level, regulation entails voluntary arrangements with industry and the incorporation of AI-specific issues into existing hard regulation by adapting standards, risk-management, and governance frameworks.

Some states have put in place bespoke hard regulation of AI, including disclosure requirements, but this is generally focused on protecting existing consumer and civil rights regimes.

Target of regulation

At the federal level, voluntary arrangements are aimed at developers and deployers of AI-enabled systems and intended to protect the users of those systems, with particular focus on public services provided by or through federal agencies. Service providers might not be covered due to Section 230 of the Communications Act.

At the state level, some legislatures have placed more specific regulatory requirements on developers and deployers of AI-enabled systems to their populations, but the landscape is uneven and evolving. 

Coverage of defense and national security

Defense and national security are covered by separate regulations at the federal level, with bespoke frameworks for different components of the community. State-level regulation does not yet incorporate sector-specific use cases, but domestic policing, counterterrorism, and the National Guard could fall under future initiatives.  

Federal regulation

At the federal level, AI has been a rare area of bipartisan interest and relative agreement in recent years. The ideas raised in 2019 by then President Donald Trump in Executive Order (EO) 13859 can be traced through subsequent Biden-era initiatives, including voluntary commitments to manage the risks posed by AI, which were agreed upon with leading technology companies in mid-2023. However, other elements of the Biden approach to AI—such as the 2022 Blueprint for an AI Bill of Rights, which focused on potential civil rights harms of AI, and the more recent EO 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence—were unlikely to survive long, with the latter explicitly called out for reversal in the 2024 Republican platform. Trump was able to follow through easily: although EO 14110 was a sweeping document that gave elements of the federal government 110 specific tasks, it was not law, and it was swiftly revoked.

While EO 14110 was revoked, it is not clear what might replace it. It seems likely that the Biden administration’s focus on protecting civil rights as laid out by the Office of Management and Budget (OMB) will become less prominent, but the political calculus is complicated and revising Biden-era AI regulation is not likely to be at the top of the Trump administration’s to-do list. So, the change of administration does not necessarily mean that all initiatives set in motion by Biden will halt. Before EO 14110 was issued, at least a dozen federal agencies had already issued guidance on the use of AI in their jurisdictions and more have since followed suit. These may well survive, especially the more technocratic elements like the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (NIST Framework), which is due to be expanded to cover risks that are novel to, or exacerbated by, the use of generative AI. The NIST Framework, the guidance on secure software development practices related to training data for generative AI and dual-use foundation models, and the plan for global engagement on AI standards are all voluntary tools and generally politically uncontentious.

In Congress, then-Senate Majority Leader Chuck Schumer (D-NY) led the AI charge with a program of educational Insight Forums, which culminated in the Bipartisan Senate AI Working Group’s Roadmap for AI Policy. Some areas of the roadmap support the Biden administration’s approach, most notably support for NIST, but overall it is more concerned with strengthening the US position vis-à-vis international competitors than it is with domestic regulation. No significant legislation on AI is on the horizon, and the roadmap’s level of ambition is likely constrained by dynamics in the House of Representatives, given that Speaker Mike Johnson is on the record arguing against overregulation of AI companies. A rolling set of smaller legislative changes is more likely than an omnibus AI bill, and the result will almost certainly be a regulatory regime more complex and distributed than that in the EU. This can already be seen in the defense sector, where the 2024 National Defense Authorization Act (NDAA) references AI 196 times and includes provisions on public procurement of AI, which were first introduced in the Advancing American AI Act. These provisions require the Department of Defense (DoD) to develop and implement processes for assessing its ethical and responsible use of AI and to conduct a study analyzing vulnerabilities in AI-enabled military applications.

Beyond the 2024 NDAA, the direction of travel in the national security space is less clear. The recently published National Security Memorandum (AI NSM) seemingly aligns with Trump’s worldview. Its stated aims are threefold: first, to maintain US leadership in the development of frontier AI systems; second, to facilitate adoption of those systems by the national security community; and third, to build stable and responsible frameworks for international AI governance. The AI NSM supplements self-imposed regulatory frameworks already published by the DoD and the Office of the Director of National Intelligence. But, unlike those existing frameworks, the AI NSM is almost exclusively concerned with frontier AI models. The AI NSM mandates a whole range of what it calls “deliberate and meaningful changes” to the ways in which the US national security community deals with AI, including significant elevation in power and authority for chief AI officers across the community. However, the vast majority of restrictive provisions are found in the supplementary Framework to Advance AI Governance and Risk Management in National Security, which takes an EU-style, risk-based approach with a short list of prohibited uses (including the nuclear firing chain), a longer list of “high-impact” uses that are permitted with greater oversight, and robust minimum risk-management practices, including pre-deployment risk assessments. Comparability with EU regulation is unlikely to endear the AI NSM to Trump, but it is interesting to note that Biden’s National Security Advisor Jake Sullivan argued that restrictive provisions for AI safety, security, and trustworthiness are key components of expediting delivery of AI capabilities, saying, “preventing misuse and ensuring high standards of accountability will not slow us down; it will actually do the opposite.” Such an efficiency-based argument is more likely to resonate with a Trump administration focused on accelerating AI adoption.

State-level regulation

According to the National Conference of State Legislators, forty-five states introduced AI bills in 2024, and thirty-one adopted resolutions or enacted legislation. These measures tend to focus on consumer rights and data privacy, but with significantly different approaches seen in the three states with the most advanced legislation: California, Utah, and Colorado.

Having previously been a leader in data privacy legislation, the California State Legislature in 2024 passed what would have been the most far-reaching AI bill in the country before it was vetoed by Governor Gavin Newsom. The bill had drawn criticism for potentially imposing arduous, and damaging, barriers to technological development in exactly the place where most US AI is developed. However, Newsom supported a host of other AI-related bills in 2024 that will place significant restrictions and safeguards around the use of AI in California, indicating that the country’s largest internal market will remain a significant force in the domestic regulation of AI.

Colorado and Utah both successfully enacted AI legislation in 2024. Though both are consumer rights protection measures at their core, they take very different approaches. The Utah bill is quite narrowly focused on transparency and consumer protection around the use of generative AI, primarily through disclosure requirements placed on developers and deployers of AI services. The Colorado bill is more broadly aimed at developers and deployers of “high-risk” AI systems, which here means an AI system that is a substantial factor in making any decision that can significantly impact an individual’s legal or economic interests, such as decisions related to employment, housing, credit, and insurance. This essentially gives Colorado a separate anti-discriminatory framework just for AI systems, which imposes reporting, disclosure, and testing obligations with civil penalties for violation. This puts Colorado, not California, at the leading edge of state-level AI regulation, but that does not necessarily mean that other states will take the Colorado approach as precedent. In signing the law, Governor Jared Polis made clear that he had reservations, and a similar law was vetoed in Connecticut. Some states might not progress restrictive AI regulation at all. For example, Virginia Governor Glenn Youngkin recently issued an executive order aiming to increase the use of AI in state government agencies, law enforcement, and education, but there is no indication that legislation will follow anytime soon.

However state-level legislation progresses, it is unlikely to have any direct impact on military or national security users. There is also a risk that public fears around AI could be stoked and lead to more stringent state-level regulation, especially if AI is seen to “go wrong,” leading to tangible examples of public harm. As discussed below in the context of the European Union, the use of AI in law enforcement is among the most controversial use cases. This can only be more relevant in the nation with some of the most militarized police forces in the world and a National Guard that can also serve a domestic law-enforcement role.

International efforts

The United States has been active in a number of international initiatives relating to AI regulation, including through the UN, NATO, and the G7 Hiroshima process, which are covered later in this paper. The final element of the Biden administration’s approach to AI regulation, and the one that might be the least likely to carry through into 2025, was the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The declaration is a set of non-legally binding guidelines that aims to promote responsible behavior and demonstrate US leadership in the international arena. International norms are notoriously hard to agree upon and even harder to enforce. Unsurprisingly, the declaration makes no effort to restrict the kinds of AI systems that signatories can develop in their pursuit of national defense. According to the DoD, forty-seven nations have endorsed the declaration, though China, Russia, and Iran are notably not among that number.

China

The Chinese approach to AI regulation is relatively straightforward compared to that of the United States, with rules issued in a top-down, center-outward manner in keeping with the general mode of Chinese government.

Overview

Regulatory approach

China has a vertical, technology-driven approach with some horizontal, use-case, and sectoral elements. 

It is focused on general-purpose AI, with some additional regulation for specific use cases.

Scope of regulation

The primary unit of regulation is AI algorithms, with specific restrictions on the use of training data in some cases. 

Type of regulation

China uses hard regulation with a strong compliance regime and significant room for politically interested interpretation in enforcement.

Target of regulation

Regulation is narrowly targeted to privately owned service providers operating AI systems within China and those entities providing AI-enabled services to the Chinese population. 

Coverage of defense and national security

These areas are not covered and are unlikely to be covered in the future.

Domestic regulation

Since 2018, the Chinese government has issued four administrative provisions intended to regulate delivery of AI capabilities to the Chinese public, most notably the so-called Generative AI Regulation, which came into force in August 2023. This, and preceding provisions on the use of algorithmic recommendations in service provision and the more general use of deep synthesis tools, is focused on regulating algorithms rather than specific use cases. This vertical approach to regulation is also iterative, allowing Chinese regulators to build skills and toolsets that can adapt as the technology develops. A more comprehensive AI law is expected at some point but, at the time of writing, only a scholars’ draft released by the Chinese Academy of Social Sciences (CASS) gives outside observers insight into how the Chinese government is thinking about future AI regulation.

The draft proposes the formation of a new government agency to coordinate and oversee AI in public services. Importantly, and unlike in the United States, the use of AI by the Chinese government itself is not covered by any proposed or existing regulations, including for military and other national security purposes. This approach will likely not change, as it serves the Chinese government’s primary goal, which is to preserve its central control over the flow of information to maintain internal political and social stability. The primary regulatory tool proposed by the scholars’ draft is a reporting and licensing regime in which items that appear on a negative list would require a government-approved permit for development and deployment. This approach is a way for the Chinese government to manage safety and other risks while still encouraging innovation. The draft is not clear about what items would be on the list, but foundational models are explicitly referenced. In addition to an emerging licensing regime and ideas about the role of a bespoke regulator, Chinese regulations have reached interim conclusions in areas in which the United States and others are still in debate. For example, the Generative AI Regulation explicitly places liability for AI systems on the service providers that make them available to the Chinese public.

Enforcement is another area in which the Chinese government is signaling a different approach. As one commentator notes, “Chinese regulation is stocked with provisions that are straight off the wish list for AI to support supposed democratic values [. . .] yet the regulation is clearly intended to strengthen China’s authoritarian system of government.” Analysis from the East Asia Forum suggests that China is continuing to refine how it balances innovation and control in its approach to AI governance. If this is true, then the vague language in Chinese AI regulations, which would give Chinese regulators huge freedom in where and how they make enforcement decisions, could be precisely the point.

International efforts

As noted above, China has not endorsed the United States’ Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, but China is active on the international AI stage in other ways. At a 2018 meeting relating to the United Nations Convention on Certain Conventional Weapons, the Chinese representative presented a position paper proposing a ban on lethal autonomous weapons (LAWS). But Western observers doubt the motives behind the proposal, with one commentator saying it included “such a bizarrely narrow definition of lethal autonomous weapons that such a ban would appear to be both unnecessary and useless.” China has continued calling for a ban on LAWS in UN forums and other public spaces, but these calls are usually seen in the West as efforts to appear as a positive international actor while maintaining a position of strategic ambiguity—there is little faith that the Chinese government will practice what it preaches. This is most clearly seen in reactions to the Global Security Initiative (GSI) concept paper published in February 2023. Reacting to this proposal, which China presented as aspiring for a new and more inclusive global security architecture, the US-China Economic and Security Review Commission (USCC) responded with scorn, saying, “the GSI’s core objective appears to be the degradation of U.S.-led alliances and partnerships under the guise of a set of principles full of platitudes but empty on substantive steps for contributing to global peace.”

Outside of the military sphere, Chinese involvement in international forums attracts similar critique. In the lead-up to the United Kingdom’s AI Safety Summit, the question of whether China would be invited, and then whether Beijing’s representatives would attend, caused controversy and criticism. However, that Beijing is willing to collaborate internationally in areas where it sees benefit does not mean that Beijing will toe the Western line. In fact, Western-led international regulation might not even be a particular concern for China. Shortly after the AI Safety Summit, Chinese President Xi Jinping announced a new Global AI Governance Initiative. As with the GSI, this effort has been met with skepticism in the United States, but there is a real risk that China’s approach could split international regulation into two spheres. This risk is especially salient because of the initiative’s potential appeal to the Global South. More concerningly, there is some evidence that China is pursuing a so-called proliferation-first approach, which involves pushing its AI technology into developing countries. If China manages to embed itself in the global AI infrastructure in the way that it did with fifth-generation (5G) technology, then any attempt to regulate international standards might come too late—those standards will already be Chinese.

European Union

The European Union moved early into the AI regulation game. In August 2024, it became the first legislative body globally to issue legally binding rules around the development, deployment, and use of AI. Originally envisaged as a consumer protection law, the AI Act in its early drafts covered AI systems only as they are used in certain narrowly limited tasks—a horizontal approach. However, the explosion of interest in foundational models following the release of ChatGPT in late 2022 led to an expansion in the law’s scope to include these kinds of models regardless of how and by whom they are used.

Overview

Regulatory approach

The approach is horizontal, with a vertical element for general-purpose AI systems. 

Specific use cases are based on risk assessment. 

Scope of regulation

The scope is widest for high-risk and general-purpose AI systems. This includes data, algorithms, applications, and content provenance. 

Hardware is not covered, but general-purpose AI system elements use a compute-power threshold definition. 

Type of regulation

The EU uses hard regulation with high financial penalties for noncompliance. 

A full compliance and enforcement regime is still in development but will incorporate the EU AI Office and member states’ institutions. 

Target of regulation

The regulation targets AI developers, with more limited responsibilities placed on deployers of high-risk systems. 

Coverage of defense and national security

Defense is specifically excluded on institutional competence grounds, but domestic policing use cases are covered, with some falling into the unacceptable and high-risk groups.

Internal regulation

The AI Act is an EU regulation, the strongest form of legislation that the EU can produce, and is binding and directly applicable in all member states. The AI Act takes a risk-based approach whereby AI systems are regulated by how they are used, based on the potential harm that use could cause to an EU citizen’s health, safety, and fundamental rights. There are four categories of risk: unacceptable, high, limited, and minimal/none. Systems in the limited and minimal categories are subject to obligations around attribution and informed consent, i.e., people must know they are talking to a chatbot or viewing an AI-generated image. At the other end of the scale, AI systems that fall within the unacceptable risk category are completely prohibited. This includes any AI system used for social scoring, unsupervised criminal profiling, or workplace monitoring; systems that exploit vulnerabilities or impair a person’s ability to make informed decisions via manipulation; biometric categorization of sensitive characteristics; untargeted use of facial recognition; and the use of real-time remote biometric identification systems in public spaces, except for narrowly defined police use cases.

High-risk systems are subject to the most significant regulation in the AI Act and are defined as such by two mechanisms. First, AI systems used as a safety component or within a kind of product already subject to EU safety standards are automatically high risk. Second, AI systems are considered high risk if they are used in the following areas: biometrics; critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to essential services; law enforcement; migration, asylum, and border-control management; and administration of justice and democratic processes. The majority of obligations fall on developers of high-risk AI systems, with fewer obligations placed on deployers of those systems.

As mentioned, so-called general-purpose AI (GPAI) is covered separately in the AI Act. This addition was a significant bone of contention in the trilogue negotiation, as some member states were concerned that vertical regulation of specific kinds of AI would stifle innovation in the EU. As a compromise, though all developers of GPAI must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary about the content used for training, the more stringent obligations akin to those imposed on developers of high-risk systems are reserved for GPAI models that pose “systemic risk.” Open-license developers must comply with these restrictions only if their models fall into this last category.

It is not yet clear exactly how the new European AI Office will coordinate compliance, implementation, and enforcement. As with all new EU regulation, interpretation through national and EU courts will be critical. One startling feature of the AI Act is the leeway it appears to give the technology industry by allowing developers to self-determine their AI system’s risk category, though the huge financial penalties faced by those who violate the act might serve as a sufficient deterrent to bad actors.

The AI Act does not, and could never, apply directly to military or defense applications of AI because the European Union does not have authority in these areas. As expected, the text includes a general exemption for military, defense, and national security uses, but exemptions for law enforcement are far more complicated and were some of the most controversial sections in final negotiations. Loopholes allowing police to use AI in criminal profiling, if it is part of a larger, human-led toolkit, and the use of AI facial recognition on previously recorded video footage have caused uproar and seem likely candidates for litigation, potentially placing increased costs and uncertainty on developers working in these areas. This ambiguity could have knock-on effects, given the increasing overlap between military technologies and those used by police and other national security actors, especially in counterterrorism. 

International efforts

The official purpose of the AI Act is to set consistent standards across member states in order to ensure that the single market can function effectively, but some believe that this will lead the EU to effectively become the world’s AI police. Part of this is the simple fact that it will be a lot easier for other jurisdictions to copy and paste a regulatory model that has already been proven, but concern comes from the way that the General Data Protection Regulation (GDPR) has had huge influence outside of the territorial boundaries of the EU by placing a high cost of compliance on companies that want to do business in or with the world’s second-largest economic market. Similarly, EU regulations on the kinds of charging ports that can be used for small electronic devices have resulted in changes well beyond the bloc’s borders. However, more recently, Apple has decided to hold back on releasing AI features to users in the EU, indicating that cross-border influence can run both ways.

United Kingdom

Since 2022, the UK government has described its approach to AI regulation as innovation-friendly and flexible, designed to service the potentially contradictory goals of encouraging economic growth through innovation while also safeguarding fundamental values and the safety of the British public. This approach was developed under successive Conservative governments but is yet to change radically under the Labour government as it attempts to balance tensions between business-friendly elements of the party and more traditional labor activists and trade unionists.

Overview

Regulatory approach

The approach is horizontal and sectoral for now, with some vertical elements possible for general-purpose AI systems. 

Scope of regulation

The scope is unclear. Guidance to regulators refers primarily to AI systems with some consideration of supply chain components. It will likely vary by sector. 

Type of regulation

There is hard regulation through existing sectoral regulators and their compliance and enforcement regimes, with the possibility of more comprehensive hard regulation in the future. 

Target of regulation

The target varies by sector. Guidance to existing regulators generally focuses on AI developers and deployers. 

Coverage of defense and national security

Bespoke military and national security frameworks sit alongside a broader government framework. 

Domestic regulation

The UK’s approach to AI regulation was first laid out in June 2022, followed swiftly by a National AI Strategy that December and a subsequent policy paper in August 2023, which set out the mechanisms and structures of the regulatory approach in more detail. However, this flurry of policy publications has not resulted in any new laws. During the 2024 general election campaign, members of the new Labour government initially promised to toughen AI regulation, including by forcing AI companies to release test data and conduct safety tests with independent oversight, before taking a more conciliatory tone with the technology industry and promising to speed up the regulatory process to encourage innovation. Though its legislative agenda initially included appropriate legislation for AI by the end of 2024, this has not been realized. The prevailing view seems to be that, with some specific exceptions, existing regulators are best placed to understand the needs and peculiarities of their sectors.

Some regulators are already taking steps to incorporate AI into their frameworks. The Financial Conduct Authority’s Regulatory Sandbox allows companies to test AI-enabled products and services in a controlled environment and, by doing so, to identify consumer protection safeguards that might be necessary. The Digital Regulation Cooperation Forum (DRCF) recently launched its AI and Digital Hub, a twelve-month pilot program to make it easier for companies to launch new AI products and services in a safe and compliant manner, and to reduce the time it takes to bring those products and services to market.

Though the overall approach is sectoral, there is some central authority in the UK approach. The Office for AI has no regulatory role but is expected to provide certain central functions required to monitor and evaluate the effectiveness of the regulatory framework. Another centrally run AI authority, the AI Safety Institute (AISI), breaks from the sectoral approach and instead focuses on “advanced AI,” which includes GPAI systems as well as narrow AI models that have the potential to cause harm in specific use cases. While AISI is not a regulator, several large technology companies, including OpenAI, Google, and Microsoft, have signed voluntary agreements to allow AISI to test these firms’ most advanced AI models and make changes to them if they find safety concerns. However, now that AISI has found significant flaws in those same models, both AISI and the companies have stepped back from that position, demonstrating the inherent limitations of voluntary regimes. In recognition of this dilemma, the forthcoming legislation referenced above is expected to make existing voluntary agreements between companies and the government legally binding.

The most significant challenge to the current sector-based approach is likely to come from the UK Competition and Markets Authority (CMA). Having previously taken the view that flexible guiding principles would be sufficient to preserve competition and consumer protection, the CMA is now concerned that a small number of technology companies increasingly have the ability and incentive to engage in market-distorting behavior in their own interests. The CMA has also proposed prioritizing GPAI under new regulatory powers provided by the Digital Markets, Competition and Consumers Bill (DMCC). A decision to do so could have a huge impact on the AI industry, as the DMCC significantly sharpens the CMA’s teeth, giving it the power to impose fines for violation of up to 10 percent of global turnover without involvement of a judge, as well as smaller fines for senior individuals within corporate entities and consumer compensation.

As in the United States, it is expected that any UK legislative or statutory effort to expand the regulatory power of government over AI will have some kind of exemption for national security usage. But, as in the United States, it does not follow that the national security community will be untouched by regulation. The UK Ministry of Defence (UK MOD) published its own AI strategy in June 2022, accompanied by a policy statement on the ethical principles that the UK armed forces will follow in developing and deploying AI-enabled capabilities. Both documents recognize that the use of AI in the military sphere comes with a specific set of risks and concerns that are potentially more acute than those in other sectors. These documents also stress that the use of any technology by the armed forces and their supporting organizations is already subject to a robust regime of compliance for safety, where the Defence Safety Agency has enforcement authorities; and legality, where existing obligations under UK and international human rights law and the law of armed conflict form an irreducible baseline.  

The UK’s intelligence community does not have a director of national intelligence to issue community-wide guidance on AI, but the Government Communications Headquarters (GCHQ) offers some insight into how the relevant agencies are thinking about the issue. Published in 2021, GCHQ’s paper on the Ethics of Artificial Intelligence predates the current regulatory discussion but slots neatly into the sectoral approach. In the paper, GCHQ points to existing legislative provisions that ensure its work complies with the law. Most relevant for discussion of AI is the role of the Technology Advisory Panel (TAP), which sits within the Investigatory Powers Commissioner’s Office and advises on the impact of new technologies in covert investigations. The implicit argument underpinning both the UK MOD and GCHQ approaches is that specific regulations or restrictions on the use of AI in national security are needed only insofar as AI presents risks that are not captured by existing processes and procedures. Ethical principles, like the five to which the UK MOD will hold itself, are intended to frame and guide those risk assessments at all stages of the capability development and deployment process, but they are not in themselves regulatory. As civil regulation of AI develops, it will be necessary to continue testing the assumption that the existing national security frameworks are capable of addressing AI risks and to change them as needed, including to ensure that they are sufficient to satisfy a supply base, international community, and public audience that might expect different standards. 

International efforts

In addition to active participation in multilateral discussions through the UN, OECD, and the G7, the United Kingdom has held itself out to be a global leader in AI safety. The inaugural Global AI Safety Summit held in late 2023 delivered the Bletchley Declaration, a statement signed by twenty-eight countries in which they agreed to work together to ensure “human-centric, trustworthy and responsible AI that is safe” and to “promote cooperation to address the broad range of risks posed by AI.” The Bletchley Declaration has been criticized for its focus on the supposed existential risks of GPAI at the expense of more immediate safety concerns and for its lack of any specific rules or roadmap. But it gives an indication of the areas of AI regulation in which it might be possible to find common ground, which, in turn, might limit the risk of entirely divergent regulatory regimes.

Singapore

With a strong digital economy and a global reputation as pro-business and pro-innovation, Singapore is unsurprisingly approaching AI regulation along the same middle path between encouraging growth and preventing harms as the United Kingdom. Unlike the United Kingdom, Singapore has carefully maintained its position as a neutral player between the United States and China, and this positioning is reflected in its strategy documents and public statements.

Overview

Regulatory approach

The approach is horizontal and sectoral for now, with a future vertical element for general-purpose AI systems. 

Scope of regulation

The proposed Model AI Governance Framework for Generative AI includes data, algorithms, applications, and content provenance. 

In practice, it will vary by sector. 

Type of regulation

It is hard regulation through existing sectoral regulators and their compliance and enforcement regimes. 

Target of regulation

The targets include developers, application deployers, and service providers/hosting platforms. 

Responsibility is allocated based on the level of control and differentiated by the stage in the development and deployment cycle. 

Coverage of defense and national security

No publicly available framework. 

Domestic regulation

Government activity in the area is driven by the second National AI Strategy (NAIS 2.0), which is partly a response to the increasing concern over the safety and security of AI, especially GPAI. NAIS 2.0 clearly recognizes that there are security risks associated with AI, but it places relatively little emphasis on threats to national security. According to NAIS 2.0, the government of Singapore wants to retain agility in its approach to regulating AI, a position backed by public statements by senior government figures. Singapore’s approach to AI regulation is sectoral and based, at least for the time being, on existing regulatory frameworks. Singapore’s regulatory bodies have been actively incorporating AI into their toolkits, most notably through the Model AI Governance Framework jointly issued by the information communications and data-protection regulators in 2019 and updated in 2020. The framework is aimed at private-sector organizations developing or deploying AI in their businesses. It provides guidance on key ethical and governance issues and is supported by a practical Implementation and Self-Assessment Guide and Compendium of Use Cases to make it easier for companies to map the sector- and technology-agnostic framework onto their organizations. Singaporean regulators have begun to issue sector-specific guidelines for AI, including the advisory guideline on the use of personal data for AI systems that provide recommendations, predictions, and decisions. Like the wider framework, these are non-binding and do not expand the enforcement powers of existing regulators. 

Singapore has leaned heavily on technology industry partnerships in developing other elements of its regulatory toolkit, especially its flagship AI Verify product. AI Verify is a voluntary governance testing framework and toolkit that aims to help companies objectively verify their systems against a set of global AI governance and ethical frameworks so that participating firms can demonstrate to users that the companies have implemented AI responsibly. AI Verify works within a company’s own digital enterprise environment and, as a self-testing and self-reporting toolkit, it has no enforcement power. However, the government of Singapore hopes that, by helping to identify commonalities across various global AI governance frameworks and regulations, it can build a baseline for future international regulations. One critical limitation of AI Verify is that it cannot test GPAI models. The AI Verify Foundation, which oversees AI Verify, recognized this limitation and recently conducted a public consultation to expand the 2020 Model AI Governance Framework to explicitly cover generative AI. The content of the final product is not yet known, and there is no indication that the government intends to translate this new framework into a bespoke AI law, but the consultation document gives important clues about how Singapore is thinking about issues such as accountability; data, including issues of copyright; testing and assurance; and content provenance.

As mentioned, the government of Singapore places relatively little emphasis on national security in its AI policy documents, but that does not mean it is not interested or investing in AI for military and wider national security purposes. In 2022, Singapore became the first country to establish a separate military service to address threats in the digital domain. Unlike in the United States, where cyber and other digital specialties are spread across the traditional services, the Digital and Intelligence Service (DIS) brings together the whole domain, from command, control, communications, and cyber operations to implementing strategies for cloud computing and AI. The DIS also has specific authority to raise, train, and sustain digital forces. Within the DIS, the Digital Ops-Tech Centre is responsible for developing AI technologies, but publicly available information about it is sparse. Singapore has deployed AI-enabled technologies through the DIS on exercises, and the Defence Science and Technology Agency (DSTA) has previously stated that it wants to integrate AI into operational platforms, weapons, and back-office functions, but the Singaporean Armed Forces have not published any official position on the use of AI in military systems.

International efforts

Singapore is increasingly taking on a regional leadership role on AI regulation. As chair of the 2024 Association of South-East Asian Nations (ASEAN) Digital Ministers’ Meeting, Singapore was instrumental in developing the ASEAN Guide on AI Governance and Ethics. The guide aims to establish common principles and best practices for trustworthy AI in the region but does not attempt to force a common regulatory approach. In part, this is because the ASEAN region is so politically diverse that it would be almost impossible to reach agreement on hot-button issues like censorship, but also because member countries are at wildly different levels of digital maturity. At the headline level, the guide bears significant similarity to US, EU, and UK policies, in that it takes a risk-based approach to governance, but the guide makes concessions to national cultures in a way that those other approaches do not. It is possible that some ASEAN nations might move toward a more stringent EU-style regulatory framework in the future. But, as the most mature AI power in the region, Singapore and its pro-innovation approach will likely remain influential for now.

International regulatory initiatives

At the international level, four key organizations have taken steps into the AI regulation waters—the UN, OECD, the G7 through its Hiroshima Process, and NATO. 

OECD

The OECD published its AI Principles in 2019, and they have since been agreed upon by forty-six countries, including all thirty-eight OECD member states. Though not legally binding, the OECD principles have been extremely influential, and it is possible to trace the five broad topic areas through all of the national and supranational approaches discussed previously. The OECD also provides the secretariat for the Global Partnership on AI, an international initiative promoting responsible AI use through applied co-operation projects, pilots, and experiments. The partnership covers a huge range of activity through its four working groups, and, though defense and national security do not feature explicitly, there are initiatives that could be influential in other forums that consider those areas. For example, the Responsible AI working group is developing technical guidelines for implementation of high-level principles that will likely influence the UN and the G7, and the Data Governance working group is producing guidelines on co-generated data and intellectual-property considerations that could have an impact on the legal use of data for training algorithms. Beyond these specific areas of interest, the OECD will likely remain influential in the wider AI regulation debate, not least because it has built a wide network of technical and policy experts to draw from. This value was seen in practice when the G7 asked the Global Partnership on AI to assist in developing the International Guiding Principles on AI and a voluntary Code of Conduct for AI developers that came out of the Hiroshima Process.

Regulatory approach

The approach is horizontal and risk based.  

Scope of regulation

Regulation applies to AI systems and associated knowledge. In theory, this scope covers the whole stack. 

There is some specific consideration of algorithms and data through the Global Partnership on AI. 

Type of regulation

Regulation is soft, with no compliance regime or enforcement mechanism. 

Target of regulation

“AI actors” include anyone or any organization that plays an active role in the AI system life cycle. 

Coverage of defense and national security

None.  

G7

The G7 established the Hiroshima AI Process in 2023 to promote guardrails for GPAI systems at a global level. The Comprehensive Policy Framework agreed to by the G7 digital and technology ministers later that year includes a set of International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for GPAI developers. As with the OECD AI Principles on which they are largely based, neither of these documents is legally binding. However, by choosing to focus on practical tools to support development of trustworthy AI, the Hiroshima Process will act as a benchmark for countries developing their own regulatory frameworks. There is some evidence that this is already happening and a suggestion that the EU might adopt a matured version of the Hiroshima Code of Conduct as part of its AI Act compliance regime. That will require input from the technology sector, including current and future suppliers of AI for defense and national security.  

The G7 is also taking a role in other areas that might impact AI regulation, most notably technical standards and international data flows. On the former, the G7 could theoretically play a coordination role in ensuring that disparate national standards do not lead to an incoherent regulatory landscape that is time consuming and expensive for the industry to navigate. However, diverging positions even within the G7 might make that difficult. The picture emerging in the international data flow space is only a little more optimistic. The G7 has established a new Institutional Arrangement for Partnership (IAP) to support its Data Free Flow with Trust (DFFT) initiative, but it has not yet produced any tangible outcomes. The EU-US Data Privacy Framework has made some progress in reducing the compliance burden associated with cross-border transfer of data through the EU-US Data Bridge and its UK-US extension, but there is still a large risk that the Court of Justice of the European Union will strike it down over concerns that it violates GDPR.

Regulatory approach

The approach is vertical. The Hiroshima Code of Conduct applies only to general-purpose AI. 

Scope of regulation

The scope is GPAI systems, with significant focus on data, particularly data sharing and cross-border transfer. 

Type of regulation

Regulation is soft, with no compliance regime or enforcement mechanism. 

Target of regulation

Developers of GPAI are the only target. 

Coverage of defense and national security

None.  

United Nations

The UN has been cautious in its approach to AI regulation. The UN Educational, Scientific, and Cultural Organization (UNESCO) issued its global standard of AI ethics in 2021 and established the AI Ethics and Governance Lab to produce tools to help member states assess their relative preparedness to implement AI ethically and responsibly, but these largely drew on existing frameworks rather than adding anything new. Interest in the area ballooned following the release of ChatGPT, such that Secretary-General António Guterres convened an AI Advisory Body in late 2023 to provide guidance on future steps for global AI governance. The body’s final report, published in late 2024 and titled “Governing AI for Humanity,” did not recommend a single governance model, but it proposed establishing a regular AI policy dialogue within the UN to be supported by an international scientific panel of AI experts. Specific areas of concern include the need for consistent global standards for AI and data, and mechanisms to facilitate inclusion of the Global South and other currently underrepresented groups in the international dialogue on AI. A small AI office will be established within the UN Secretariat to coordinate these efforts.  

At the political level, the General Assembly has adopted two resolutions on AI. The first, Resolution 78/L49 on the promotion of “safe, secure and trustworthy” artificial intelligence systems, was drafted by the United States but drew co-sponsorship support from a wide range of countries, including some in the Global South. The second, Resolution 78/L86, drafted by China and supported by the United States, calls on developed countries to help developing countries strengthen their AI capacity building and enhance their representation and voice in global AI governance. Adoption of both resolutions by consensus could indicate global support for Chinese and US leadership on AI regulation, but the depth of that support remains unclear. Notably, following the adoption of Resolution 78/L86, two separate groups were established, one led by the United States and Morocco, and the other by China and Zambia.

There is also disagreement over the role of the UN Security Council (UNSC) in addressing AI-related threats. Resolution 78/L49 does not apply to the military domain but, when introducing the draft, the US permanent representative to the UN suggested that it might serve as a model for dialogue in that area, albeit not at the UNSC. The UNSC held its first formal meeting focused on AI in July 2023. In his remarks, the secretary-general noted that both military and non-military applications of AI could have implications for global security and welcomed the idea of a new UN body to govern AI, based on the model of the International Atomic Energy Agency. The council has since expressed its commitment to consider the international security implications of scientific advances more systematically, but some members have raised concerns about framing the issue narrowly within a security context. At the time of writing, this remains a live issue.

Regulatory approach

The approach is horizontal with a focus on the Sustainable Development Goals.

Scope of regulation

AI systems are broadly defined, with particular focus on data governance and avoiding biased data. 

Type of regulation

Regulation is soft, with no compliance regime or enforcement mechanism. 

Target of regulation

Resolutions refer to design, development, deployment, and use of AI systems. 

Coverage of defense and national security

Resolutions exclude military use, but there have been some discussions in the UNSC. 

NATO

NATO is not in the business of civil regulation, but it plays a major role in military standards and is included here for completeness. 

The Alliance formally adopted its first AI strategy in 2021, well before the advent of ChatGPT and other forms of GPAI. At that time, it was not clear how NATO intended to overcome different approaches to governance and regulatory issues among allies, nor was it obvious which of the many varied NATO bodies with an interest in AI would take the lead. The regulatory issue has, in some ways, become more settled with the advent of the EU’s AI Act, in that the gaps between European and non-European allies are clearer. Within NATO itself, the establishment of the Data and Artificial Intelligence Review Board (DARB) under the auspices of the assistant secretary-general for innovation, hybrid, and cyber places leadership of the AI agenda firmly within NATO Headquarters rather than NATO Allied Command Transformation. One of the DARB’s first priorities is to develop a responsible AI certification standard to ensure that new AI projects meet the principles of responsible use set out in the 2021 AI Strategy. Though this certification standard has not yet been made public, NATO is clearly making some progress in building consensus across allies. However, NATO is not a regulatory body and has no enforcement role, so it will require member states to self-police or transfer that enforcement role to a third-party organization.

NATO requires consensus to make decisions and, with thirty-two members, consensus building is neither straightforward nor quick, especially on contentious issues. Technical standards might be easier for members to agree on than complex, normative issues, and they are an area in which NATO has extensive experience. The NATO Standardization Office (NSO) is often overlooked in discussions of the Alliance’s successes, but its work to develop, agree to, and implement standards across all aspects of the Alliance’s operations and capability development has been critical. As the largest military standardization body in the world, the NSO is uniquely placed to determine which civilian AI standards apply to military and national security use cases and to identify areas where niche standards are needed.

Regulatory approach

The approach is horizontal. AI principles apply to all types of AI. 

Scope of regulation

AI systems are broadly defined. 

Type of regulation

Regulation is soft. NATO has no enforcement mechanism, but interoperability is a key consideration for member states and might drive compliance. 

Target of regulation

The target is NATO member states developing and deploying AI within their militaries.

Coverage of defense and national security

The regulation is exclusively about this arena. 

Analysis

The regulatory landscape described above is complex and constantly evolving, with significant differences in approach even between otherwise well-aligned countries. However, by breaking the various approaches into their component parts, it is possible to identify some common themes.

Common themes

Regulatory approach

The general preference seems to be for a sectoral or use-case-based approach, framed as a pragmatic attempt to balance the competing requirements of promoting innovation and protecting users. However, there is increasing concern that some kinds of AI, notably large language models and other forms of GPAI, should be regulated with a vertical, technology-based approach. China looks like an outlier here, in that its approach is vertical with horizontal elements rather than the other way around, but in practice the same regulatory ground could be covered.

Scope

There is little consensus around which elements of AI should be regulated. In cases where the framework refers simply to “AI systems” without saying explicitly whether that includes training data, specific algorithms, packaged applications, etc., it is possible to infer the intended scope through references in implementation guidance and other documentation. This approach makes sense in jurisdictions where the regulatory approach relies on existing sectoral regulators with varying focus. For example, a regulator concerned with the delivery of public utilities might be concerned with the applications deployed by the utilities providers, whereas a financial services regulator might need to look deeper into the stack to consider the underlying data and algorithms. China is again the outlier, as its regulation is specifically focused on the algorithmic level, with some coverage of training data in specific cases. 

Type of regulation

The EU and China are, so far, the only jurisdictions to have put in place hard regulations specifically addressing AI. Most other frameworks rely on existing sectoral regulators incorporating AI into their work, voluntary guidelines and best practices, or a combination of both. It is possible that the EU’s AI Act will become a model as countries increasingly turn to a legislative approach, but practical concerns and lengthy timelines mean that most compliance and enforcement regimes will remain fragmented for now. 

Target group

Almost all of the frameworks place some degree of responsibility on developers of AI systems, albeit voluntarily in the loosest arrangements. Deployers of AI systems and the service providers that make them available are less widely included. There is some suggestion that assignment of responsibility might vary across the AI life cycle, though what this means in practice is unclear, and only Singapore suggests differentiating between ex ante and ex post responsibility. Even in cases in which responsibility is clearly ascribed, it is likely that questions of legal liability for misuse or harm will take time to be worked out through the relevant judicial system. China is again an outlier here, but a more comprehensive AI law could include developers and deployers. 

Impact on defense and national security

At first glance, little in the civil regulatory frameworks discussed above relates directly to the defense and national security community, but there are at least three broad areas in which that community might be subject to second-order or unintended consequences.

  • Market-shaping civil regulations could affect the tools available to the defense and national security community. This area could include direct market interventions, such as modifications to antitrust law that might force incumbent suppliers to break up their companies, or second-order implications of interventions that affect the sorts of skills available in the market, the sorts of problems that skilled AI workers want to work on, and the data available to them. 
  • Judicial interpretation of civil regulations could impact the defense and national security community’s license to operate, either by placing direct limitations on the use of AI in specific use cases, such as domestic counterterrorism, or more indirectly through concerns around legal liability. 
  • Regulations could add hidden cost or risk to the development and deployment of AI systems for defense and national security use. This area could include complex compliance regimes or fragmented technical standards that must be paid for somewhere in the value chain, or increased security risks associated with licensing or reporting of dual-use models. 

By using these areas as lenses through which to assess the tools and approaches found within civil regulatory frameworks, it is possible to begin picking out specific areas and initiatives of concern to the defense and national security community. The tables below make an initial assessment of the potential implications of civil regulation of AI on the defense and national security community by grouping them into three buckets. 

  • Be supportive: Areas or initiatives that the community should get behind and support in the short term. 
  • Be proactive: Areas that are still maturing but in which greater input is needed and the impact on the community could be significant in the medium term. 
  • Be watchful: Areas that are still maturing but in which uncertain future impacts could require the community’s input. 

The content of these tables is by no means comprehensive, but it gives an indication of areas in which the defense and national security community might wish to focus its resources and attention while the civil regulatory landscape continues to develop.

Be supportive

Areas or initiatives that the community should get behind and support in the short term

Technical standards: Defense and national security technical standards should, as far as possible, align with civil-sector standards to minimize the cost of compliance, maximize interoperability, and allow efficient adoption of civil solutions to specialist problems. 

Action on: chief information officers, chief AI officers, standard-setting bodies, and AI developers in the public and private sectors. 

Risk-assessment tools: Adopting tools and best practices developed in the civil sector could save time and money that could be better spent on advancing capability or readiness. 

Action on: chief information officers, chief AI officers, risk-management professionals including auditors, system integrators, and AI developers in the public and private sectors. 

Safety and assurance tools: As above, adopting tools and best practices developed in the civil sector could be more efficient, but there could also be reputational and operational benefits to equivalency in some areas like aviation, in which military and civil users of AI systems might need to share airspace. 

Action on: chief information officers, chief AI officers, compliance officers, and domain safety specialists. 

Be proactive

Areas that are still maturing but in which greater input is needed and the impact on the community could be significant in the medium term

Regulation of adjacent sectors and use cases: Restrictions on the use of AI in domestic security and policing could limit development of capabilities of use to the defense and national security community or increase the cost of capabilities by limiting economies of scale. This is especially concerning in technically complex areas such as counterterrorism, covert surveillance and monitoring, and pattern detection for intelligence purposes. 

Action on: chief information officers, chief AI officers, legal and operational policy advisers, and AI developers in the public and private sectors. 

Data sharing and transfer: Regulatory approaches that impact, in policy or practical terms, the ability of the defense and national security community to share data with allies across national borders could limit collaborative capability development and deployment or impose additional costs on it.

Action on: chief information officers, chief AI officers, data-management specialists, and export-control policymakers.

Specialty regulatory provisions for generative AI: Regulations placed on the general-purpose AI systems that underpin sector-specific applications could impact the capabilities available to defense and national security users, even if those use cases are themselves technically exempt from such restrictions. 

Action on: chief information officers, chief AI officers, standard-setting bodies, legal and operational policy advisers, and AI developers in the public and private sectors. 

Be watchful

Areas that are still maturing but in which uncertain future impacts could require the community’s input

Licensing and registration databases: Such databases could easily exclude algorithms and models developed specifically for defense or national security purposes. However, registering the open-source or proprietary models on which those tools are based could still pose a security risk if malign actors accessed the registry. 

Action on: chief information officers, chief AI officers, risk-management professionals, and counterintelligence and security policymakers. 

Data protection, privacy, and copyright regulations: AI systems do not work without data. Domestic regulation of privacy, security, and rights-impacting data, as well as interpretations of fair use in existing copyright law, could limit access to training data for future AI systems. 

Action on: chief information officers, chief AI officers, privacy and data-protection professionals, and AI developers in the public and private sectors. 

Market-shaping regulation: The AI industry, especially at the cutting edge of general-purpose AI, is heavily dominated by a few incumbents, most of which operate internationally. Changes to the substance or interpretation of domestic antitrust regulations could impact the supply base available to the defense and national security community. 

Action on: chief information officers, chief AI officers, commercial policymakers, and legal advisers. 

Legal liability: Like any other capability, AI systems used by the military and national security community in an operational context are covered by the law of armed conflict and broader international humanitarian law, not domestic legislation. However, in nonoperational contexts, judicial interpretation of civil laws could particularly impact questions of criminal, contractual, or other liability.

Action on: chief information officers, chief AI officers, and legal and operational policy advisers. 

Conclusion

The AI regulatory landscape is complex and fast-changing, and likely to remain so for some time. While most of the civil regulatory approaches described here exclude defense and national security applications of AI, the intrinsic dual-use nature of AI systems means that the defense and national security community cannot afford to view itself in isolation. This paper has attempted to look beyond the rules and regulations that the community chooses to place on itself to identify areas in which the boundary with civil-sector regulation is most porous. In doing so, this paper has demonstrated that regulatory carve-outs for defense and national security uses must be only part of a broader solution that ensures the community’s needs and perspectives are incorporated into civil frameworks. The areas of concern identified are just a first cut of the potential second-order and unintended consequences that could limit the ability of the United States and its allies to reap the rewards that AI offers as an enhancement to military capability on and off the battlefield. Private-sector AI firms with dual-use products, industry groups, government offices with national security responsibility for AI, and legislative staff should use this paper as a roadmap to understand the impact of civil AI regulation on their equities and plan to inject their perspectives into the debate. 

About the author

Deborah Cheverton is a nonresident senior fellow in the Forward Defense program within the Scowcroft Center for Strategy and Security and a senior trade and investment adviser with the UK embassy. 

Acknowledgements

The author would like to thank Primer AI for its generous support in sponsoring this paper. It would not have been possible without help and constructive challenge from the entire staff of the Forward Defense program, especially the steadfast support of Clementine Starling-Daniels, the editorial and grammatical expertise of Mark Massa, and the incredible patience of Abigail Rudolph.

Explore the program

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

Image: US Army Soldiers, assigned to the 6th Squadron, 8th Cavalry Regiment, and the Artificial Intelligence Integration Center, conduct drone test flights and software troubleshooting during Allied Spirit 24 at the Hohenfels Training Area, Joint Multinational Readiness Center, Germany, March 6, 2024.

Allied Spirit 24 is a US Army exercise for its NATO Allies and partners at the Joint Multinational Readiness Center near Hohenfels, Germany. The exercise develops and enhances NATO and key partners’ interoperability and readiness across specified warfighting functions. (US Army photo by Micah Wilson)
