On October 24, 2024, the White House issued the first-ever National Security Memorandum on Artificial Intelligence (AI), outlining a comprehensive strategy for deploying AI to meet U.S. national security needs while prioritizing safety, security, and reliability. The guidance also aims to maintain U.S. leadership in advancing international consensus and governance around AI, building on progress made over the past year at the United Nations and at the AI Safety Summits in Bletchley and Seoul. In particular, the memorandum directly fulfills the obligation to provide further direction on the use of AI in national security systems, as set out in subsection 4.8 of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, issued in October 2023.
The guidance underlines the need to balance responsible AI use with flexibility, ensuring that AI's potential is not unnecessarily constrained – especially in high-stakes national security applications. While the memorandum has broader implications for AI governance, the following cybersecurity-related measures are particularly notable and essential for advancing AI resilience in national security applications:
1. Establish a comprehensive framework to promote AI governance and risk management in national security
A central pillar of this memorandum is the introduction of the Framework to Advance AI Governance and Risk Management in National Security. This framework, which complements the Office of Management and Budget's earlier memorandum on Advancing the Responsible Acquisition of AI in Government, provides a structured, comprehensive approach to managing the layered risks associated with the use of AI. For example, the framework mandates continuous testing, monitoring, and evaluation of AI systems, ensuring vulnerability assessments and security compliance throughout the AI lifecycle. The framework also requires robust data management standards, including the secure processing, documentation, and retention of AI models, along with standardized practices for assessing data quality after deployment.
Crucially, the framework provides targeted guidance for identifying prohibited AI uses and managing ‘high-impact’ AI systems. This approach ensures that agencies employ strict and holistic risk management practices, especially when deploying AI applications that have a significant impact on U.S. national security.
2. Protect the security and integrity of the AI system against foreign interference risks and cyber threats
Recognizing that foreign adversaries are increasingly turning to AI innovations to advance their own national objectives, the memorandum directs the National Security Council and the Office of the Director of National Intelligence (ODNI) to review national intelligence priorities to identify and improve the assessment of foreign intelligence threats focused on the U.S. AI ecosystem (section 3.2(b)(i)). Additionally, ODNI, in coordination with the Department of Defense (DOD), Department of Justice, and other agencies, is responsible for identifying critical nodes in the AI supply chain that could be disrupted or compromised by foreign actors and for ensuring that proactive, coordinated measures are taken to limit such risks (section 3.2(b)(ii)). To mitigate the risk of gray-zone methods, the Committee on Foreign Investment in the United States is also charged with assessing whether foreign access to proprietary information from U.S. AI companies poses a security threat, providing a regulatory mechanism to prohibit harmful transactions (section 3.2(d)(i)).
Notably, the Artificial Intelligence Safety Institute (AISI) takes on extensive responsibilities to promote AI resilience. In particular, AISI is charged with providing specialized guidance to AI developers on managing safety, security, and reliability risks in dual-use models; establishing benchmarks for AI capability assessments; and serving as a primary channel for communicating risk mitigation recommendations (section 3.3(e)). Through these combined efforts to detect, assess, and block supply chain risks, the United States is strengthening its commitment to protecting its technological advantage and leadership.
3. Harness the potential of AI in offensive and defensive US cyber operations
To leverage the potential of AI to enhance both offensive and defensive U.S. cyber operations, the memorandum directs the Department of Energy (DOE) to launch a pilot project evaluating the performance and efficiency of federated AI and data sources, which are essential for large-scale AI training, refinement, and inference (section 3.1(e)(iii)). This project aims to refine AI capabilities that could improve cyber threat detection, response, and offensive operations against potential adversaries, in line with the findings presented in the Senate AI Policy Roadmap.
In addition, the Department of Homeland Security (DHS), the Federal Bureau of Investigation, the National Security Agency, and the Department of Defense, as appropriate, are charged with publishing unclassified guidance on known AI cybersecurity vulnerabilities and threats, and on best practices for avoiding, detecting, and mitigating risks during the training and deployment of AI models (section 3.3(h)(ii)). These guidelines are also expected to cover the integration of AI into other software systems, thereby contributing to the safe deployment of AI in operational environments. Together, these actions have the potential to strengthen the United States' ability to deploy AI in cyber operations, allowing it to maintain a decisive technological edge over adversaries actively seeking to use AI to undermine its security.
4. Secure AI in critical infrastructure
The memorandum also underscores the importance of securing AI within U.S. critical infrastructure, recognizing the risks AI can pose in sensitive sectors, including nuclear, biological, and chemical environments. Working with the National Nuclear Security Administration and other agencies, the DOE is tasked with developing infrastructure capable of systematically testing AI models to assess their potential to generate or exacerbate nuclear and radiological risks (section 3.3(f)(iii)). This initiative includes maintaining classified and unclassified testing capabilities, incorporating red-teaming exercises, and ensuring the secure transfer and evaluation of AI models.
Additionally, the memorandum requires DOE, together with DHS and other agencies, to develop a roadmap for classified assessments of AI's potential to create new chemical and biological threats or enhance existing ones, ensuring rigorous testing and proactive protection of sensitive information (section 3.3(g)(i)). Through these efforts, the memorandum aims to protect the United States' critical infrastructure from emerging AI-related vulnerabilities, ensuring resilience against both unintended risks and deliberate attacks.
5. Attract, build and retain a top-tier AI workforce
The memorandum underlines the critical importance of cultivating and maintaining a robust AI talent pipeline to retain expertise vital to national security – a long-standing struggle, especially in the field of cybersecurity, where the government has already launched targeted recruitment initiatives to close talent shortages. For example, sections 3.1(c)(i) and 4.1(c) outline provisions to attract international AI experts, including accelerating visa processes and addressing recruitment hurdles. Specifically, the DOD, State Department, and DHS are directed to review hiring policies to ensure they attract AI-related technical talent and align with national security missions. This includes offering accelerated security clearances and fellowship programs aimed at building technical expertise within the government.
These workforce initiatives also align with the findings of the Senate AI Insight Forums, which highlighted the need to provide opportunities for international students and entrepreneurs to remain in the United States after college and to leverage tax incentives and strong patent and intellectual property protections to promote innovation.
Looking ahead:
In light of the rapid pace at which foreign adversaries are seeking to leverage AI to erode U.S. technological leadership, military advantage, and international influence, the publication of this long-awaited memorandum marks an important and strategic milestone in AI governance and cybersecurity. By aligning ambitious AI innovation and integration goals with targeted cybersecurity and national security guidelines, the memorandum pursues a balanced approach that seeks to avoid the dangers of self-imposed barriers such as over-regulation and bureaucratic delay, while preserving the nation's technological edge.
The memorandum responds directly to insights from recent forums and working groups and signals an ongoing commitment to refining AI governance through collaboration and cutting-edge research. However, as AI technology and global threats evolve, regular reassessment will be essential to maintain the memorandum's balance between promoting innovation, enabling rapid integration, and protecting national security interests. Maintaining this momentum will be imperative to fully achieve the objectives set out in the memorandum.