The development of artificial intelligence (AI) is at a crucial juncture. AI capabilities continue to improve rapidly, with the best-performing AI systems poised to eclipse the performance of human experts in critical domains such as math, coding, and science. Amid this rapidly changing technological landscape, AI has become a source of intense geopolitical competition, especially between the United States and China. Countries around the world are investing in AI to improve their military, intelligence, and other national security capabilities. AI policy has become a central issue for U.S. national security and industrial policy. How U.S. policymakers address these challenges, and how other countries and private companies respond, could have profound implications for the future of AI development. The CNAS AI Security and Stability project aims to inform government decision-making on the most critical AI policy issues that will shape the future of AI development. Key lines of effort include:

  • Understand and mitigate the risks of advanced AI capabilities relevant to national security: These risks may include dangerous capabilities in areas such as cyber operations, biological weapons, nuclear stability, and the financial sector; risks of future misalignment or loss of control of agentic AI systems; or systemic risks from AI competition between the United States and China.
  • Understand and shape compute governance opportunities: Compute is emerging as a key lever for AI governance due to technological and geopolitical trends. This line of effort will provide concrete policy recommendations to maintain the U.S. lead and influence in compute hardware and, by extension, the most capable frontier AI systems.
  • Improve U.S. military processes to ensure safe, secure, and trusted AI: This line of effort aims to identify concrete ways in which military AI systems can fail and how the U.S. military can create policies that ensure AI capabilities are safe and reliable.
  • Understand Chinese decision-making on AI and stability: This line of effort focuses on ways in which advances in AI are contributing to risks in the U.S.-China security relationship and how U.S. and allied policymakers can mitigate these risks.
  • Inform the use of U.S. economic security tools to shape AI development and proliferation: This line of effort analyzes the policy and commercial drivers of AI diffusion globally, develops a U.S. economic security policy framework for responsible AI proliferation, and assesses the long-term risks to U.S. AI leadership from unintended consequences of broader U.S. economic policy decisions.

This cross-program effort includes the CNAS Technology and National Security; Defense; Indo-Pacific Security; and Energy, Economics, and Security programs. CNAS experts will share their findings in public reports and policy briefs with recommendations for policymakers. This project is made possible thanks to the generous support of Coefficient Giving.


CNAS experts

  • Paul Scharre

    Executive Vice President

  • Stacie Pettyjohn

    Senior Fellow and Director, Defense Program

  • Andrea Kendall-Taylor

    Senior Fellow and Director, Transatlantic Security Program

  • Emily Kilcrease

    Senior Fellow and Director, Energy, Economics, and Security Program

  • Jacob Stokes

    Senior Fellow and Deputy Director, Indo-Pacific Security Program

  • Janet Egan

    Senior Fellow and Deputy Director, Technology and National Security Program

  • Josh Wallin

    Fellow, Defense Program

  • Michael Depp

    Research Associate, AI Security and Stability Project

  • Caleb Withers

    Research Associate, Technology and National Security Program

  • Liam Epstein

    Research Assistant, AI Security and Stability Project

  • Tim Fist

    Senior Adjunct Fellow, Technology and National Security Program