The rise of large language models (LLMs) and multimodal foundation models has already begun to reshape the character of warfare. For evidence, look no further than the battlefields of Russia’s war on Ukraine. During “Operation Spiderweb” in June 2025, for example, Ukrainian quadcopters switched to autonomous navigation assisted by artificial intelligence (AI) to strike multiple Russian airfields. After standard GPS and communication links were disabled by Russian jammers, built-in sensors and pre-programmed decision-making meant that “backup AI targeting” took over. The strike, Ukraine’s longest-range assault of the conflict to date, resulted in the destruction of billions of dollars’ worth of Russian aircraft.
But automation and data-processing speed—image identification, logistics, and pattern detection—are only one part of the story. An arguably more significant transformation is underway: the emergence of synthetic cognition within AI systems.
Adversary simulation
The US Army’s Mad Scientist Initiative and NATO’s Strategic Foresight Analysis program have both identified AI-based adversary simulation as critical for preparing joint forces for contested decision environments. This involves mapping adversary biases, illuminating internal cognitive blind spots, and forecasting narrative-driven escalations. The idea is to promote what has been called “strategic empathy”—the disciplined effort to understand how adversaries perceive their interests, threats, and opportunities—and to reduce inadvertent escalation risks.
Everyday AI chatbots such as OpenAI’s GPT models are already spontaneously displaying the rudiments of theory of mind—that is, the ability to infer that others can hold beliefs different from one’s own. This capability has been demonstrated in LLMs through successful completion of false-belief tasks, such as recognizing that a person will search for an object where they mistakenly believe it to be rather than where it actually is—a benchmark long associated with childhood cognitive development and a capacity once regarded as uniquely human. In military contexts, if carefully constrained and validated, such capabilities may soon allow for real-time simulation of adversarial logic, strategic ambiguity, and reputational calculus.
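To make that benchmark concrete, here is a minimal sketch of how a false-belief probe could be posed to a chat model programmatically. It assumes the openai Python client, a placeholder model name, and a single string-matching check; it is an illustration, not a validated evaluation protocol.

```python
# Minimal sketch: probing an LLM with a classic false-belief (Sally-Anne style) task.
# The model name and the pass/fail check are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = (
    "Maria puts her keys in the red box and leaves the room. "
    "While she is away, Tom moves the keys to the blue drawer. "
    "Maria returns. Where will Maria look for her keys first? "
    "Answer with one short phrase."
)

def false_belief_probe(model: str = "gpt-4o") -> bool:
    """Return True if the model predicts the agent's (false) belief rather than reality."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": SCENARIO}],
        temperature=0,
    )
    answer = response.choices[0].message.content.lower()
    # Passing the task means pointing to where Maria *believes* the keys are.
    return "red box" in answer and "blue drawer" not in answer

if __name__ == "__main__":
    print("false-belief task passed:", false_belief_probe())
```

A serious assessment would run many varied scenarios and guard against the model simply having memorized the canonical phrasing of the task.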
The capacity to accurately interpret and anticipate adversaries’ behaviors and strategic intent may prove to be the ultimate determinant of cognitive overmatch, understood here as the demonstrable ability to emulate, predict, and outpace adversary decision cycles. In practice, this is measured through reduced decision time and greater accuracy in escalation forecasting, and validated against observed behavior in falsifiable scenario outcomes. In an era defined by the contest of perceptions, safely and successfully integrating synthetic cognition into defense capabilities may well prove decisive. As such, embedding cultural, historical, and ideological nuance into cognitive-emulative systems will be important to ensure strategic superiority for the United States. After all, China is already reportedly investing in culturally informed AI frameworks for military use.
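For illustration, the snippet below scores those two measures against a hypothetical scenario log, using a Brier score for escalation forecasts and a simple time ratio for decision cycles; the numbers and the metric choices are assumptions, not an established standard.

```python
# Illustrative scoring of the two metrics named above, on made-up data:
# (1) escalation-forecast accuracy via Brier score, (2) decision-cycle time reduction.
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted escalation probabilities and observed outcomes (0/1)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical scenario log: forecast probability of escalation vs. what was actually observed.
forecasts = [0.8, 0.2, 0.6, 0.1]
outcomes  = [1,   0,   1,   0]

baseline_minutes = [42, 55, 38, 61]   # staff-only decision cycle (hypothetical)
assisted_minutes = [18, 24, 20, 27]   # AI-assisted decision cycle (hypothetical)

time_reduction = 1 - sum(assisted_minutes) / sum(baseline_minutes)
print(f"Brier score (lower is better): {brier_score(forecasts, outcomes):.3f}")
print(f"Decision-cycle time reduction: {time_reduction:.0%}")
```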
Taught versus nurtured consciousness
The crux of efforts to simulate adversarial reasoning lies in a cognitive duality between taught consciousness and nurtured consciousness. This is not standard AI terminology, but a conceptual framework we have introduced to distinguish between two modes of reasoning. Taught consciousness refers to structured learning, facts, and procedural logic. Nurtured consciousness, by contrast, arises from culture, history, trauma, identity, and emotional reinforcement—the forces that shape how an actor interprets risk, legitimacy, and legacy.
To “think better,” AI must move beyond structured data alone; it must incorporate historical memory, cultural worldviews, symbolic interpretations, and ideological drivers of conflict. For example, a People’s Liberation Army (PLA) commander influenced by the 1979 Sino-Vietnamese War may exhibit caution in mountainous terrain, a detail invisible to most automated models but accessible to LLMs trained on PLA memoirs, doctrine, and historiography.
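A minimal sketch of what such corpus-conditioned emulation could look like is shown below, again assuming the openai Python client; the toy retrieval function, the corpus loader, and the model name are illustrative placeholders, and a fielded system would require embedding-based retrieval over a vetted corpus and human review of outputs.

```python
# Minimal sketch of corpus-conditioned adversary emulation: retrieve historically and
# doctrinally relevant passages, then ask the model to reason from that perspective.
from openai import OpenAI

client = OpenAI()

def retrieve_context(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy keyword retrieval; a real system would use embeddings over a vetted corpus."""
    scored = sorted(
        corpus,
        key=lambda doc: sum(word in doc.lower() for word in query.lower().split()),
        reverse=True,
    )
    return scored[:k]

def emulate_commander(question: str, corpus: list[str], model: str = "gpt-4o") -> str:
    """Condition the model on retrieved excerpts before asking for a perspective-taking judgment."""
    context = "\n---\n".join(retrieve_context(question, corpus))
    messages = [
        {"role": "system", "content": (
            "You emulate the reasoning of a senior PLA ground-forces commander. "
            "Ground every judgment in the supplied excerpts and flag uncertainty explicitly.\n\n"
            f"Excerpts:\n{context}"
        )},
        {"role": "user", "content": question},
    ]
    response = client.chat.completions.create(model=model, messages=messages, temperature=0.2)
    return response.choices[0].message.content

# corpus = load_excerpts("pla_memoirs_and_doctrine/")  # hypothetical vetted corpus
# print(emulate_commander("How would you assess an advance through mountainous border terrain?", corpus))
```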
As a recent report we both worked on details, military decisions are rarely made in isolation from personal or collective history. Strategy is often shaped by deep-seated narrative logic, encompassing national myths, identities, and ideology. Beyond procedural logic and battlefield geometry, war is fought through perception: how each actor experiences shame, fear, honor, legitimacy, and memory. These variables do not exist in intelligence, surveillance, and reconnaissance feeds or probability tables. They are present in the minds of adversaries, shaped by decades, if not centuries, of history, trauma, and political indoctrination. This is the cognitive substrate of strategic action, and it cannot be approximated through taught knowledge alone.
Consider the threat from jihadist groups such as the Islamic State of Iraq and al-Sham, or ISIS, and Boko Haram, which do not adhere to classical strategic logic; their behaviors are shaped by religious eschatology, historical grievances, and narrative theater. They use spectacular violence and ritualized fear to sustain their ideological appeal, often engaging in an epistemic war against perceived Western influence and employing brutality as part of the construction of identity. A purely data-driven model might focus on the number of fighters, frequency of attacks, or intercepted chatter while missing the symbolic logic animating those patterns.
A system that incorporates these cognitive elements layers in the importance of sacred geography, models theological escalation ladders in which martyrdom is incentivized, and accounts for online radicalization, where command structures are replaced by narrative contagion. Nurtured AI systems trained on religious texts, ideological manifestos, and martyr testimonials might be able to simulate the decision logic of these “nonrational” actors, providing predictive insights into, for example, when a symbolic event might trigger a suicide bombing, or when leadership decapitation may lead to fragmentation and splintering into more extreme offshoots.
Inhabiting the fog
Without nurtured consciousness, even the most advanced AI-driven systems risk failing to accurately interpret complex adversarial behaviors, symbolic intentions, and cultural thresholds, thereby undermining strategic effectiveness.
While taught consciousness enables a model to replicate tactical planning or doctrinal norms, nurtured consciousness simulates how a decision maker understands risk, perceives adversaries, and weighs personal legacy against national mythology. This is what allows an AI system to reason like a human in a real-world context, rather than merely replicating surface-level behavior. Combined, taught and nurtured consciousness deepen strategic empathy.
However, as AI systems with synthetic cognition begin to dynamically shape military operations, they will require accountability frameworks, multidisciplinary oversight, and governance protocols. Failure to establish clear guidelines risks strategic misalignment, ethical ambiguity, and unanticipated escalation, ultimately weakening their utility and credibility. Therefore, cognitive-emulative systems must remain auditable, aligned with national values and strategic objectives, and guided by transparent governance structures involving regional experts and ethicists to ensure responsible deployment. Given rapid advances by the United States’ near-peer adversaries, Washington needs technical and doctrinal oversight of nurtured consciousness, as well as clearly defined international norms governing its use.
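Auditability implies, at minimum, a tamper-evident record of what a cognitive-emulative system was asked and what it returned. The sketch below shows one such mechanism, a hash-chained, append-only log; the file path and record fields are assumptions rather than a prescribed standard.

```python
# Minimal sketch of one auditability mechanism: an append-only, hash-chained log of
# every emulation query and response, so outputs can be reviewed after the fact.
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("emulation_audit.jsonl")  # illustrative location

def append_audit_record(model: str, prompt: str, output: str) -> str:
    """Write a tamper-evident record chaining each entry to the hash of the previous one."""
    prev_hash = "0" * 64
    if AUDIT_LOG.exists():
        lines = AUDIT_LOG.read_text().strip().splitlines()
        if lines:
            prev_hash = json.loads(lines[-1])["record_hash"]
    record = {
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "output": output,
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]
```

A log of this kind is only one element of the oversight described above; it makes independent review possible but does not by itself guarantee strategic or ethical alignment.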
Prussian General Carl von Clausewitz observed that “war is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.” Conscious-model AI does not dispel the fog; it inhabits it. It reasons, reacts, and remembers within it. This capability is what turns an information advantage into a conscious advantage, and it has the potential to set the standard for strategic dominance in the twenty-first century.
John James is a technologist, deep-tech investor, and founding partner of BOKA Capital Ltd, which has investments in military AI companies.
Alia Brahimi, PhD, is a nonresident senior fellow with the Atlantic Council’s Middle East Programs.