The current conflict in the Middle East is the first mass test of AI-assisted warfare. Bloomberg has interviewed a spokesperson for US Central Command to assess its scope, the extent to which machines are the ones making the decisions, and other pressing collateral questions, such as the responsibility of the business giants that provide the technology.
The war against Iran continues and, as almost all analysts warned (except its promoters in the United States and Israel), it is proving terrible: in human victims (most of them civilians), in displaced people, in incalculable economic damage, and in repercussions felt throughout the world and across all sectors, including the supply of semiconductors.
Furthermore, the consequences are unpredictable, because the campaign is proving neither as fast nor as surgical as its promoters predicted. Hearing the commander in chief (Trump) lurch from one side to the other with contradictory statements as the wind blows does not suggest a short-term solution, which must come through a ceasefire and negotiations that end the conflict and lay the foundations for defusing the powder keg in the region.
Project Maven: World-class AI for war
Getting to the matter at hand: a decade ago we learned of the ‘Maven’ project, a United States Department of Defense program which, together with the creation of the Joint Artificial Intelligence Center, aimed to research and deploy the latest artificial intelligence and machine learning technologies in military environments. The program was, and remains, highly controversial. It came to light when a large group of Google employees (6,400 signatories) rebelled against their own company in an open letter asking the Internet giant to break the Pentagon contract under which it provided data to the project.
The Google employees’ petition, supported by 300 scientists and academics (experts in robotics, artificial intelligence, international relations, security, ethics and law), cited ethical issues, political decisions and the erosion of user trust. If security is already a questionable point, the ethical issues are no less so. The scientists opposed to the Maven project warned that «Department of Defense contracts signal a dangerous alliance between the private technology industry, currently in possession of large amounts of sensitive personal data collected from people around the world, and a country’s military».
Google responded and canceled the contract, although other companies such as Microsoft and Amazon took over and the project continued. US military spokespersons confirmed that Project Maven was active in real combat situations after the Hamas attack on Israel in October 2023 and the devastation of Gaza that followed.
AI-assisted warfare leads attacks on Iran
Captain Timothy Hawkins, a spokesman for Central Command, has assured Bloomberg that the AI tools used by the US military in operations in Iran “do not make decisions about targets or replace humans”. However, they do help “make better decisions faster”.
In recent years, US military exercises have centered on a specific phrase, “a thousand decisions”: drills in which commanders practice filtering streams of data to classify 1,000 objects as friend or foe within an hour. In the first 24 hours of the war with Iran, the United States bombed 1,000 targets, a huge number of strikes in so short a time. Admiral Brad Cooper, head of Central Command, described it as almost double the magnitude of the US attack on Iraq in 2003.
Hawkins stated that “the use of AI by the military follows a rigorous process” consistent with US policy, military doctrine and legislation. Artificial intelligence helps analysts identify key aspects, generating points of interest and facilitating decision-making in operations in Iran. AI also helps extract data from systems and organize information for clarity.
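In software terms, a decision-support system of the kind Hawkins describes is essentially a ranking and flagging layer, not an actor: the model annotates and orders candidates, and a human makes every call. The following minimal sketch is entirely hypothetical (the names, threshold and data are illustrative and not drawn from Maven or any real system):

```python
# Hypothetical sketch of a human-in-the-loop decision-support step:
# the model only ranks and flags candidates; nothing is acted on here.
from dataclasses import dataclass

@dataclass
class Candidate:
    object_id: str
    model_label: str   # "friend" / "foe" suggested by the model
    confidence: float  # model confidence, 0.0 - 1.0

def triage(candidates, review_threshold=0.9):
    """Sort AI suggestions and flag low-confidence ones for closer review.
    Every item, flagged or not, still requires human validation."""
    ranked = sorted(candidates, key=lambda c: c.confidence, reverse=True)
    flagged = [c for c in ranked if c.confidence < review_threshold]
    return ranked, flagged

# Example: three detections, one below the review threshold
batch = [
    Candidate("obj-001", "foe", 0.97),
    Candidate("obj-002", "friend", 0.95),
    Candidate("obj-003", "foe", 0.72),  # ambiguous -> extra human scrutiny
]
ranked, flagged = triage(batch)
```

The point of the sketch is the division of labor the military claims: the software narrows attention and surfaces ambiguity, while the authority to act remains outside the code.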
Among the AI technologies used in the campaign against Iran is Maven Smart System, a digital mission control platform produced by Palantir Technologies Inc., according to sources familiar with US operations who spoke on condition of anonymity to avoid revealing sensitive information. This platform grew out of the aforementioned Project Maven, the Pentagon program to develop battlefield AI.
Among the main language models installed in the system is Anthropic’s Claude, according to the sources, who say it has become essential both to US operations against Iran and to accelerating the development of Maven. Interestingly, Anthropic is at the center of a dispute with the Department of Defense over limitations placed on the software and over the government’s response of declaring Anthropic “a risk to the supply chain”, a designation typically reserved for America’s adversaries.
The decision, which the company has already announced it will challenge in court, threatens to disrupt both the company and the military, leaving the future of America’s experiment in artificial intelligence warfare undetermined.
Risks and Ethics
The use of artificial intelligence, or what we call AI-assisted warfare, has as many critics as promoters. Stop Killer Robots, a coalition of 270 human rights groups, has contested the strategy in Iran, arguing that AI-based decision-support systems narrow the gap between recommending an attack and executing it to “a dangerously thin line”.
Furthermore, the effectiveness of AI in these types of conflicts is unclear. The New York Times reported last week that the United States was responsible for the deaths at the Shajarah Tayyebeh primary school in Iran, which left some 175 victims, most of them girls, along with teachers and parents. The NYT attributed the strike to “a selection error due to outdated information provided by the AI”.
Dozens of Democratic senators have demanded answers from the Trump administration, as evidence pointing to US responsibility mounts. The letter from more than 45 senators questions Defense Secretary Pete Hegseth about whether the United States was to blame for the attack and what prior analysis of the building had been done. Senators also expressed concern that the Pentagon has reduced staff in a congressionally mandated office created specifically to reduce civilian casualties, while continuing to boost investment in AI. The Pentagon says it is investigating its responsibility in what could be the worst massacre of civilians caused by the United States since 2003.
Where will all this end? Autonomous weapons?
The military is working to demystify the supposed “killer” capabilities of the new object recognition algorithms, claiming that every step involving AI ends with human validation. The Pentagon tested an AI recommendation engine that suggested attack plans and the best weapons to use during operations; the results, however, reportedly did not meet human standards.
But the US Department of Defense (like the other world powers) is determined to press ahead with the deployment of “smart” technology on the battlefield. The Pentagon has already begun integrating large language models (LLMs) into actual combat decisions. And that will be a critical leap from which there is surely no return.
Advances in artificial intelligence have been enormous in recent years and across multiple fields. Some are extraordinarily hopeful, such as those in medical research that tackle complex problems by developing advanced algorithms to search for cures for diseases. Others are scary, very scary, such as those related to autonomous weapons and soldier robots.
AI is not going to stop learning and growing, and critical media warn that today we are closer than yesterday to an outcome like the one depicted in Cameron’s science fiction film ‘Terminator’. Many authoritative voices continue to warn of the lack of controls (technical, legal, ethical and security) over everything related to AI.
Experts in technology and artificial intelligence have signed open letters and petitions to the United Nations demanding that the use of autonomous weapons be completely banned. “Should we develop non-human minds that could end up outnumbering us, outsmarting us, making us obsolete and replacing us? Should we risk losing control of our civilization?” they asked.
The first large-scale test of AI-assisted warfare, in Iran, unfortunately previews what is to come. What the world lacks is more “human intelligence”.
