The Future of AI in Warfare: Iran and Beyond

Authors: Christopher Jackson and Aaron Spitler*

Following the Gulf War, a common belief was that cutting-edge technology enabled the United States and its allies to defeat Iraq’s armed forces. Specifically, advanced microelectronics were thought to have been central to the international coalition’s triumph. Powerful computer chips transformed military operations, from streamlining strategic planning to improving weapons targeting. Iraq, by contrast, lacked the technical resources to punch back. This narrative of the U.S.-led alliance’s victory crystallized the notion that technological prowess determines battlefield success.

Artificial intelligence (AI) has revived this conversation. The current U.S.-Israeli intervention in Iran has been described as “the first AI war” given how extensively the technology has been used in the campaign. Strategists have noted that AI has made the operation more efficient and effective. However, this reliance on AI raises ethical concerns about the lack of oversight and accountability. Policymakers must act before the technology defines how wars are fought in the future.

The Operational Reality of AI

AI has offered numerous benefits to decision-makers in the U.S. and Israel. Above all, it has been exceptionally useful in simplifying the “kill chain,” the process by which attacks are planned and coordinated. Its deployment in the aerial assault that opened the war, in which 1,000 targets were struck in a single day, has been touted as a compelling use case. It is now clear that AI confers a battlefield advantage when decisions must be made without delay.

However, embracing this technology comes at a cost. AI systems can generate targeting recommendations at a dizzying pace via predictive analytics. Expecting a human operator to properly vet those recommendations in the heat of battle may be an impossible ask. To avoid such scenarios, many have called for a human to remain in the loop whenever AI is used in combat. What meaningful human control looks like in kinetic warfare, however, remains ill-defined despite its importance to current U.S. doctrine.

Error and Accountability

The risks associated with AI in these situations are no longer theoretical. In the aftermath of the Minab school bombing, in which 168 people were killed, suspicions arose that an AI system had identified the site as a target for the U.S. military. While investigations into the strike, and AI’s role in it, are ongoing, the incident has nevertheless prompted discussion of who should be held accountable when machines make mistakes on the battlefield.

Decision-makers convinced of AI’s value to military interventions may also influence how blame for these tragedies is assigned. The operator behind an AI-powered system may find themselves in a “moral crumple zone,” absorbing responsibility for the technology’s failures after an accident. In the Minab case, the actual cause of the strike may ultimately matter less than how it is framed. The U.S. may attribute the attack to human error and consider the matter resolved, thereby foreclosing any deeper accountability.

Government-Industry Nexus

The war has revealed the increasingly coercive and opaque relationship between governments and technology firms. The clearest illustration of this dynamic is the U.S. government’s recent confrontation with Anthropic in the lead-up to the strikes on Iran. Anthropic objected to the use of its models for autonomous targeting and surveillance, citing internal safety policies. That resistance triggered significant pushback from the Pentagon, which sought rapid integration of advanced AI capabilities into combat operations. The pressure went beyond negotiation into coercion: firms that failed to meet the Pentagon’s demands risked exclusion from defense contracts.

This episode reveals a deeper erosion of ethical safeguards within the AI industry when national security imperatives are invoked. In recent years, leading AI firms have attempted to articulate principles governing how their technology may be used, from powering autonomous weapons to enabling public surveillance. Yet the Iran conflict has demonstrated how fragile these commitments become when governments exert pressure. The threat of blacklisting, or the loss of government access, creates a powerful incentive to dilute or reinterpret ethical guidelines for AI.

The Pentagon has also begun fast-tracking smaller vendors into defense contracts, often ones with fewer established governance structures or ethical review processes. While this accelerates AI innovation and deployment, it also allows the military to avoid dependence on a single provider that might impose its own technological constraints. Furthermore, this approach lets the government sidestep the ethics of leveraging AI in combat: if one firm objects to how its technology might be used, officials can simply select another vendor more aligned with their plans.

The collaboration between government and industry in AI warfare has also become increasingly opaque. The details of many joint projects are hidden in classified contracts, limiting public visibility and legislative oversight. Congress has struggled to keep pace with the evolution of AI, resulting in a widening gap between accountability mechanisms and real-world deployment.

Broader Policy Implications

Existing international humanitarian law and national policies were not designed for algorithmic systems that compress decision-making into milliseconds. As legal scholars note, there is neither a universally accepted definition of autonomous weapons systems nor a binding regime governing their deployment in the field. The result is significant ambiguity around their lawful use and a widening gap between deployment and regulation. There is an urgent need to restore transparency and oversight to the relationship between government and the AI industry.

Policy cannot rest on voluntary commitments. First, governments should mandate auditable AI systems and embed human accountability across the full lifecycle of AI deployment; meaningful human control requires judgment checkpoints at every stage, from design to execution. Second, states should work toward binding international agreements that regulate autonomous weapons. Third, AI firms should be required to disclose defense-related applications of their technologies, addressing concerns about the opacity of their agreements with entities like the Pentagon. Absent these reforms, AI will remain embedded in warfare without oversight, effectively operating in a black box.

Balancing Safety and Security

As countries navigate another technological revolution, they must grapple with the benefits and drawbacks that come with innovation. While AI can be an asset in battle, it can be a liability if used inappropriately. Moreover, mechanisms for holding anyone accountable when the technology fails are nonexistent, and the temptation to deflect attention from its defects can be strong.

Governments must move beyond rhetorical commitments toward enforceable guardrails for AI development and deployment. This means prioritizing human accountability within decision-making processes, mandating independent audits of AI systems, and ensuring transparency in public-private partnerships. Without these measures, militaries will race to adopt AI while minimizing the dangers posed by its misuse. The Iran War has made clear that this powerful technology will define how conflicts are waged for years to come. It now falls to governments, working with their partners in industry, to muster the discipline required to deploy AI ethically and responsibly.

*Aaron Spitler is a researcher whose interests lie at the intersection of human rights, democratic governance, and digital technologies. He has worked with numerous organizations in this space, from the International Telecommunication Union (ITU) to the International Republican Institute (IRI). He is passionate about ensuring technology can be a force for good. You can reach him on LinkedIn.

Christopher Jackson is an international strategist and author specializing in cybersecurity, artificial intelligence, and digital democracy. He has advised the United Nations, U.S. agencies, and global governments on technology governance and cyber strategy, with over a decade of experience across the U.S., Europe, Middle East, and Asia.