While machines designed to fight like soldiers may sound like fiction, many companies are striving to make them a reality. A market has emerged for autonomous weapons systems (AWS): technologies, enhanced by artificial intelligence (AI), that identify and engage targets on the battlefield with limited human input. Governments have flocked to vendors whose products promise to transform how war is waged, captivated by the pitch that AI-powered AWS can give them an edge over their rivals. They may also be drawn to the notion that these technologies reduce the human cost of war, since acquiring advanced AWS would change the calculus for states deciding whether to enter a conflict. Given the possibilities, officials may see these systems as too good to pass up.
The emergence of AI-driven AWS has implications that will rankle policymakers for years to come. Debates around the ethics of using this technology, particularly who bears responsibility for life-or-death decisions, continue to rage in capitals across the globe. All the while, governments continue to cut deals with companies in this space, potentially motivated by the fear of losing this new arms race to adversaries abroad. Considering the stakes, as well as the speed at which AI-enhanced AWS have proliferated, countries must act to mitigate the harms these technologies pose to combatants and civilians alike. Without consensus on how these weapons should be deployed, or clarity on the role humans are to play, the consequences of their widespread and unrestrained use will be severe.
Raising Moral Questions
Although many believe AI-powered AWS can fire at will, the truth is that a person still pulls the trigger. However, determining who is at fault when these weapons go awry has left militaries vexed. Outlets like the MIT Technology Review have highlighted that the individual making the final call is most likely to take the fall, regardless of whether they were following their superior's orders. In essence, imbalances in power favor those higher up the chain of command, a dynamic that shields senior officers from accountability. Those who are less senior, meanwhile, shoulder the blame, even if their decisions were informed by erroneous information provided by the system. Recourse for lower-ranking individuals caught in this situation is unclear. With that in mind, critics of AI-enhanced AWS maintain that responsibility for incidents should be diffused among all involved, not simply pinned on whoever occupies the lowest rung of the ladder.
Granting these technologies latitude on the battlefield also comes with risks. Experts alarmed by AI-enabled AWS' capabilities argue that following this course of action would be a mistake because these technologies cannot properly value human life. Tech Policy Press noted that these weapons effectively reduce individuals to targets on a grid, a feature in line with other AI products sold to militaries interested in streamlining decision-making processes. In practice, commanders may find that these technologies fail to grasp the nuances of combat, as they will methodically pursue those categorized as a threat unless instructed otherwise. Humans possess the capacity to recognize when their actions border on the inhumane; AI-driven AWS were created to engage targets with speed and efficiency. Ceding more control to these systems may, therefore, signal that a military is willing to abdicate a degree of responsibility for its conduct during conflict.
Navigating Geopolitical Currents
Despite these ethical dilemmas, demand for AI-enhanced AWS has risen sharply. Governments throughout the international community have continued to forge ties with firms specializing in these products, convinced that they will be essential to the future of warfare. Wired spotlighted how this trend is playing out in the United States, drawing attention to how military planners feel that AI-powered AWS could make their operations more successful. Seeking to capitalize on the potential of these technologies, decision-makers in Washington have leapt into action, investing both time and resources into exploring AI's military applications. Relatedly, these officials are also wary of competitors in this space, with many pushing measures that target countries like China that are focused on how AI could benefit their armed forces. In a volatile geopolitical environment, the United States' strategy is predictable. Yet it may presage an arms race that could upend the current global order.
At this juncture, major powers are already trying to integrate AI-powered AWS into their existing arsenals. Interested in how these technologies could be an asset to their soldiers on the frontlines, and unnerved at the idea of falling behind their adversaries, governments have spent considerable capital on securing AI-enhanced AWS. Undark unpacked how states, hoping to seize the moment, are rushing to adopt these weapons, even as norms on their appropriate use have yet to be defined. Many of these countries have parroted the line that they intend for a human to be involved at all times when AI-powered AWS see action. However, this rhetoric has not always guided their actions, suggesting that governments' policies on these products are shaped more by realpolitik than by restraint. Without meaningful commitments from state actors to establish ethical standards, the likelihood of AI-enhanced AWS being used responsibly in combat grows increasingly slim.
Finding Middle Ground
For policymakers, AI-powered AWS are likely to remain front and center in discussions about the future of warfare. The lack of clarity regarding who precisely should be held accountable when these technologies cause incidents gives many decision-makers pause, as does the prospect of allowing systems to operate without guardrails in place. At the same time, officials believe AI-driven AWS may be invaluable as threats to national security evolve, prompting many of them to forge partnerships with vendors whose products may enable their governments to stay ahead of their rivals. The tension between pressing ethical concerns and practical geopolitical realities underlines the urgent need for multilateral cooperation. Without it, those caught in the crossfire of these uncontrolled weapons will suffer the consequences.
Some around the world are heeding this call. For instance, civil society organizations are pushing for an international treaty that would set rules on how militaries take advantage of AI-enabled AWS. Such an agreement could include provisions mandating that humans have meaningful control over these weapons, especially when they are equipped to use force. These measures would be a firm rebuke to the notion that AI-enhanced AWS need more leeway. Furthermore, the treaty could prohibit the development of AI-powered AWS that operate with greater independence, which may cool an arms race that has only begun to accelerate. Governments must work together to craft a deal incorporating these points. Doing so would ensure that the harms caused by AI-driven AWS are not minimized amid the hype surrounding these technologies.