ABSTRACT: Autonomous weapon systems (AWS) powered by AI are rapidly advancing, offering strategic military advantages but also raising profound moral and regulatory challenges. This paper explores the key ethical debates surrounding AWS through the lens of Just War Theory’s principles of distinction between combatants and non-combatants, and proportionality in harming civilians. It examines arguments from proponents and critics regarding the scope of harm to civilians, the accountability problem, the lack of humanity, and the transparency and control challenge. It then considers pathways for international regulation of AWS, drawing lessons from the nuclear arms control regime: the potential for global norms to shape domestic politics, the role of expert communities, and the persistence of accidents in complex systems (even under “meaningful human control”). AWS proponents answer most concerns with the expectation of a greater ability to avoid harming innocents. AWS could offer moral progress (“wars without casualties”) and clarity (programmed principles), but could also lead to catastrophes such as unstoppable wars if a human ability to override and cancel their actions is not assured.
Introduction: Autonomous Weapon Systems
In recent years, artificial intelligence (AI) technologies have achieved a breakthrough and are becoming a general-purpose technology, integrated as an essential component across virtually every field. Experts predict that AI will replace and outperform humans in tasks central to military forces, such as pattern recognition, prediction, optimization, and decision-making (Maas 2019, 286). In this context, the autonomous weapon systems (AWS) currently under development are capable of operating independently of human supervision in target classification and engagement (Etzioni 2017, 72).
There is no consensus regarding the definition of AWS. However, the relationship between humans and weapon systems can be broadly classified into three types: human “in the loop”, human “on the loop”, and human “out of the loop”. With a human in the loop, the human makes the attack decision: for example, a soldier determines whether to launch an Iron Dome interceptor after the system has detected a rocket and predicted its trajectory. With a human on the loop, the human does not make the decision but can intervene in the system’s decisions: for example, the U.S. Navy has long used systems that autonomously identify and engage threats to ships without human command, but with the ability to abort the decision. With a human out of the loop, the human is not involved at all, and the system decides and operates completely independently, without any direct supervision (Etzioni 2017, 78-79). In practice, the separation between full autonomy (“out of the loop”) and partial autonomy (“on the loop”) is increasingly blurred as AI systems become more complex and less comprehensible to humans, narrowing the human’s sphere of control (Shany 2024, 6-7).
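This three-way taxonomy can be stated more schematically. The following minimal Python sketch is purely illustrative (the names and structure are my own, not drawn from the cited sources); it captures only who makes the engagement decision and whether a human retains a veto.

```python
from enum import Enum, auto

class HumanRole(Enum):
    """Schematic taxonomy of human-weapon system relationships (illustrative only)."""
    IN_THE_LOOP = auto()      # a human makes each attack decision (e.g., approving an Iron Dome launch)
    ON_THE_LOOP = auto()      # the system decides, but a human can abort before execution
    OUT_OF_THE_LOOP = auto()  # the system decides and acts with no direct human supervision

def human_can_abort(role: HumanRole) -> bool:
    """A human retains a veto only in the first two configurations."""
    return role in (HumanRole.IN_THE_LOOP, HumanRole.ON_THE_LOOP)
```

As the paragraph above notes, in practice the boundary between the second and third configurations is blurring as systems become less comprehensible to their supervisors.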
The armies developing AWS gain tremendous strategic advantages from them. These advantages range from reducing the risk to combatants’ lives by removing them from combat and increasing the efficiency of each fighter; to accessing areas and locations unsuitable for humans; and ultimately achieving a higher level of warfare thanks to the superiority of AWS in perception, planning, learning, resilience, agility, power, and more (Etzioni 2017, 72-74). Additionally, despite their high R&D costs at present, they enable tremendous economic savings by maintaining a modern army with fewer human operators (Antebi 2013, 58).
Despite the advantages of AWS, in recent years there has been an increase in campaigns and initiatives calling for restrictions on their development and use. For example, in 2013, a UN report recommended banning the testing, production, transfer, and deployment of AWS until an international legal framework was agreed upon. Also in 2013, a group of scientists and researchers from 37 countries called for a ban on AWS. In 2015, over 3,000 AI and robotics researchers, as well as prominent thinkers like Elon Musk, Steve Wozniak, Stephen Hawking, and Noam Chomsky, warned that AWS development would constitute “the third revolution in warfare” after gunpowder and nuclear weapons (Ibid, 74-75).
AWS change the nature of war in two ways: 1) from direct violent confrontation between combatants (even when mediated by human-operated machines) to a confrontation between humans and machines; 2) from a complete human monopoly on decision-making in war to cooperation between humans and computers (Statman 2015, 1-2). This fundamental change raises the question of whether AWS can conform to the principles of Just War Theory (JWT), and if so – how they should be regulated. This paper will discuss moral arguments related to JWT by proponents and opponents of AWS, as well as considerations and possibilities for adapting international regulation to their challenges.
Moral Discussion
The moral discussion will be conducted in light of the two central principles of JWT regarding the conduct of war (jus in bello): 1. Distinction – between combatants, who are permissible targets, and non-combatants, who are not. 2. Proportionality – in unavoidable harm to non-combatants, relative to the military advantage gained from an action (Statman 2015, 2). This chapter will review four issues in the moral debate surrounding AWS and JWT: the scope of harm to non-combatants, the accountability problem, the lack of humanity, and the transparency and control challenge.
Scope of Harm to Non-Combatants
Both principles of jus in bello are related to minimizing harm to non-combatants. From a consequentialist perspective, they raise the question of whether the integration of AWS will increase or decrease the extent of harm to non-combatants compared to humans in the loop.
Opponents of AWS argue that in asymmetric warfare situations, where it is difficult to identify combatants who are not wearing uniforms or are outside defined locations, there are no clear criteria that can be programmed into AWS to ensure they only attack combatants. They claim that the ability to identify combatants in such situations is based on implicit cues, and the calculation of proportionality is a subjective assessment that is difficult for robots (Statman 2015, 5-6).
In contrast, proponents argue that whether AWS will cause more or fewer violations depends on how the technology develops. They anticipate, however, that in the long run AWS will be able to implement the principles of distinction and proportionality better than humans, which may make their deployment a moral obligation (Shany 2024, 9-12). An analogy supporting this claim can be found in autonomous driving: a complex task requiring judgment and assessment that are very difficult to program, yet one that is becoming safer than human driving (Statman 2015, 5-6). In addition, AWS enable the implementation of more advanced standards for protecting civilian lives, such as “second strike” (i.e., not being the first to fire out of a survival instinct), re-checking of targets, and different proportionality settings, including “zero civilian casualties”. Finally, while robots are not 100% error-free, neither are humans, who commit both mistakes and intentional violations due to the fog of war and emotions like fear, revenge, and prejudice (Etzioni 2017, 74).
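To make these “more advanced standards” concrete, here is a minimal configuration sketch in Python. It is a hypothetical illustration of what a second-strike rule, re-checking, and a “zero civilian casualties” proportionality setting might look like as programmable constraints; the field names and logic are my own assumptions, not a description of any existing system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EngagementStandards:
    """Hypothetical rules-of-engagement configuration (illustrative sketch only)."""
    second_strike_only: bool = True            # never fire first out of a "survival instinct"
    require_recheck: bool = True               # re-verify the target classification before engaging
    max_expected_civilian_casualties: int = 0  # the "zero civilian casualties" setting

def may_engage(standards: EngagementStandards, fired_upon: bool,
               recheck_confirmed: bool, expected_civilian_casualties: int) -> bool:
    """Permit engagement only if every configured standard is satisfied."""
    if standards.second_strike_only and not fired_upon:
        return False
    if standards.require_recheck and not recheck_confirmed:
        return False
    return expected_civilian_casualties <= standards.max_expected_civilian_casualties
```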
Accountability
In 2007, Robert Sparrow first formulated the “accountability trilemma” argument, according to which death caused by AWS cannot be attributed to any human (Sparrow 2007, 66-67). Sparrow rejects three possibilities for who could be held accountable. First, the creators of the AWS cannot fairly be blamed, since they were instructed to build a system that learns from experience and applies it to future behavior in an unscripted manner. Second, the military commander who deployed the AWS cannot be blamed when the system disobeys his orders. Third, responsibility cannot be ascribed to the AWS themselves. Sparrow argues that following the laws of war requires a subject who is accountable for killings, and therefore it is immoral to use AWS (Simpson & Müller 2016, 304-305).
Critics of the accountability argument attack Sparrow’s position on two fronts: 1. denying the premise that accountability is always necessary; 2. denying the premise that the use of AWS necessarily leads to death without accountability (Statman 2015, 6).
Simpson and Müller develop a detailed response to the accountability problem in their paper. They argue that every engineered system has a “tolerance level” – defined conditions under which it is expected to function properly. This tolerance level assists in assigning accountability when the system fails. In the design of a bridge, for example, it is specified that the bridge will withstand certain temperature, water, and weight conditions, and it is built from materials that ensure this. However, the tolerance for failure is never zero – there will always be conditions under which the system is expected to fail, since eliminating them entirely would demand a prohibitive, almost paralyzing, amount of resources (Simpson & Müller 2016, 307-310).
How is accountability distributed? The regulator is responsible for defining the tolerance for failure and for monitoring its implementation. The engineer is responsible for building the system accordingly. The user is responsible for not violating its conditions of use (assuming the regulator has informed him of them). When the system fails within the tolerance conditions, at least one of the three is responsible for the violation (e.g., a user who drove an overweight truck onto the bridge, an engineer who did not use the required materials, or a regulator who failed to inspect). However, when the system fails outside those conditions (e.g., a once-in-300-years rainfall), there is no responsible party (Ibid). The authors apply this tolerance-level concept to the analysis of AWS, and state that AWS should operate within a defined tolerance for failure for each mission (Ibid, 310-312).
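The distribution of accountability under the tolerance-level framework can be sketched as a simple decision procedure. The following Python fragment is my simplified reading of Simpson and Müller’s bridge example, not their own formulation; the parameter names are illustrative assumptions.

```python
def accountable_parties(extraordinary_event: bool,
                        user_violated_use_conditions: bool,
                        engineer_met_specification: bool,
                        regulator_inspected: bool) -> list[str]:
    """Simplified sketch of tolerance-level accountability (illustrative assumptions).

    A failure outside the defined tolerance (e.g., a once-in-300-years rainfall)
    leaves no one accountable; a failure within it is traceable to at least one
    of the three roles.
    """
    if extraordinary_event:
        return []  # failure outside the defined tolerance: no responsible party
    parties = []
    if user_violated_use_conditions:
        parties.append("user")       # e.g., an overweight truck driven onto the bridge
    if not engineer_met_specification:
        parties.append("engineer")   # e.g., the required materials were not used
    if not regulator_inspected:
        parties.append("regulator")  # e.g., the implementation was never inspected
    return parties
```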
Finally, the researchers analyze the moral requirements for determining the tolerance level for failure. When State A deploys AWS in place of human combatants, it reduces the risk of death for its own combatants and changes the risk of death for non-combatants in State B. This change in risk occurs without the consent of State B’s non-combatants, who cannot benefit from the AWS deployment or receive compensation for potential loss of life. If the risk to the lives of non-combatants in State B increases as a result of the deployment of AWS, compared to the existing risk from human combatants, the deployment is immoral (even if, in a utilitarian analysis, the reduction in risk for State A’s combatants is greater). Otherwise, the action is morally permissible, and if the risk decreases significantly, it may even be obligatory. The authors argue that although an equal distribution of risk among non-combatants is preferable, in practice AWS will inherently impose asymmetric risk (though they have the potential to reduce the asymmetry created by human combatants). Finally, there is a moral obligation to deploy the technology that inflicts the lowest possible risk – even when choosing between systems that are both better than humans (Ibid, 312-317). In sum, the key question is not whether there is a subject accountable for every death, but whether AWS can meet fair demands for the distribution of risk to non-combatants, using the “tolerance for failure” concept (Ibid, 303).
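The resulting moral rule is essentially a comparison of risks. The sketch below restates it in Python as I read it; the numeric threshold for a “significant” reduction is my own illustrative assumption, since the authors do not quantify it.

```python
def aws_deployment_status(risk_to_noncombatants_with_aws: float,
                          risk_to_noncombatants_with_humans: float,
                          significant_reduction_factor: float = 0.5) -> str:
    """Risk-comparison rule, sketched with an assumed threshold for 'significant'."""
    if risk_to_noncombatants_with_aws > risk_to_noncombatants_with_humans:
        return "impermissible"        # AWS raise the risk to non-combatants
    if risk_to_noncombatants_with_aws <= (significant_reduction_factor
                                          * risk_to_noncombatants_with_humans):
        return "arguably obligatory"  # AWS reduce the risk significantly
    return "permissible"              # AWS do not increase the risk
```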
Humanity
Some opponents argue that the lack of humanity in AWS renders them immoral. According to this argument, AWS operate in a cold and calculated manner and lack the human combatant’s capacity for compassion, regret, and reluctance to kill. From the perspective of AWS supporters, this is an advantage, given that negative emotions such as anger, revenge, or nationalism play a greater role in fueling wars than the positive emotions do (Statman 2015, 7).
Another claim is that killing by an autonomous machine deprives the deceased of their humanity, since no other human recognizes them as human even as the decision to kill them is made. However, even in remote killings by humans, such as drone strikes, the operator is not necessarily aware of or able to see the people about to be killed; if this argument delegitimizes such conventional warfare as well, it collapses into a pacifist position (Ibid, 4). Nevertheless, support for a version of this claim can be found in European data-protection law (the GDPR), which establishes that every person has the right not to be subject to a decision of an autonomous machine in “substantive matters” (Shany 2024, 21).
Lastly, opponents argue that the psychological distance between the humans operating AWS and their victims will lead to more killing. In practice, however, this is an objection to remotely operated systems rather than to fully autonomous ones (Statman 2015, 7).
Transparency and Control
AWS are increasingly developing into opaque “black boxes”, which makes it difficult to monitor them transparently, explain their decisions, and ensure that they comply with the law in specific cases. This contributes to the blurring between humans on and out of the loop, and in the long run there may be a trade-off between the quality of AWS performance and the level of human supervision and control over them (Shany 2024, 16-19).
However, AWS also offer transparency advantages, thanks to their ability to record and store data about their actions and the instructions they received. This is especially true in comparison to human warfare, where it is almost impossible to objectively judge violations committed by combatants on the battlefield. Even in mixed teams of AWS and human combatants, it is possible to rely more on the former to report moral violations by their comrades (Etzioni 2017, 74).
Considerations for Regulation
In recent years, there has been ongoing discussion of an international agreement on the development, deployment, and use of AWS, conducted within the framework of meetings of the States Parties to the Convention on Certain Conventional Weapons (CCW), which operates alongside the Geneva Conventions (Shany 2024, 5-6). To date, however, this process has not yielded an agreement.
Any regulation will need to be agreed upon internationally, since no country will agree to forgo the use of AWS unless its enemies do so as well. However, it is reasonable to expect that countries will at least be able to agree on a ban on AWS whose actions humans cannot prevent or cancel. Such systems could create irreversible damage, such as endless wars that persist even after the human sides have agreed to a ceasefire (although they do offer a deterrent advantage through the automatic enforcement of red lines). Evidence of the irrationality of such systems can be found in the fact that the U.S. Pentagon has already banned them on its own, without an international agreement (Etzioni 2017, 76-79). However, it is unclear how to promote effective norms and regulations regarding other components and aspects of AWS that are not clearly irrational.
In his paper, Matthijs Maas attempted to confront this challenge of regulating AWS by drawing comparative lessons from the arms control regime around nuclear weapons. He identified many similarities between nuclear weapons and AWS, including: the need for scientific and technical expertise; an initial lack of informed public debate owing to the technology’s concentration among major powers, its secrecy, and its complexity; the potential for strategic, institutional, and rapid societal change; arms race dynamics arising from asymmetric strategic advantages; dual-use technology that complicates bans and monitoring; and exposure to accidents with severe policy implications (Maas 2019, 288).
Maas extracts three key lessons for the regulation of AWS from the history of nuclear arms control:
- Global norms influence domestic politics and its considerations in weapon development (top-down effect). Fifty-six countries have considered developing nuclear weapons since 1960, but despite fears of rapid proliferation, only ten actually reached the finish line. Security considerations do not explain many of these cases, and the academic literature therefore attributes some weight to domestic politics, which is often influenced by global norms. For example, norms such as the NPT and the nuclear taboo against the first use of nuclear weapons influenced local political coalitions – comprising intellectual elites and the military and civilian bureaucracies – to choose non-proliferation and non-use. These norms also created reputational costs that strengthened such coalitions and led countries to choose trade, technological aid, and investment over a “wasteful” nuclear program. Applying these norms to AWS could take the form of an international ban on certain categories combined with economic incentives, and strengthening the negative public image of “killer robots” (Ibid, 288-303).
- Organized communities of experts can lead to effective control (bottom-up effect). During the Cold War, the American expert community succeeded in pushing policymakers to pursue arms control agreements. It promoted a theoretical analysis of the destabilizing effects that a lack of second-strike capability, or unilateral disarmament, could create. Its success was based on internal consensus, access to policymakers, and collaboration with similar communities in other countries. It is unclear whether an equally effective community exists around AI, and its early formation is important for shaping regulation (Ibid, 288-303).
- Accidents in complex systems make the goal of “meaningful human control” a problematic one. Numerous accidents throughout history nearly led to nuclear detonation and nuclear war, resulting from both human and computer errors. In AI systems, the features that contribute to accidents are significantly amplified: “interactive complexity” (multiple connections that complicate understanding and prediction, error detection, and comprehensive scenario examination), “competing goals” (e.g., safety versus response speed and secrecy), and a “competitive context” (the incentive to move fast and get ahead of the enemy, leading to communication failures, miscalculation, and potential cyber-attacks on the opposing system). This means that even “meaningful human control” does not guarantee the prevention of accidents (Ibid, 288-303).
Maas’ work teaches that formulating regulation for AWS should not only address moral and legal arguments but also consider more prominent arguments in domestic politics, such as strategic stability and safety. It should also continue adapting international relations theories related to decision-making, institutional norms, and organizational safety to the present era in order to prepare for the “next chapter in the history” of arms control (Ibid, 303-304).
The regulation of AWS may also require a higher degree of moral clarity. If AWS are programmed to judge military necessity, proportionality, and distinction, these currently ambiguous principles will need to be precisely defined, including a default decision for situations the definitions do not cover: attack or hold fire? (Statman 2015, 5-6).
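To illustrate the point about defaults, the fragment below sketches, in Python, a decision rule whose explicit default is to hold fire whenever a situation falls outside the programmed definitions. It is a hypothetical illustration of the design question raised by Statman, not a proposal for an actual targeting system.

```python
from typing import Optional

def engagement_decision(target_is_combatant: Optional[bool],
                        attack_is_proportionate: Optional[bool]) -> str:
    """Hypothetical sketch: None means the programmed definitions do not cover the situation."""
    if target_is_combatant is None or attack_is_proportionate is None:
        return "hold fire"  # the explicit default for situations outside the defined criteria
    if target_is_combatant and attack_is_proportionate:
        return "attack permitted"
    return "hold fire"
```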
Summary and Personal Discussion
Autonomous weapon systems (AWS) represent a technological breakthrough based on impressive progress in the field of AI in recent years. They are defined as systems capable of independently identifying targets, planning, and carrying out attacks without direct human supervision. They offer enormous strategic advantages such as distancing combatants from life-threatening situations, enhanced combat capabilities due to faster perception, learning, and planning abilities, and significant economic savings in personnel costs. However, their fundamental impact on the nature of warfare necessitates moral deliberation and appropriate regulation, which has yet to be formulated at the time of writing.
The moral discussion in this paper is built around the principles of distinction and proportionality in Just War Theory (JWT). It surveys the arguments of proponents and opponents of AWS regarding four issues: the extent of harm to non-combatants, the accountability problem, the lack of humanity, and the transparency and control challenge.
Opponents fear that AWS will increase harm to non-combatants because of their limitations, compared to humans, in identifying combatants and calculating proportionality. Proponents, on the other hand, anticipate that in the long run AWS will become better at these tasks and will enable the adoption of higher standards. Regarding the accountability problem, AWS allegedly preclude the accountability necessary for implementing the laws of war. Proponents of AWS respond that responsibility can be assigned in most cases and that exceptional cases without accountability exist even in ordinary civilian contexts; the morally relevant question, they argue, is how well AWS distribute risk to non-combatants compared to humans. While the lack of humanity in AWS prevents them from acting on positive human emotions like compassion and remorse, it also neutralizes negative emotions like anger, fear, and vengeance, which tend to fuel escalation in war. And although AWS may allow greater transparency on the battlefield thanks to data collection and logging, opponents point out that their increasing complexity could create a conflict between performance quality and the possibility of human oversight.
As for the slowly emerging regulation, the lowest common denominator is likely to be a ban on irrational AWS – those that cannot be canceled or stopped by humans. From the comparison to nuclear weapons, which resemble AWS in several key features, the history of arms control suggests that global norms shaping domestic politics (top-down) and organized expert communities promoting ideas (bottom-up) can influence the considerations guiding AWS development. In addition, human control over AWS is no guarantee against catastrophic accidents.
In my view, most of the arguments of AWS proponents answer the concerns of their opponents, mainly on a consequentialist basis and with the expectation of a greater ability to avoid harming innocents. I do not believe that the accountability problem or the lack of humanity in AWS justifies halting their development.
However, the most significant argument against AWS, in my view, concerns transparency and human control. This issue has deeper implications than the ability to uphold the laws of armed conflict, since it calls into question human autonomy and the ability to prevent catastrophic harm. The initial step is to ensure the ability to cancel and stop any action performed by AWS. In the long run, there is a risk that humans will struggle to understand when to exercise this ability; military strategist Thomas Adams warned that in such a future, humans could hold only symbolic authority over AWS after the initial decision to wage war.
At the same time, in the long run AWS could fundamentally change the core principles of JWT and bring about dramatic moral progress for humankind. With their increasing accuracy and sophistication, a day may come when they enable effective warfare that imposes strategic will on the enemy without harming humans at all. In such a scenario, the principle of distinction between combatants and non-combatants could shift to a distinction between humans and property, while the principle of proportionality in harming non-combatants could become, for example, proportionality in harming humanitarian necessities such as food or medicine.
Between the dystopia of unstoppable wars and the utopia of wars without harm to human life, humanity must navigate the development of AWS’s remarkable capabilities while safeguarding autonomy and safety.