AWS: A Threat to International Humanitarian Law or a Necessary Technological Evolution?

“A killing machine that does not rely on human commands to choose and eliminate its targets” sounds like something out of a dystopian movie. While some may find it exciting to envision an imaginary world where society is living in a perpetual state of fear due to the unstoppable march of technology that forces humanity to scramble for survival, others may choose not to entertain the idea of algorithms having the power to determine humans’ fate. Regardless of our personal feelings on the matter, we cannot afford to ignore the fact that killer robots (hereafter referred to as “autonomous weapon systems”) are no longer just a nightmare-inducing fantasy—they have been a part of our reality for some time now.

According to the ICRC, autonomous weapon systems (AWS) are any weapons that can select (search for, detect, identify, track, or select) and attack (use force against, neutralize, damage, or destroy) targets without human intervention. It is worth noting, however, that these weapons still depend on some level of human involvement, particularly during activation. The lineage of AWS can be traced back to the development of autonomous vehicles, but it was only in the 20th century that the concept of lethal autonomy gained prominence, with the integration of artificial intelligence supporting autonomous functions into military weapons.

Reports indicate that AWS have been used on many battlefields worldwide. In 1943, Germany employed a guided anti-ship glide bomb, code-named “Fritz X,” against an Italian battleship, resulting in 1,254 deaths. In 1945, the U.S. used autonomous homing bombs, called the “Bat,” to sink a Japanese destroyer off Okinawa, as well as an oiler and a picket boat on the Yangtze River. In 1972, the U.S. Air Force deployed laser-guided bombs during the Vietnam War to destroy several strategic targets in North Vietnam, such as port facilities in Hải Phòng, a hydropower plant in Lăng Chi, the “Dragon’s Jaw” Bridge in Thanh Hóa, and the Paul Doumer Bridge in Hanoi. Post-9/11, drones became a regular tool for search-and-attack operations in the Global War on Terrorism in Afghanistan, Iraq, Yemen, Somalia, Pakistan, and Syria. In the Russo-Ukrainian War, tens of thousands of reconnaissance drones and loitering munitions have been sent to the front line for both defense and offense.

Among the many cases, the Second Nagorno-Karabakh War is the most notable, as it is deemed the first war won, in a sense, by AWS. Thanks to the cutting-edge Turkish-made UCAV “Bayraktar TB2” and the Israeli-made kamikaze drone “Harop,” Azerbaijan was able to overwhelm Armenia’s ground-based air defenses. While AWS played a vital role in Azerbaijan’s military strength, it is important to note that they did not win the war by themselves; at most, they helped exploit gaps in the Armenian air defenses. The credit for winning the war still goes primarily to Azerbaijan’s troops, who, despite suffering heavy casualties, gradually wore down Armenian forces until the latter realized that further resistance would only bring more losses.

Based on these precedents, it can be said with confidence that AWS have the potential to significantly alter the warfighting paradigm. Warfare will be characterized by weaponry with varying degrees of autonomy that is more efficient, stealthier, cheaper, and safer than conventional weapons. As a result, States are likely to vie for investment in AWS development, both to gain an edge on the battlefield and to seize or maintain status as “the world’s most powerful nation.” This could lead to a new arms race, one that could surpass the Cold War nuclear arms race in severity, as it would not be driven solely by governments and defense industries. Unlike nuclear technology, whose commercial value is largely limited to power generation, artificial intelligence commands a far higher market value owing to its much wider use in daily life. For this reason, other industries are also likely to take an interest in AWS development, further fueling global competition.

The nature of AWS raises significant security concerns. Although current AWS still operate in ways humans can grasp, humans may be unable to keep pace with their rapid development and adoption a few decades from now. This could create a situation in which the only way to counter AWS is to rely on other AWS, and all nations would inevitably have to adopt them, at least for defensive purposes. The problem is that States then face not only the classic security dilemma of other States increasing their military capacity, but also the unpredictability of the systems themselves. There is a risk of “flash war,” in which weapon systems’ algorithms feed off each other, causing non-hostile actions to be perceived as threats that trigger hostile responses from every system involved. Something similar, known as a “flash crash,” has occurred in financial markets. The difference is that trading can be halted to avert disaster in a flash crash; once a flash war begins, there is no pulling back.

For many, AWS are just another military instrument that will present security threats if not properly controlled by States. However, there is a more fundamental issue: with AWS, we are entrusting life-or-death decisions, decisions that require legal and moral judgment, to inanimate machines made of sensors and software that have no conscience. Humans have killed one another since the dawn of civilization and later turned to organized warfare. In times of armed conflict, moral principles become ambiguous and decisions to kill are somewhat normalized, or even encouraged. Even then, moral agency is always present, guiding humans to weigh the justifiability of taking a life and to comprehend the gravity of such a decision. Lacking a conscience, by contrast, AWS can neither understand nor respect the value of life, and are thus unable to fathom the significance of its loss. Allowing robots, no matter how advanced, to determine someone’s life and death is therefore an affront to human dignity.

Humans are responsible for activating AWS, but the processes of selecting and attacking targets rely heavily on the weapons’ sensors (motion, heat, and video) and software (such as facial and object recognition), calling into question the accuracy and accountability of AWS. On open battlefields, these technologies may suffice to carry out accurate attacks, provided we turn a blind eye to potential malfunctions and unpredictable behavior. In populated areas, however, it is extremely challenging, even for humans, to distinguish between military objectives and civilian objects, combatants and non-combatants, and active combatants and those hors de combat. When civilian casualties or unacceptable collateral damage occur in such situations, there will be debate over who should be held accountable, because, unlike with conventional weapons, the chain of accountability in the deployment of AWS is far from clear.

One of the biggest problems with AWS is that they are not specifically regulated by international humanitarian law (IHL) treaties. Those who develop, deploy, and use weapons are the ones bound by the rules on the conduct of hostilities, particularly the rules of distinction, proportionality, and precautions in attack, as well as the principles of humanity and the dictates of the public conscience. These rules and customs create legal obligations and accountability for humans, but they do not extend to machines, computer programs, or weapon systems used in armed conflict. It therefore falls to human combatants to ensure that the use of AWS adheres to IHL. Additionally, to establish the predictability and reliability of AWS, human control is essential and should be exercised throughout the development stage (are the operational parameters integrated into the system IHL-compliant?), the activation stage (does the technical performance follow the predetermined operational parameters?), and the operation stage (can the criteria be adjusted, or the attack cancelled, if necessary?).

Another tricky aspect of AWS’ position under IHL is the “accountability gap.” States could be held liable if their armed forces violated IHL while using AWS, or if feasibility tests were not conducted prior to deployment. The same applies to manufacturers, who might be held liable for programming errors that lead to AWS malfunctions. The liability of individuals involved in the development and deployment stages, on the other hand, is more complicated: they cannot be held liable for actions carried out independently by AWS unless it is proven that the systems were intentionally programmed to violate IHL, or were recklessly activated even though the person in charge was aware of potential faults.

Despite these pressing concerns, there is no authoritative legal framework in place that specifically governs the use, development, and production of AWS. Lethal AWS (LAWS) only began to be discussed at the UN Convention on Certain Conventional Weapons (CCW) in 2014, followed in 2019 by the adoption of 11 Guiding Principles that carry no legal force. Dialogues on multilateral regulation are still ongoing and are not projected to conclude until at least 2026.

The process of establishing regulations for AWS has been extremely slow because countries’ positions differ, making consensus difficult to reach. Nearly all countries have expressed concern about the potential threats posed by AWS and acknowledged the importance of multilateral talks to address the issue. Nonetheless, only 30 countries, mostly smaller nations and some not even CCW States parties, have supported a ban on LAWS and called for a new international treaty. Unsurprisingly, countries investing heavily in AWS development, such as Australia, China, India, Israel, Russia, South Korea, Türkiye, the U.K., and the U.S., have made it clear that they wish to preserve the status quo (except China, which has called for a ban on LAWS limited to their use, not their development or production).

Taking into account all the security, moral, ethical, and legal concerns raised so far, it is worth asking whether AWS are entirely ‘evil.’ Those in favor of AWS argue that people tend to exaggerate the risks posed by such weapons, and that opposition to AWS often rests on erroneous assumptions and speculation while ignoring the military and humanitarian benefits. AWS have the potential to enhance a State’s military power by acting as a force multiplier that reduces the need for warfighters without compromising the troops’ efficacy; delivering lethal but more accurate and surgical strikes, with less risk of collateral damage than human combatants and conventional munitions; expanding the battlefield to previously inaccessible areas; increasing the effectiveness of kill chains by speeding up the sorting of the overwhelming amounts of data used to find, fix, track, target, and engage targets, and to assess strike results; minimizing human involvement in ‘dull, dirty, or dangerous’ missions and mitigating the risk of human error, thereby reducing casualties; and significantly trimming military budgets. With these potentials in mind, insisting on a ban on AWS would not only be infeasible and ineffective, since most of the technology involved comes from civilian development, but would also deprive nations of technological advancements necessary for their survival.

AWS proponents also argue that ‘removing’ humans from high-stress combat zones and ‘replacing’ them with AWS is ethically preferable. Combatants would then be spared the psychological repercussions of war, which are not only detrimental to their well-being but also increase the likelihood of war crimes. This, in turn, minimizes the risk of harm to civilians, a primary goal of IHL. Moreover, sacrificing inanimate machines is plainly more acceptable, in terms of morale, than sacrificing soldiers, although both options carry significant costs: sacrificing soldiers means the loss of life and survivor’s guilt, whereas sacrificing munitions can have long-term economic and strategic consequences.

In conclusion, both the good and the bad must be weighed when discussing AWS. The challenge now is to find a compromise between these opposing perspectives. Rather than banning the use, development, and production of AWS altogether, the focus should be on expediting the creation of legally binding instruments that regulate how AWS are used and the extent of human involvement therein. These international regulations should guarantee that, in the use of AWS, humans are always in-the-loop (the weapons’ actions are initiated by a human) and/or on-the-loop (the weapons’ actions can be overridden by a human operator). Furthermore, there should be clear and IHL-compliant limitations on the use of AWS: What purposes are they intended for? Are they aimed solely at other autonomous systems? Under what circumstances is their use acceptable? How can we ensure they operate according to the intentions of the humans behind them? Who should be held accountable when things go awry? Finally, there should be tangible efforts to ensure that all countries, especially those playing a critical role in the development and deployment of AWS, acknowledge and adhere to these regulations.

Andi Faradilla Ayu Lestari
Master’s student in International Relations at Universitas Gadjah Mada with an interest in international politics, security studies, and peace studies.