Humanizing the Use of Autonomous Weapon Systems

Authors: Nafees Ahmad & Shezan Samrat*

Human rights commitments lie at the heart of every innovation guiding the use of autonomous weapons, and they present unprecedented challenges. The geo-strategic application of algorithmic programming, the employment of geopolitical tactics, the manoeuvrability of technological data in armed conflicts, and pragmatic war policymaking still confront a multitude of challenges inherent to Artificial Intelligence (AI). Isaac Asimov first presented the three laws of robotics in his 1942 short story "Runaround". First, a robot must not harm a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the directions given to it by a human, except where they conflict with the first law. Third, a robot must protect its own existence so long as doing so does not conflict with the first two laws. These laws serve as a legal, practical, and ethical guideline. Still, more than 80 years later, as AI grows more prevalent and assumes traits that differ from those Asimov foresaw, it is clear that they only scratch the surface of what is likely.

According to Mark Robert Anderson, "Asimov imagined a society in which these humanoid robots would behave as servants and require a set of programming guidelines to keep them from harm… However, considerable technological improvements have occurred. Today, our ideas of what robots may look like and how we might work with them have significantly changed. The question of human rights, particularly in the context of conflict, should be taken into account as one of the most critical features of this robot. Maybe Asimov should have predicted the Universal Declaration of Human Rights (UDHR) and changed the first law to read: 'A robot shall not violate human rights, or by inaction let human rights be violated.'" The question of robot agency becomes more prominent, particularly in light of human agency. When the offender is a person or an organization, we can identify them and hold them accountable for violating human rights.

The International Criminal Tribunal for the former Yugoslavia indicted Slobodan Milošević, the former president of Yugoslavia, for war crimes and crimes against humanity arising from his violations of human rights. Charles Taylor, the former president of Liberia, was similarly convicted of war crimes and crimes against humanity by the Special Court for Sierra Leone and sentenced to 50 years in prison. Because a recognizable human agent is implicated in these acts, such cases are straightforward to handle. When autonomous machines are engaged, the problem becomes more challenging. There are three main ways in which AI can be deployed in warfare, distinguished by where the human sits relative to the decision loop. The first is the human in the loop: the AI weapon can launch an attack only if a person provides the go-ahead. In this situation, if there is any human rights infringement, that person can be held accountable through procedures such as a court martial. The second is the human on the loop: a person supervises the system and intervenes only when something goes wrong. Because the AI weapon can engage whenever the human believes everything is fine, the human does not have full agency in this situation.

The third is the human out of the loop, in which the human is completely removed from the AI weapon's decision-making process. Here, the issue of accountability becomes complex. Is the engineer who created the system responsible? If so, what of weapons that evolve independently of their manufacturer or creator? Is the commander who deployed the system responsible, and to whom? These questions have no simple answers. Even where one can determine who should be held liable, the difficulty of proving actus reus (guilty conduct) and mens rea (guilty mind) persists. Additionally, in some jurisdictions, military contractors and manufacturers frequently enjoy protection from legal action.
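For readers who think in terms of system design, the three modes can be pictured as a simple authorization gate. The sketch below is purely illustrative; the enum, function, and parameter names are invented for this example and do not describe any real weapon system.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must approve every engagement
    HUMAN_ON_THE_LOOP = auto()      # a human supervises and may veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human in the decision path

def engagement_permitted(mode: ControlMode,
                         human_approved: bool = False,
                         human_vetoed: bool = False) -> bool:
    """Return True if an engagement may proceed under the given control mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved        # explicit human go-ahead required
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # proceeds unless a supervisor objects
    return True                      # out of the loop: the machine decides alone

# The gate makes the accountability gap visible: only the first mode records
# a positive human decision; the second records, at best, a failure to object;
# the third records no human decision at all.
print(engagement_permitted(ControlMode.HUMAN_IN_THE_LOOP, human_approved=True))  # True
print(engagement_permitted(ControlMode.HUMAN_OUT_OF_THE_LOOP))                   # True
```

The legal point the sketch illustrates is the one made above: only in the first mode does a named human decision exist to which responsibility can attach; in the second, responsibility rests on a failure to intervene; in the third, there is no human decision left to examine.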

This makes it challenging for anyone injured by faulty autonomous weapons to pursue legal action against their makers. Nevertheless, individuals who disobey international law and violate human rights should not escape punishment. "Impunity for international crimes and for systematic and pervasive abuses of fundamental human rights is a betrayal of human solidarity with the victims of conflicts to whom we owe a duty of justice, remembrance, and restitution," says Mahmoud Cherif Bassiouni. Such problems are foreseeable. How to bring the use of technology in war to the forefront of global governance is one of the major challenges of the AI age.

Global discussions on the dangers of Lethal Autonomous Weapons Systems (LAWS), sometimes known as killer robots, are already changing. A June 2021 report to the UN Security Council on the Libyan civil war suggested that such weapons had been used to kill people for the first time in 2020. Iranian nuclear scientist Mohsen Fakhrizadeh was reportedly killed in December 2020 by a satellite-controlled machine gun equipped with AI. The question of whether to outlaw these weapons remained unresolved at the Convention on Certain Conventional Weapons the following year. The discussion is more complex than Asimov's laws would suggest.

Under the Universal Declaration of Human Rights (UDHR), everyone has the right to life, liberty, and security of person. How do we react, for instance, to the idea of a "just war"? According to just war theory, certain circumstances can justify going to war. According to David Luban's 1980 definition, "a just war is (i) a war of self-defense against an unjust war; or (ii) a war in defense of socially fundamental human rights (subject to proportionality)." But who is responsible if autonomous weapons are employed in such a war and humans are not involved? According to Mariarosaria Taddeo and Alexander Blanchard, only humans bear moral responsibility for the conduct of autonomous weapon systems (AWS), because intentions, plans, rights, obligations, praise, and punishment can be meaningfully attributed only to humans.

The UN Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, convened under the Convention on Certain Conventional Weapons, makes a similar argument: human accountability for decisions cannot be delegated to machines. According to Taddeo and Blanchard, AI systems deployed as AWS should be interpretable; decision-makers in provider firms and defence institutions must have a high degree of knowledge and awareness so they can evaluate the predictability of these weapons and prevent unanticipated results; the development cycle and procurement process must be transparent; the use of autonomous weapons must be justified under the principle of necessity; precautions must be taken to reduce the risk of lethal consequences even when non-lethal weapons are used; and a procedure must be established to identify errors and undesirable outcomes, assess their impact and costs, and define corrective actions. Audits, too, ought to be prioritized to facilitate accountability.

As our current global order shows, unjust war persists. "Artificial superintelligence arriving in a society where war is still normalized constitutes a catastrophic existential risk," Elias Carayannis and John Draper concluded in a 2022 paper. How can we be confident that we are defending human rights in this changing period? There must be rules and laws governing the use of autonomous weapons, as UN agreements show, and AI must be compatible with existing human values. In August 2021, Human Rights Watch urged states to adopt a treaty establishing requirements for adequate human supervision over LAWS. While some nations demand a complete ban, others say the discussion is premature. However, as the pandemic has shown, the pace of AI is unparalleled, so we must adapt our responses. One proposal is that AI should be used to forecast militarised interstate conflicts and that there should be a push for more technologically focused mediation strategies. For instance, by using language-analysis tools such as ChatGPT, we could track discussions on social media channels to better understand the dynamics in particular locations and the potential impact those dynamics may have on the global community (a simple illustration follows below). We must also work toward the tenets of the Sustainable Development Goals, because conflicts frequently arise where these fundamentals are lacking.
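As a purely illustrative sketch of the kind of language-based monitoring mentioned above, the snippet below scores a batch of hypothetical public posts for escalatory language. The keyword list, data, and function names are invented for this example; a real system would rely on a trained language model and far richer signals rather than keyword counts.

```python
from dataclasses import dataclass

# Hypothetical escalation-related terms; a real system would rely on a trained
# language model rather than a hand-picked keyword list.
ESCALATION_TERMS = {"mobilize", "strike", "invade", "retaliate", "blockade"}

@dataclass
class Post:
    region: str
    text: str

def escalation_score(posts: list[Post], region: str) -> float:
    """Fraction of posts from a region containing escalatory language."""
    regional = [p for p in posts if p.region == region]
    if not regional:
        return 0.0
    flagged = sum(
        any(term in p.text.lower() for term in ESCALATION_TERMS)
        for p in regional
    )
    return flagged / len(regional)

# Example usage with made-up data.
posts = [
    Post("region-a", "Calls to mobilize reserves are spreading"),
    Post("region-a", "Markets reopened after the holiday"),
    Post("region-b", "A new trade agreement was signed today"),
]
print(escalation_score(posts, "region-a"))  # 0.5
```

The design point is modest: such a score can only be a trigger for human analysis and mediation, never a substitute for it.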

Resource disputes at the state level could be clarified by improved tracking technologies such as satellites or drones. There is also potential for using AI to support decision-making. According to the UN, the focus must be on researching these cutting-edge intelligent technologies so that they can be put to use to reduce violence, promote global stability, and defend human rights. Taddeo and Blanchard's conclusion holds even as we navigate this brave new world: "In the age of use of autonomous weapons in warfare, [these systems] may perform immoral actions, but the morality of warfare can be upheld only by holding the individuals who design, develop, and deploy them morally accountable." One can only hope that, before it is too late, states will open their eyes and reach a worldwide agreement on LAWS. One can only speculate as to what Hugo de Groot, the Dutch "father of international law" and author of De Jure Belli ac Pacis (On the Law of War and Peace), might have thought about autonomous weapons and human rights.

Additionally, one can foresee non-contractual civil liability rules being adapted to AI in order to supplement and update the AI liability framework, along with new rules tailored to the harms caused by AWS. Such rules must aim to ensure that people harmed by AI systems enjoy the same level of protection as people harmed by other technologies, wherever they are in the world. To reduce the burden on victims of proving that an AWS caused their damage, the AI liability framework must establish a rebuttable presumption of causality. It must also grant national courts the authority to order the disclosure of information about high-risk AI systems suspected of having caused harm. The adequacy and practicality of such liability rules, their coherence with international human rights law (IHRL), their potential to chill innovation, and the interaction between international and national jurisdictions are among the concerns of stakeholders and academics that must be taken into account.

* Shezan Samrat, Senior Secondary School Student, Saharanpur-UP-247001. Contact: shezansamrat24[at]gmail.com

Dr. Nafees Ahmad
Ph.D., LL.M., Faculty of Legal Studies, South Asian University (SAARC), New Delhi. Nafees Ahmad is an Indian national who holds a Doctorate (Ph.D.) in International Refugee Law and Human Rights. The author teaches and writes on international forced migration, climate change refugees and human displacement, refugee policy, asylum, durable solutions, and extradition issues. He has conducted research on Internally Displaced Persons (IDPs) from Jammu & Kashmir and the North-East Region of India, worked with several research scholars from the US, the UK, and India, and consulted with several research institutions and NGOs in the area of human displacement and forced migration. He has introduced a new programme, Comparative Constitutional Law of SAARC Nations, for the LLM, along with International Human Rights, International Humanitarian Law, and International Refugee Law & Forced Migration Studies. Since 2010 he has served as Senior Visiting Faculty to World Learning (WL)-India under the India Health and Human Rights Program organized by World Learning, 1 Kipling Road, Brattleboro, VT-05302, USA, for the Fall and Spring semester batches of US students through its School for International Training (SIT Study Abroad) in New Delhi, India. nafeestarana[at]gmail.com, drnafeesahmad[at]sau.ac.in