Autonomous Weapon Systems: Understanding and Operationalizing Human Control

Much has already been written on autonomous weapon systems (AWS), and repeating the same conceptual description would be unnecessary. Here, we shall briefly discuss the difficulties in objectifying AWS[1] and examine the current developments on the concept of human control over AWS. The essay analyzes the combined report of the Stockholm International Peace Research Institute (SIPRI) and the International Committee of the Red Cross (ICRC) (Boulanin, Neil, Netta, & Peldan, 2020), released in June 2020, and Ajay Lele's article 'Debating Lethal Autonomous Weapon Systems' (Lele, 2019). To have a construct for the discussion ahead, let us define what exactly AWS means in this article. AWS is understood here as a military-grade machine that can make its own decisions without human intervention (Lele, 2019). If that is broadly the understanding of AWS, defining Lethal Autonomous Weapon Systems (LAWS) turns out to pose problems similar to defining 'terrorism', because of the subjectivity involved in the term 'lethal'. For example, cyber warfare can be equally or more lethal than an airstrike. Are cyber-attacks assisted by Artificial Intelligence (AI) to be considered LAWS? No consensus has been reached on the latter.

If that is the dichotomy involved in objectifying the definition, the term 'autonomy' of a machine system is itself contextual, which makes it difficult to arrive at a universal legal consensus. During the World Wars, remote-controlled tanks and guided missiles were considered autonomous because they could make decisions regarding their physical movements without soldiers manning them directly. Take another example: it is impossible for a pilot flying at Mach speeds to observe targets with the naked eye. Decisions must be made within a fraction of a second, something the human body is not built for. There, the decision is made by computers along with high-precision cameras. Isn't that autonomous as far as vision is concerned? Consider the US Tomahawk missile, a sub-sonic cruise missile capable of maneuvering its way towards a target without constant human supervision. Even this is autonomous!

But the concern surrounding development and deployment was not like that of today's AI-based AWS. No matter how advanced the autonomy was, the decision-making power and control over actions on the field remained the pure prerogative of humans. The introduction of AI changes that. We have arrived at a junction in history where no human can comprehend the societal structure (Winner, 1978, p. 290). Even within the military, the complex interdependence of technology and humans has reached an incomprehensible level. AI-enabled weapon systems have aggravated the 'black-box' concern, pushing all states to revisit the humanitarian and ethical standards of AWS.

As for current AWS deployment, airborne autonomous systems are settled at Unmanned Aerial Vehicles (UAVs), land-based robots are at a preliminary stage (the US SWORDS TALON), and sea-based systems are missile systems assisted by auto-detection. However, the threat of machines taking cognitive decisions without any human input is possible only with Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)[2]. ASI has not yet been invented, and scientists are not sure it is possible, but some strongly opine that it is not impossible and is likely to be realized within the first third of the next century (Bostrom, 1998). Such a superintelligence would be considered to have the capacity to become an uncontrolled offensive system, but the current developments largely fall under controlled, defensive systems (Lele, 2019).

Regarding AWS, the eight-year-long concern of the expert groups on emerging technologies centers on two main aspects: human control and accountability of AWS. The 2019 report of the Group of Governmental Experts (GGE) has drawn up four principles on which further AWS policy research would be undertaken. They cover International Humanitarian Law (IHL); human control and accountability; the applicability of international law to the usage of AWS; and the requirement that the development, deployment and usage of AWS adhere to the Convention on Certain Conventional Weapons (CCW) and the necessary international laws.

AWS: Ethics and Human Control

The aspect of human control of AWS is the major ongoing debate in international fora. Never before has a military operation been carried out completely autonomously by munitions, and thus no laws govern such aspects. These ex ante debates on the probable loss of human control are anchored to the machine's uncertain capabilities regarding predictability, the ability to analyze the environment, and differentiating civilians from combatants. While humanitarian law is unquestionably agreed upon when deploying AWS, the ethical standards to be set are much more complex because of their subjectivity. The ethics of a soldier differ from the ethics of a civilian. Debates on ethical standards for AWS are of two types: result-driven (the consequentialist approach) and action-driven (the deontological approach). The latter depends on the moral judgments of the user; it considers the rights of combatants and civilians alike while engaging in conflict. The former weighs the probable consequences of the military operation. International norms would take both approaches into consideration in arriving at a final draft, as the research is ex ante.

For a proper sense of this subjectivity, brood over the question: 'Save a fellow soldier or save a civilian? Which is ethical?'

To have ethics-based human control over AWS, there are three ways: strict control of the weapons, control of the environment, and a hybrid human-machine interaction. Of these, the last option is the most sophisticated and challenging. It keeps a human in the loop, and the entire decision-making is left to that human. She would be responsible for identifying the target and analyzing the environment, supported by AI-based analysis. At the current stage of AI development, this becomes necessary, as the intelligence of algorithms does not match that of a human.

Operational Challenges

Militaries have always used technology to enhance their capabilities and to ensure their dominance of force. AWS would be an exceptional addition to the arsenal and probably a leap forward for the military. Precisely because of this, human control becomes all the more necessary, so that AWS is used for the tactical and strategic advantage of the commanders but does not become the commander itself.

The challenge which all militaries across the world face is the knowledge required to operate such sophisticated AI-based weapons. To take control of an AWS as and when required, the supervisor of the system should have enough knowledge of its working, including the working of its algorithm. In addition, a deployed AWS will not always be actively operated by the controller; it would be left in automatic mode most of the time, which makes the operator dormant. The SIPRI report provides the concept of a 'safe human-machine ratio' to overcome the challenges of human-machine interaction. The formula is intended to yield the optimum number of operational personnel: if too many humans are involved in the loop, coordination becomes difficult, while too few make the decision-making strenuous to handle.

Nh = Nv + Np + 1

where

Nh – number of humans needed

Nv – number of vehicles

Np – number of payloads on those vehicles

+1 – an additional safety officer
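As a minimal worked illustration of the formula (a hypothetical sketch; the function name and the example deployment figures are my own, not from the report):

```python
def safe_human_machine_ratio(num_vehicles: int, num_payloads: int) -> int:
    """SIPRI's 'safe human-machine ratio': Nh = Nv + Np + 1.

    num_vehicles -- Nv, the number of autonomous vehicles deployed
    num_payloads -- Np, the total number of payloads on those vehicles
    The trailing +1 accounts for an additional safety officer.
    """
    return num_vehicles + num_payloads + 1

# A patrol of 3 UAVs carrying 2 payloads each (6 payloads in total)
# would call for 3 + 6 + 1 = 10 supervising humans.
print(safe_human_machine_ratio(3, 6))  # → 10
```

The linear form captures the report's trade-off: each added vehicle or payload demands another supervising human, so scaling up autonomy scales up, rather than eliminates, the personnel requirement.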

However, these three approaches are mutually dependent. On the whole, the report advises establishing a structural, cognitive and educational framework to embed humans into the working of AWS.[3]

Proceeding further, who, what, when and how are the unavoidable questions arising with human control of AWS. The questions of who supervises and what is supervised present a scenario technically similar to already deployed systems such as THAAD: the commander in control of strategy, deployment and decisions has the obligation to ensure that usage is in line with IHL. As for when, according to the GGE report the involvement of humans is required not just at the stage of usage but even at the pre-development and development stages. The last question, how, concerns the extent and type of human control. It requires compliance with applicable international law, the ability to retain and exercise human agency and moral responsibility for the use of force and its consequences, and the assurance of military effectiveness while mitigating risks to friendly forces.

Even if supervision becomes mandatory, AWS suffer from three distinct challenges: the human inclination towards machine bias, out-of-the-loop control problems, and under-trust. The probable solution again appears to be a sophisticated human-machine interaction, with a new structure to educate and train the operators.

Characteristics to be considered in drafting norms

The key characteristics to be considered are:

Weapon system: type of target; type of effect; mobility; types and capabilities of sensors; system complexity; duration of autonomous operation.

Environment: predictability; observability; controllability.

User: the physical and cognitive abilities of humans; the user's ability to understand the system; and the distribution of human control.

The first category, the weapon system, indicates the developmental and operational limits of AWS. Of course, in view of the ethical and humanitarian concerns, if a scientific solution arrives for those limits, there may come a situation where the military establishment would consider realizing Ellul's technological society.

The second category, the environment, emphasizes restrictions on operations to avoid civilian harm. One can think of not approving the usage of AWS in civilian spaces. Yet there is always the counter-argument that machines might be more efficient at differentiating combatants from innocent civilians, given their sensors and facial recognition algorithms. Surprisingly, the report has not touched on this aspect.

The third category, the user and human-machine interaction, is a vivid encouragement of human supervision and of retaining the ability to intervene in the AWS at any point.

Finally, the overarching concern regarding human control and ethical usage hinges on efficient international norms. The problems of accountability and the ethical debates show that states are not concerned with the technology itself but with the absence of laws. So the debate should revolve around the establishment of legal structures, both national and international, to develop and use AI systems in the military. The attributes categorized above become central in drafting the human control structures needed to deploy AWS in the armed forces. The complex interconnectedness of AI development and its integration into the latest weapon systems requires states to have their own norms on AWS while adhering to commonly agreed international laws. This lets states retain their authority to determine the extent of human control and, at the same time, encourages the international scientific community to engage actively in developing scientific solutions to uncertain autonomy.

On a concluding note, the reiteration of the objective and contextual definitions of AWS, the fear of unethical calls being taken by autonomous systems, and the loss of human agency take us to the texts of the French philosopher Jacques Ellul. His account, 'The Technological Society', holds that, with current development and humans' advancing dependence on technology, human agency would be completely taken over by techniques and technology. Such a society of ubiquitous technology would restrict the knowledge systems of human civilization. With this in mind, if one reads George Orwell's 1984, one would surely come away strongly advocating a ban on AWS development. However, Winner's 'Autonomous Technology' provides an excellent scrutiny of Ellul's work, reiterating the importance of understanding the change that technology brings into society, and how social structures change accordingly to accommodate such development. Read through Winner's account, the SIPRI report and Lele's article critically examined here provide the best possible way towards incorporating AWS into the military, with the necessary considerations to account for while drafting the international norms. It is, after all, in the ethos of the military to adopt advanced technology and improve its efficiency.

References

Bostrom, N. (1998). How Long Before Superintelligence? International Journal of Future Studies, 2.

Boulanin, V., Neil, D., Netta, G., & Peldan, C. (2020). Limits of Autonomy in Weapon Systems: Identifying Practical Elements of Human Control. Stockholm: SIPRI.

Lele, A. (2019, January-March). Debating Lethal Autonomous Weapon Systems. Journal of Defence Studies, 13(1), 33-49.

Winner, L. (1978). Autonomous Technology. USA: MIT Press.


[1] This is necessary for states to arrive at common consensual norms in the development and deployment of AWS.

[2] Systems whose intelligence is far ahead of human intelligence, having the capacity to cognitively comprehend wide-ranging variables in their surroundings and to calculate numerous aspects simultaneously.

[3] I deliberately chose the articulation 'embed humans into AWS' because training the operators, providing a sufficient number of them for an AWS system, involving them in the process, etc., all arise from the preconception that soldiers should be able to learn and use AWS. It is seldom thought that AWS should be designed in such a way that it meets the requirements of a particular commander.