The Ethics of Artificial Intelligence in Defence – Book Review

War has always been subject to ethical scrutiny: organized violence is expected to be justified and conducted within whatever limits are possible. For this reason, advancements in military technology are judged not only by their battlefield efficiency but also by how they are employed to achieve military objectives. Some technologies have substantial disruptive potential and can significantly change the conduct and outcomes of war. One such technology is Artificial Intelligence (AI), which is not yet mature but is already producing profound effects in the defense sector. The integration of AI into defense raises pressing questions about its ethical implications.

Mariarosaria Taddeo’s The Ethics of Artificial Intelligence in Defence is a policy-oriented contribution to debates on the ethics of military applications of AI. The author is a professor of digital ethics and defense technologies at the University of Oxford. Published by Oxford University Press in 2025, her book develops a structured framework to identify, analyze, and address the ethical challenges arising from current and emerging uses of AI in the defense sector. It integrates contemporary AI ethics with Just War Theory and translates this understanding into practical guidance for defense practitioners.

Mariarosaria Taddeo, The Ethics of Artificial Intelligence in Defence (Oxford University Press, 2025), 298 pp.

The book advances two closely related claims. First, AI in defense raises ethical issues that are different from those in civilian domains. The combination of autonomy, learning, and adaptive behavior in AI systems, when deployed in a military setting, creates more serious problems than those associated with commercial or administrative AI. Second, while ethical principles are necessary, they are not sufficient on their own. Principles such as responsibility, transparency, and human control are quite general in nature and must be supported by methodologies and strong institutional mechanisms to guide real-world military practice. The author frames these claims as both conceptual and practical challenges and addresses them across eight chapters.

AI has become integral to modern military operations, shaping logistics, intelligence, cyber operations, and targeting decisions. The core ethical challenge, Taddeo argues, lies in the predictability problem (p. 4). Unpredictability arises not only from technical features of machine learning but also from operational contexts, human-machine teaming, data curation, and accumulated technical debt. To address these risks, the book introduces the Levels of Abstraction (LoA) methodology, which clarifies that ethical analysis depends on purpose and perspective rather than technical function alone. She notes, “Because of their malleability, the ethical challenges of digital technologies (AI in particular) are not defined by their design function as much as by the purpose for which these technologies are deployed” (pp. 14-16).

The author identifies three main uses of AI in defense: (i) sustainment and support; (ii) adversarial non-kinetic; and (iii) adversarial kinetic. She argues that ethical risks increase as AI systems move closer to the use of force (adversarial kinetic). While support functions raise issues of transparency and accountability, adversarial uses add risks of escalation and harm to individual rights. In adversarial kinetic contexts, questions of human autonomy, dignity, and compliance with Just War principles become central (p. 18).

The author notes that, despite AI’s growing military role, only a few defense actors, including the US Department of Defense, the UK Ministry of Defence, and NATO, had adopted formal AI ethics principles by 2023. These frameworks rely heavily on the notion of “responsible AI,” which often repeats broad legal or moral maxims without operational guidance and remains insufficient for high-risk military contexts involving escalation and the use of force (pp. 30-32). To address this gap, the author proposes a methodology that applies ethical principles throughout the AI life cycle, from procurement to retirement. Taddeo argues that ethics cannot be confined to system design, since AI systems evolve within complex organizational environments where new risks emerge (p. 60).

Taddeo situates AI-augmented Intelligence Analysis (AIA) as a response to information asymmetry and data overload that “overwhelm all previous forms of analytic tradecraft” (p. 72). While AI enhances intelligence analysis by automating data filtering, pattern recognition, and behavioral assessment, these gains come with serious ethical risks. Four key concerns are identified: (i) intrusion resulting from large-scale automated data processing without human oversight; (ii) reduced explainability and accountability due to opaque systems; (iii) the reproduction and amplification of bias through flawed data; and (iv) the risk of authoritarian drift. The author stresses that AIA must remain purpose-bound, overridable, and institutionally governed. She cautions that AI is “not fit for every task” and that robust ethical oversight is required to protect civil liberties (p. 96).

Taddeo argues that AI heightens ethical risks in cyberspace by accelerating speed, scale, and opacity in an offense-persistent domain where escalation is hard to control. She rejects simple analogies with kinetic warfare, showing that cyber operations disrupt traditional concepts of force, harm, and sovereignty. She outlines principles for a “just non-kinetic cyberwarfare” to foster a more stable cyberspace, which disincentivizes cyberattacks that increase instability in the infosphere (p. 122).

The author argues that classical deterrence theory, which is built on attribution, proportional retaliation, and credible signaling, does not apply to cyberspace. She proposes an alternative deterrence model centered on target identification, retaliation, and demonstrative action (p. 144).

The author presents Autonomous Weapon Systems (AWS) as the most contested adversarial and kinetic use of AI, focusing on their ethical permissibility, legal regulation, and definitional uncertainty. She situates the debate within International Humanitarian Law (IHL), noting that AWS must meet the principles of necessity, proportionality, and distinction. She also highlights deep disagreement over whether autonomous systems can meaningfully satisfy the principles of IHL. Delegating life-and-death decisions to machines raises legitimacy concerns regardless of technical sophistication.

Moral responsibility for AWS rests solely with humans, but AI’s unpredictability and distributed design create a persistent responsibility gap. Taddeo addresses this by proposing a “moral gambit,” in which human agents accept responsibility for both intended and unintended outcomes of AWS use (pp. 198-201). The author contends that the “gambit” may be acceptable for non-lethal systems, but it is impermissible for lethal AWS, as it amounts to gambling with human lives and violates principles of distinction and moral equality.

The book successfully bridges AI technical literature, ethics (particularly Just War Theory), and defense practice. However, it is primarily a normative and conceptual work. The author has prioritized principles and ethical reasoning over detailed case studies. As Taddeo herself admits, the book should not be taken as an introductory text. Rather, it is aimed at readers with prior familiarity with AI ethics and military ethics. Timely and methodical, the book can be useful for scholars, defense officials, and ethicists seeking a conceptual framework and policy guidance on governing AI in the defense sector.

Iraj Abid
Iraj Abid is a Research Officer at the Center for International Strategic Studies Sindh. She holds a Master’s degree in Social Sciences from Shaheed Zulfikar Ali Bhutto Institute of Science and Technology University, Karachi. Her research interests include International Relations and Strategic Affairs. She can be reached at irajabid[at]cisss.org.pk.