Preventing Blind Trust: U.S.-China Cooperation on Military AI Judgment

Authors: Hyeyoon Jeong and Mathew Jie Sheng Yeo*

In The Art of War, Sun Tzu extolled the virtue of speed, proclaiming, “Speed is the essence of war.” Today, the militarized use of AI offers a swiftness and agility unprecedented in warfare. AI-enabled systems embody this principle by enabling decisions to be made at machine speed and scale, greatly enhancing the effectiveness of militaries.

The transition from human-driven to machine-driven warfare is already underway. Historically, warfare has been a fundamentally human endeavor, with strategy, tactics, and decision-making falling squarely under human control. Yet traditional warfare carries inherent limitations, including decision-making latency and a heavy reliance on human resources, which constrain operational speed and efficiency. The rapid development and integration of AI technologies on the battlefield thus presents an enticing solution to these long-standing challenges. An AI-enabled system not only streamlines decision-making and functions reliably under extreme stress, but also enhances operational effectiveness, lowers logistical burdens, and, perhaps most importantly, reduces risks to human life.

Behind these glowing technological advancements lies a significant risk: automation bias. As reliance on AI deepens, human operators grow increasingly prone to accepting AI judgments uncritically. In warfare, where decisions must be made in milliseconds, the human role can quickly shift from decision-maker to passive validator. This tendency to over-trust an automated system strips operators of their autonomy, and without proper oversight, especially when the system's output conflicts with other information, it raises the risk of accidents, errors, and broader, more pervasive harms.

A recent military application of AI illustrates this danger. In the Israel-Hamas conflict, the Israel Defense Forces (IDF) deployed an AI-based targeting system, “Lavender,” for its operations. The system automatically generates target lists for IDF ground operators by analyzing vast datasets to identify Hamas operatives. Upon receiving the AI-generated kill list, Israeli operators reportedly spent as little as 20 seconds reviewing and verifying each target, barely enough time to glance at superficial details, let alone conduct a comprehensive verification. In such cases, human oversight becomes a mere formality: lethal decision-making turns into a process driven primarily by machines and algorithms, with human involvement reduced to a rubber stamp.

Crucially, the risks of deferring to AI-generated decisions, and thereby succumbing to automation bias, are neither isolated nor static; they are global and growing. The 2020 F-16 air combat simulations conducted by the Defense Advanced Research Projects Agency (DARPA), along with the 2023 live trials of the two-seat X-62A VISTA (a modified F-16), demonstrated that AI can outperform human pilots in certain tactical engagements. These breakthroughs show that AI has the potential to displace human decision-making in real-time engagements, and the accelerating pace at which such capabilities are being fielded raises a pivotal question: Will AI remain an auxiliary tool, or will it ultimately replace human judgment in military decision-making? With the major powers already fielding advanced AI systems in their militaries and investing heavily in military AI, questions of overreliance on AI and the resulting erosion of meaningful human judgment will only grow more pressing and urgent.

To be sure, human decision-making is by no means infallible, and history has shown that miscalculations in war can have catastrophic consequences. But unlike humans, AI lacks moral judgment, contextual awareness, and accountability. In a traditional military command setting, flawed decisions can, at least in principle, be questioned, revised, or overridden by another human actor. In stark contrast, an AI-dominated, rather than AI-enabled, system delegates most decision-making to the machine. When an AI-driven error occurs, mistakes may unfold at a scale and speed that render human intervention impossible.

These challenges are further compounded by the accelerating AI arms race between the U.S. and China. As both militaries acquire more advanced, AI-empowered weaponry and integrate AI into their operations, much as the IDF did with Lavender, the incentives to delegate to and rely on machines may grow stronger. This erodes the safeguards required to retain human oversight, risking decision-making that is AI-dominated rather than AI-enabled.

Nevertheless, there is a silver lining. Since neither side can assume it is immune to automation bias born of overreliance on military AI, particularly in high-stress environments, the risk is mutual, and mutual risk points to a potential area for cooperation. In this regard, adopting a human-centric approach to countering overreliance on AI in military systems presents an opportunity for U.S.-China cooperation.

Firstly, Beijing and Washington should formally acknowledge the dangers of automation bias and overreliance on AI in warfare. Building on the Biden-Xi summit agreement reaffirming the necessity of maintaining human control over decisions to use nuclear weapons, the two governments should extend this principle to all military AI applications. A joint declaration by President Trump and President Xi emphasizing the need to preserve human judgment in the use of AI, without hindering technological advancement, would not only demonstrate high-level political will but also inject renewed impetus into global efforts to constrain the unchecked expansion of AI in combat, particularly where excessive reliance could undermine human oversight. Such a statement could help establish guardrails against delegating life-and-death decisions to autonomous systems while promoting responsible innovation.

Secondly, out of mutual concern over automation bias, the U.S. and China can engage in dialogues to clarify the principle of “meaningful human control” (MHC), with the ultimate goal of embedding MHC into their respective military AI systems. While this concept has gained support within the framework of the Convention on Certain Conventional Weapons (CCW), ambiguity and doubt remain between the U.S. and China. Within the CCW, “meaningful” has been interpreted as “appropriate levels of human judgment,” yet this formulation still leaves room for differing interpretations and skepticism. It would therefore be highly beneficial if the U.S. and China could clarify, at a deeper level, what constitutes “meaningful” or “appropriate.”

Furthermore, despite ongoing efforts within the U.S.-China Track II Dialogue to develop a shared glossary of AI-related terms, no mutually agreed glossary has been finalized to date. While the Chinese side has begun to articulate a more concrete understanding of meaningful human control, the concept is notably absent from the terminology used by their American counterparts. A clearer, jointly refined definition, building on the CCW’s foundation and guided by mutual interests, would help increase transparency around AI deployment thresholds and significantly reduce uncertainty and strategic misperception. These topics offer valuable starting points for more substantive and structured discussions between the United States and China.

Lastly, Washington and Beijing can each unilaterally strengthen training programs for military commanders and personnel operating AI systems. As the role of AI on modern battlefields expands, so does the need for human expertise capable of effectively employing and supervising these technologies while understanding their limitations, biases, and risks. Strengthening AI literacy would not only demonstrate a commitment to mitigating the risks of automation bias and overreliance on military AI, but could also signal goodwill and transparency.

Moreover, the content of these training programs, including cases, key lessons, dilemmas, and imperatives, could subsequently serve as an agenda for Track 1.5 or Track 2 dialogues between the U.S. and China. In this way, efforts to harden each military against AI overreliance and automation bias could lay the groundwork for confidence building and future U.S.-China military exchanges.

As the world moves to harness AI’s speed in war-fighting and decision-making, avoiding the pitfalls of overreliance on AI and reaffirming meaningful human control become ever more imperative. A human-centric approach to the military use of AI may well be the pathway to more stable and cooperative relations between the U.S. and China.

*Mathew Jie Sheng Yeo is a researcher at the Taejae Future Consensus Institute, where he focuses on fostering cooperative relations between the U.S. and China. He also serves as the assistant director of the Center for Strategic Studies at the Fletcher School of Law and Diplomacy, Tufts University. Mathew is currently pursuing his Ph.D. at the Fletcher School of Law and Diplomacy.

Hyeyoon Jeong is an independent researcher focusing on the military applications of artificial intelligence. Her work centers on human-machine interaction, examining both the potential risks and the synergies that can emerge through such interaction, and drawing out their policy implications. She previously served as a researcher at the Taejae Future Consensus Institute and as an assistant professor in the Department of International Relations at the Air Force Academy, Republic of Korea.