Countering AI-Driven Disinformation: An International Synthetic Media Disclosure Agreement

Few technological developments have reshaped society as profoundly as the rise of generative AI over the past decade. These systems have transformed how people work, communicate, and produce information, ushering in a new era of convenience and productivity. Yet alongside these benefits, generative AI has introduced a structural vulnerability: synthetic media can now replicate authentic human communication at scale, with minimal cost and striking realism. In early 2019, Chinese scholar Li Bicheng envisioned a world in which AI systems could adopt realistic personas that replicate human activity in order to bend political opinion and advance their maker’s agenda (Irving, 2024). Less than a decade later, this dystopian scenario has become reality.

This vulnerability presents a fundamental challenge not because synthetic media exists, but because it circulates without disclosure. A wide range of actors, from states to private organizations to individuals, can generate realistic content that appears authentic. Modern AI systems can create increasingly realistic content at a scale and speed that is beginning to outpace disinformation defense measures (Helmus & Chandra, 2024). Distinguishing truth from fabrication is becoming increasingly difficult.

Current countermeasures remain inconsistent. Warning labels are a widely studied intervention that can reduce belief in false content, but their effectiveness varies with label design and content (Martel & Rand, 2023). Additionally, private platforms are shaped by corporate priorities, economic incentives, and political pressures, making labeling inconsistent (Bateman & Jackson, 2024). While initiatives such as the European Commission’s Code of Practice on Disinformation have improved transparency, these measures hold accountable only content creators within EU jurisdiction and cannot fully address foreign manipulation (European Commission, 2022). Because disclosure standards are neither universal nor internationally enforceable, deceptive synthetic content can circulate freely across borders.

To address this vulnerability, the international community must establish a multilateral Synthetic Media Disclosure Agreement. Modeled on existing arms control agreements and the humanitarian law framework of the Geneva Conventions, this agreement would require mandatory disclosure of synthetic content and impose individual accountability for deliberate deceptive use. By creating consistent rules governing disclosure, such an agreement would stabilize the global information environment while preserving legitimate uses of artificial intelligence.

The Security Risks of AI-Driven Disinformation

AI-generated disinformation threatens global security by eroding informational trust, a fundamental requirement for political and social stability. As generative systems can produce large volumes of convincing synthetic content, “truth decay” erodes public confidence as the line between authentic and generated is blurred (Helmus & Chandra, 2024). As predicted by Li Bicheng, AI-powered bots now mimic human communication patterns with remarkable accuracy, blending with authentic behavior and making detection significantly harder (Irving, 2024).

The Russo-Ukrainian war illustrates how damaging synthetic media can be to international security. Throughout this conflict, fabricated videos of combat operations, falsified diplomatic communication, and generated images of attacks have circulated to the public. While such deepfakes have so far remained relatively unsophisticated and easy to dismiss, this conflict serves as a warning of the consequences of failing to attribute or condemn fabricated material (Kuźnicka-Błaszkowska & Kostyuk, 2025). Without coordinated policy, militaries, civilians, and policymakers are vulnerable to psychological manipulation.

This gap in international law extends beyond military conflict. Synthetic media can be used to fabricate policy announcements, distort democratic processes, and enable microtargeting (Saab, 2024). Because generative tools are accessible globally, regulation cannot single out a particular country, government, or private actor. The threat arises not from the technology itself, but from the lack of disclosure requirements governing its use. Without clear norms, plausible deniability shields those who abuse synthetic media, undermining trust in institutions and destabilizing global information systems.

Policy Proposal: A Synthetic Media Disclosure Agreement

Because AI-generated disinformation ignores borders, states should establish a multilateral Synthetic Media Disclosure Agreement to restore stability. Rather than restricting the development or use of generative AI, the agreement would require transparency and accountability in its circulation. This approach mirrors existing international frameworks, such as the Geneva Conventions and nuclear arms agreements, which do not eliminate weapons but establish norms governing their use.

The first pillar of the agreement would require mandatory labeling of synthetic content intended for public distribution. All AI-generated or AI-altered media would require a clear, standardized disclosure flagging its synthetic origin. Like public health warning labels on tobacco or other carcinogenic products, which do not prohibit use but ensure informed awareness, AI labels would not prevent individuals from engaging with synthetic media but would resolve ambiguity about its origin. The goal is to enable informed judgment, not restrict free choice.

The second pillar would establish individual accountability for deliberately abusing synthetic media. States would be required to adopt domestic legal frameworks that specifically prohibit individuals in positions of influence (e.g., government personnel, contractors, and private actors) from distributing synthetic content without disclosure. This is especially important in high-risk contexts, such as diplomatic communications, elections, emergency announcements, or official statements. Just as the Geneva Conventions hold individual soldiers accountable for violations in combat, this framework would punish individual content creators who attempt to use generative AI to push disinformation.

Lastly, the agreement would outline enforcement mechanisms. As with nuclear nonproliferation agreements, coordinated diplomatic pressure, sanctions, or tariffs would encourage states to adopt the appropriate domestic reforms and provide the international community with leverage in responding to manipulation.

Importantly, this framework does not ban synthetic media or restrict legitimate expression. Instead, it establishes disclosure as a shared expectation and creates clear consequences for individuals who deliberately weaponize synthetic media to deceive others.

Strategic Assessment: Feasibility and Effectiveness

This approach is viable because it builds on international security models states already recognize. The EU’s Code of Practice shows that transparency reforms can be implemented at scale, while NATO’s continued coordination demonstrates that states are already equipped for multilateral cooperation (European Commission, 2022; NATO, 2024). Moreover, nuclear nonproliferation agreements and the Geneva Conventions demonstrate that international norms can reduce abuse by establishing shared expectations and clear consequences for violations.

Although censorship could reduce this vulnerability, it would undermine the productivity and creative potential that make generative AI valuable. Disclosure requirements offer a better alternative, preserving freedom of expression while promoting transparency and protecting privacy (Romero Moreno, 2024). Synthetic media would remain legal for artistic, educational, and commercial purposes. The agreement targets deception, not creation. By focusing on transparency, the framework avoids civil liberty violations while addressing the vulnerability created by undisclosed synthetic content.

Challenges remain, particularly the likelihood that states or content creators may refuse to join the agreement. Additionally, labeling content will not remove synthetic media already in circulation. However, continuous monitoring, international coordination, and credible consequences can help manage these challenges.

Conclusion

Generative artificial intelligence has fundamentally altered the global information environment. As Li Bicheng predicted, synthetic media is becoming increasingly difficult to distinguish from authentic content. The threat arises not from the existence of synthetic media, but from the rise of distrust and manipulation attributed to its undisclosed use. Without shared international norms mandating disclosure, informational stability will continue to erode.

A Synthetic Media Disclosure Agreement offers a realistic and effective solution. By requiring disclosure and establishing individual accountability, the international community can restore transparency without restricting legitimate use of generative AI. Although violations would still occur, shared norms and clear consequences would help protect the integrity of global information while enabling society to benefit from responsible AI use.

Sophie I. Kent
Sophie I. Kent is a cadet at the United States Air Force Academy, where she studies Management with a minor in Global Logistics. Her academic and research interests focus on artificial intelligence, information warfare, and the governance challenges posed by emerging technologies. She examines how synthetic media and AI-driven disinformation are reshaping global security and international stability. Sophie intends to pursue graduate study in international relations and public policy and serve as an Air Force Intelligence Officer.