The New Digital Cold War and Why the Global South Needs a Digital Non-Aligned Movement

The next cold war may not begin with soldiers crossing borders or missiles moving into position. It may begin with a completed acquisition being pulled back across geopolitical lines, months after the deal was done, and a question few governments are ready to answer: who owns intelligence once it can act?

That is the deeper signal behind China’s move to force Meta to unwind its acquisition of Manus, a Singapore-headquartered AI startup with deep roots in China. The deal had reportedly closed in December 2025 without prior Chinese regulatory approval, only to be investigated in January and later reversed under Beijing’s foreign investment security review mechanism. What mattered was not merely where Manus was incorporated, but where its technology, talent, data, and infrastructure came from—and what kind of AI capability Meta was acquiring. Reuters described Manus as an AI agent tool capable of autonomously executing complex tasks.

This timing matters. Beijing was not simply blocking a proposed deal before it happened. It was signaling that a strategic AI transaction can be revisited even after completion if the state believes capability, talent, or technological provenance remains tied to national interest. Manus, therefore, matters beyond one company or one transaction. It suggests that the digital cold war has entered a new stage: it is no longer only about chips, cloud infrastructure, or critical minerals, but about the ownership of intelligence itself.

From Restriction to Retaliation

The Manus case is shocking because the rivalry has moved into a new layer: from controlling the tools of intelligence to controlling the ownership of intelligence. Since 2019, when Huawei was added to the US Entity List after US authorities determined it had engaged in activities contrary to national security or foreign policy interests, Washington has steadily treated Chinese technology as a security concern.

The escalation later shifted toward advanced chip controls, AI-related investment screening, and broader scrutiny of technology flows into China. The US Treasury’s outbound investment program, effective from 2 January 2025, can prohibit or require notification for certain US investments involving countries of concern in semiconductors, quantum information technologies, and artificial intelligence.

China has not remained passive. In December 2024, Beijing banned exports to the United States of gallium, germanium, and antimony, citing national security concerns after Washington’s latest chip-sector restrictions. In April 2025, China also placed export restrictions on rare earth elements as part of its response to US tariffs. Seen in this sequence, Manus looks less like an isolated intervention and more like a new stage of retaliation: the contest has moved from networks to chips, from chips to minerals, and now from minerals to AI ownership.

That is why the case should unsettle investors, policymakers, and technology firms. The next phase of the digital cold war will not only be fought over who can access frontier chips, rare earths, or cloud infrastructure. It will also be fought over who is allowed to own, acquire, relocate, and integrate AI systems with strategic value.

KPMG’s 2025 geopolitical risk analysis points in the same direction. It warns that competition in AI and quantum computing is creating technology blocs around the United States and China, while fragmented regulation and security-driven alliances make the technology landscape harder to navigate. The deeper issue is no longer whether one acquisition succeeds or fails. It is whether states are expanding national security to include the ownership, provenance, and transferability of intelligence itself.

When Code Carries a Passport

Manus matters because it is not simply a consumer application. It is an AI agent tool able to execute complex tasks autonomously. Generative AI produces outputs. Agentic AI can coordinate tasks, interact with systems, and move workflows towards outcomes. Once AI moves from assisting humans to acting within operational environments, its strategic value changes. A tool that can execute tasks, connect with systems, and operate across workflows is closer to infrastructure than content.

This is why Manus should not be read narrowly as an M&A dispute. It shows that legal domicile is no longer enough to define the identity of an AI asset. A company may be headquartered in Singapore, but if its technology, engineers, data history, and infrastructure environment remain linked to China, Beijing may still treat it as strategically Chinese. Reuters noted that the case reinforces scrutiny over offshore entities with substantial China ties and raises the risk for future cross-border deals involving Chinese-built technologies.

In the older globalization model, corporate structure often determined jurisdictional treatment. In the AI age, states may look deeper: where the code was built, who trained the engineers, what data shaped the system, which infrastructure enabled it, and which national ecosystem gave it strategic meaning. In short, code now carries a passport.

For investors, this changes the risk model. Cross-border AI acquisitions can no longer be treated as ordinary commercial transactions. They require geopolitical provenance checks. A clean cap table may not be enough. A foreign holding company may not be enough. Even formal legal compliance may not be enough if a state later decides that the asset belongs to its strategic technology ecosystem.

Why the Global South Is Exposed

For the Global South, this new digital cold war creates a difficult position. Its countries do not control most frontier models, advanced chips, dominant cloud platforms, or AI safety standards. Yet they may become some of the conflict's most exposed theaters.

The pressure will not always be an explicit demand to choose sides. It may appear in infrastructure decisions: which cloud provider hosts public data, which AI model powers public services, which cybersecurity vendor protects critical sectors, which digital identity system becomes foundational, and which standards shape interoperability.

PwC’s 2025 Cloud Business Survey covering Europe, the Middle East, and Africa (EMEA) shows why these choices matter. It found that organizations are moving beyond cloud migration towards optimization, sovereignty, and trust while preparing for agentic AI. It also reported that cloud strategy is increasingly shaped by geopolitical and regulatory change and that agentic AI capabilities are becoming important for provider selection. Cloud choice is no longer just an IT decision. In the AI age, it becomes part of strategic dependency.

A country may remain politically non-aligned while becoming technologically dependent on one ecosystem. It may maintain diplomatic balance while its banks, hospitals, schools, payment systems, and public services run on architectures designed elsewhere. The real question for the Global South is not whether to use American, Chinese, European, Indian, Gulf, Japanese, or Korean technology. The real question is whether developing countries have enough institutional capacity to negotiate the terms.

Without that capacity, the Global South risks becoming a market for other people’s systems, a testing ground for other people’s risks, and a diplomatic space where competing powers relocate their technological tensions. AI geopolitics cannot remain a discussion among diplomats and technology experts. Its impact will be felt by sectors that keep modern economies running: finance, telecommunications, energy, health, logistics, public services, and data centers.

Digital NAM and Sovereign AI

This is where a digital non-aligned movement becomes relevant. Its strategic role is simple: to help the Global South preserve room to maneuver in an AI order increasingly shaped by great-power rivalry.

The old Non-Aligned Movement was born in a world where newly independent states refused to become instruments of military and ideological blocs. The digital age requires a similar instinct. The issue today is not only military alignment; it is cloud alignment, model alignment, data alignment, compute alignment, and standards alignment.

Yet Digital NAM should not become another ceremonial platform. Its value lies in practical coordination. Global South countries need shared awareness of AI geopolitics, clearer mapping of digital dependencies, common expectations on auditability and interoperability, and stronger bargaining power when dealing with Big Tech or major AI powers.

For most developing economies, sovereign AI does not mean building a national frontier model. It means having the capacity to decide how AI systems are adopted, governed, audited, localized, and integrated into critical sectors without becoming locked into one external ecosystem.

That requires three practical moves. First, countries need to map their dependencies: which cloud providers support critical systems, which models are embedded in public services, where sensitive data is stored, which vendors protect national infrastructure, and which AI assets may become strategically important. Second, they need stakeholder literacy, especially among regulators, boards, industry associations, universities, and critical sectors. Third, they need collective bargaining. Individually, many Global South countries have limited leverage. Collectively, they represent markets, data environments, talent pools, public-sector needs, and political legitimacy that no digital order can ignore.

This will not be easy. The Global South is not a single body with one interest or one level of capacity. Some countries depend more on Chinese infrastructure, others on American cloud, European regulation, Indian digital public infrastructure, Gulf capital, or Japanese and Korean industrial partnerships. That fragmentation can be exploited if countries bargain alone.

Chatham House has warned that international AI governance is at risk of failure because rapid geopolitical change, institutional weakness, and asymmetries between public and private sectors make cooperation difficult. CSIS has also argued that many AI governance frameworks are shaped in Washington, Brussels, and Beijing, creating the risk that priorities are set without sufficient participation from the countries expected to implement and use these systems.

The practical starting point, therefore, is not maximal political unity. It is coordination around shared vulnerabilities: data control, model dependency, auditability, procurement standards, interoperability, and capacity transfer.

The goal is not digital isolation. It is better terms of engagement. Digital non-alignment matters because the future of AI will not only be decided by those who build the largest models. It will also be shaped by those who can still decide how, when, and under whose terms those models enter their societies.

The term “digital colonization” should not be used lightly. But if the Global South cannot shape the models, infrastructures, and standards that increasingly govern its societies, the question will become harder to avoid.

In the age of AI, sovereignty may begin with a simple question: do countries still control their choices, or have those choices already been designed elsewhere?

Tuhu Nugraha
Digital Business & Metaverse Expert; Principal of Indonesia Applied Economy & Regulatory Network (IADERN)