The Cyber Cold War: How Nations Are Turning AI and Data into Weapons

We are living through the first stages of a new global rivalry, not simply over land or ideology, but over algorithms, sensors, and data. Governments no longer treat AI as just a productivity tool. They now see it as a source of power. From alliance strategies to trade restrictions, and from self-operating weapons to mass disinformation, states are turning artificial intelligence and the data that supports it into weapons. The result is a geopolitical competition that looks very similar to a cold war, except the missiles are AI models and the frontlines are digital networks.

Why call it a Cyber Cold War? Because the rivalry is broad, spans multiple domains, and is driven by gaps in computing power, data access, and trust in institutions. Western militaries have openly increased efforts to bring AI into command, control, and intelligence. NATO has updated its AI strategy to focus on defense and threat prevention, and the U.S. Department of Defense has published several strategies to speed up AI adoption while setting responsible boundaries. These documents are clear: AI is now a core capability, and states are treating it as such.

First, militarization and autonomy. The battlefield is a testing ground. Drones, loitering munitions, and self-targeting systems are being researched, procured, and in some places already deployed. The United Nations' ongoing discussions on lethal autonomous weapons and the Secretary-General's report highlight the seriousness of the issue. Member states are already voicing strong positions on how autonomy changes the use of force and erodes human control. This reality has produced divided camps: some states want bans or tight rules, while others want national frameworks that protect their freedom of action.

Second, data as national fuel. AI needs huge amounts of data and computing power. States now view large datasets, unique sensors (satellites, mobile data, and battlefield information), and access to compute as strategic national resources. The United States' trade restrictions on advanced chips and chip-making equipment show how computer hardware has become a tool of state policy. Blocking chips is not just economic policy; it is a way to slow rivals' ability to train and deploy advanced AI models. This leverage reshapes alliances, supply chains, and the balance of escalation.

Third, information warfare expanded. Generative AI increases the speed and believability of disinformation, fake videos, fake audio, and targeted propaganda. U.S. cybersecurity agencies have warned that generative AI makes threats to elections and public trust worse. National guidance now treats AI-powered influence as an urgent risk. Here, the weapon is not a missile but a story, spread widely and boosted by machine learning.

These trends make the “cold” in this cold war dangerously unstable. Once AI systems are embedded inside weapons, decision-making systems, and surveillance networks, the pace of action speeds up. Automation raises the risk of error, misattribution, and sudden, fast-moving escalation. The UN has warned that some autonomous systems have “very serious human rights impacts” and may weaken compliance with the laws of war.

So what can be done? Treat high-risk military AI like dual-use arms control. States should negotiate a two-track system: a global ban on the most dangerous fully autonomous lethal systems, paired with strong transparency, verification, and reporting requirements for other military AI tools. The UN's work on autonomous weapons and proposals from disarmament groups offer starting points. The politics will be difficult, but the alternative is a hidden, fast-moving arms race.

Coordinate controls on hardware and data while building resilience. Restrictions on advanced chips showed that hardware can be used as a lever, but they must be coordinated among allies to avoid perverse outcomes, such as driving rivals to build parallel supply chains. At the same time, democracies need to invest in tools to detect fakes, authenticate real content, secure AI systems, and strengthen infrastructure (the U.S. and its allies have already published joint guidance on protecting AI data pipelines). This is deterrence by resilience.

Build transparency and global governance with real power. The UN's call for a global scientific panel and dedicated funding is not wishful thinking; it is a practical way to reduce surprise risks and support weaker states. This should be reinforced with mandatory reporting of military AI use (especially in weapons and safety systems), independent audits, and shared rules for export licenses. Voluntary frameworks like NIST's AI Risk Management Framework are useful, but they must be translated into international obligations when they affect other countries' security.

Finally, there is a hard truth: technical skill alone will not stop misuse. Ethical rules and risk checks are needed, but they are not enough when global competition pushes states to move faster. To truly close the gap between capability and control, policy must combine technical safeguards with strong treaties, joint industrial planning, and the political will to accept temporary limits on advantage for long-term peace.

This is not a call to stop AI progress. AI will keep changing economies and defense. It is a call to understand that, unlike past technologies, AI and data exist at the crossroads of speed, secrecy, and influence. Without common rules and shared safeguards, we risk turning competition into a dangerous, uncontrollable arms race.

Raja Aneel Meghwar
I am Raja Aneel Meghwar, a researcher, policy analyst, and human rights activist. I am pursuing an LLB at Everest Law College and have previously interned at the Islamabad Policy Research Institute.