The Race for AI and Quantum Supremacy

On a hot summer’s morning in July 1945, J. Robert Oppenheimer stood in a control bunker in New Mexico and watched the culmination of his Manhattan Project burn the desert sand, fusing it into mildly radioactive green glass. Years later, when asked what went through his head as he saw that great grey cloud rise out of the sand, he said he was reminded of Hindu scripture, the line spoken by Vishnu: ‘Now I am become Death, the destroyer of worlds’. According to his brother, however, what he actually said after seeing the bomb explode was: ‘I guess it worked’.


As romantic as the potential of science can be, there is also a banality to the discoveries and inventions that shape our world. It is irrefutable that the atomic bomb changed the trajectory of the 20th century, ending the Second World War and fuelling the Cold War between the Soviet Union and the United States, and their proxies. Today, in an era when energy security, food and water shortages and widespread dignity-deficits make as many headlines as guns and tanks, investing in AI and quantum technologies can help ensure supremacy. But at what price?

With the world’s superpowers on the cusp of a full-blown AI arms race, things could turn ugly very fast unless efforts are made to guarantee sustainable security for all. AI and quantum technologies could yet become game-changing weapons, much like the nuclear bomb. There are already smart bombs and hypersonic missiles faster than anything previously imagined. AI adds speed and power, enabling systems to act faster and carry out more complex tasks more efficiently. In short, AI will progressively increase our capabilities, for good or evil. The ultimate challenge will be for countries at the forefront of AI advancement, often geopolitical rivals, to create international frameworks that encourage the transparent development of impressive innovations whose benefits can be shared widely, and responsibly.


There are plenty of eye-catching stories depicting the use of AI in ‘killer drones’ or missile defence systems, and various world leaders have extolled the benefits of the technology in their militaries. But to focus on specific AI applications in the military is to miss the larger role that the technology is likely to play in global societies and potential conflicts. Military AI is at a relatively early stage of development, and while we can well imagine a future of robotic soldiers and other autonomous killing machines, to dwell on it would be to ignore the unprecedented impact of AI and quantum technology on our future existence. In the near future, artificial intelligence will seep into every aspect of our societies and our economies, transforming our computational power, and with it the manufacturing speed, domestic output, energy usage, and all other processes and relations that define the economic success of a society. It is no wonder, then, that major global powers, China, Russia, the United States and others, have poured billions into R&D labs developing quantum technology and artificial intelligence, in the hope of unlocking a level of extreme computational power that will catapult scientific, economic, military and technological advances into a new era.

In most developed countries, economic growth over the past half-century has been closely tied to advances in computational power, often starting from a relatively low base. The dash to quantum supremacy, whether by Google, IBM or major entities in other nations, could propel a single state to domination of the global stage. This would come at a price for humanity: the likely collateral damage is equitable and dignified peace, security and prosperity. A state that unilaterally and exclusively achieves quantum supremacy could break the encryption of every other state, and potentially dominate every aspect of world politics and critical infrastructure. It would encroach on our individual freedoms, cultural norms and identities. This would not be sustainable, and would trigger highly disruptive conflicts that could threaten the future of humanity as we know it.

So how do we prevent this doomsday scenario? We should start by taking an honest look in the mirror. History shows that it is in the nature of states to first strive for survival before ultimately aiming for domination. An unchecked hegemon is rarely fair, just or peaceful, regardless of its proclaimed ideals or political ethos. That is why multipolarity and multilateralism are necessary prerequisites for securing a sustainable future for humanity. Parity, or near parity, is not in the DNA of a hegemon, because most states still govern their national interest through zero-sum paradigms, without regard to transnational, global or planetary interests. This is understandable. But it is unworkable in our instantly connected and deeply interdependent world. Despite the initial horror emanating from the use of nuclear weapons against Japan in 1945, near-parity is what led nuclear states to enact treaties governing nuclear weapons and the peaceful use of nuclear technology. It also helped avoid, at least so far, scenarios of mutually assured destruction.

But we need not shackle ourselves to dated Cold War paradigms. In an anarchic global system without a just, equitable or representative overarching authority, we should seek shelter in more sustainable approaches to global governance. Such approaches, best embodied by the ‘multi-sum security’ and ‘symbiotic realism’ frameworks, are defined by absolute gains, non-conflictual competition and win-win scenarios, thus guaranteeing sustainable security for all. Importantly, the future should not be taken hostage by any nation that unilaterally masters quantum supremacy. This would create a destructive and uncertain era that could lead to a dystopian stratification of peoples, cultures and states. Such a scenario may not start with a bang, but it could very well once again involve a scientist standing back, looking at their work and exclaiming ‘I guess it worked’.

Nayef Al-Rodhan
Professor Nayef Al-Rodhan is a neuroscientist and philosopher. He is an Honorary Fellow at St Antony’s College, University of Oxford, and the Head of the Geopolitics & Global Futures Programme at the Geneva Centre for Security Policy (GCSP).