Authors: Oleg Shakirov and Evgeniya Drozhashchikh*
The creation of the Global Partnership on Artificial Intelligence (GPAI) reflects the growing interest of states in AI technologies. The initiative, which brings together 14 countries and the European Union, will help participants establish practical cooperation and formulate common approaches to the development and implementation of AI. At the same time, it is a symptom of the growing technological rivalry in the world, primarily between the United States and China. Russia’s ability to interact with the GPAI may be limited for political reasons, but, from a practical point of view, cooperation would help the country implement its national AI strategy.
The Global Partnership on Artificial Intelligence (GPAI) was officially launched on June 15, 2020, at the initiative of the G7 countries alongside Australia, India, Mexico, New Zealand, South Korea, Singapore, Slovenia and the European Union. According to the Joint Statement from the Founding Members, the GPAI is an “international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth.”
In order to achieve this goal, GPAI members will look to bridge the gap between theory and practice by supporting both research and applied activities in AI. Cooperation will take place in the form of working groups that will be made up of leading experts from industry, civil society and the public and private sectors and will also involve international organizations. There will be four working groups in total, with each group focusing on a specific AI issue: responsible AI; data governance; the future of work; and innovation and commercialization. In acknowledgment of the current situation around the world, the partners also included the issue of using AI to overcome the socioeconomic effects of the novel coronavirus pandemic in the GPAI agenda.
In terms of organization, the GPAI’s work will be supported by a Secretariat to be hosted by the Organisation for Economic Co-Operation and Development (OECD) and Centres of Expertise – one each in Montreal and Paris.
To better understand how this structure came to be, it is useful to look at the history of the GPAI itself. The idea was first put forward by France and Canada in June 2018, when, on the eve of the G7 Summit, Justin Trudeau and Emmanuel Macron announced the signing of the Canada–France Statement on Artificial Intelligence, which called for the creation of an international group to study AI-related issues. By that time, both countries had already adopted their own national AI development strategies – Canada was actually the first country in the world to do so in March 2017. The two countries proposed a mandate for the international group, then known as the International Panel on Artificial Intelligence, at the G7 conference on artificial intelligence in late 2018. A declaration on the creation of the group was then made in May 2019, following a meeting of the G7 Ministers responsible for digital issues. The group was expected to be formally launched three months later at the G7 Summit in Biarritz, with other interested countries (such as India and New Zealand) joining.
However, the initiative did not receive the support of the United States within the G7. Donald Trump and Emmanuel Macron were expected to announce the launch of the group at the end of the event, but the American delegation blocked the move. According to Lynne Parker, Deputy Chief Technology Officer at the White House, the United States was concerned that the group would slow down the development of AI technology and believed that it would duplicate the OECD’s work in the area. The originators of the idea to create the group (which received the name Global Partnership on Artificial Intelligence in Biarritz) clearly took this latter point into account, announcing that the initiative would be developed under the auspices of the OECD.
A Principled Partnership
Like other international structures, the OECD has started to pay greater attention to artificial intelligence in recent years, with its most important achievement in this area being the adoption of the Recommendation of the Council on Artificial Intelligence. Unlike other sets of principles on AI, the OECD’s recommendations were supported by the governments of all member countries, as well as by Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, which made it the first international document of its kind. They were also used as the basis for the Global Partnership on Artificial Intelligence.
In accordance with the OECD recommendations, signatory countries will adhere to the following principles of AI development: promoting AI technologies for inclusive growth, sustainable development and well-being; prioritizing human-centred values and fairness throughout the life-cycle of AI systems; ensuring the transparency and (to the greatest extent possible) explainability of AI algorithms; guaranteeing the robustness, security and safety of AI systems; and holding AI actors accountable.
In addition to this, the document proposes that the following factors be taken into account when drafting national AI development strategies: investing in AI research and development; fostering a digital ecosystem for AI research and the practical implementation of AI technologies (including the necessary infrastructure); shaping national policies that allow for a smooth transition from theory to practice; building human capacity and preparing for labour market transformation; and expanding international cooperation in AI.
A few weeks after the OECD endorsement, the recommendations on AI were included as an annex to the G20 Ministerial Statement on Trade and Digital Economy dated July 9, 2019, albeit with slightly different wording. The principles thus received the support of Russia, China and India.
Within the OECD itself, the recommendations served as an impetus for the creation of the OECD AI Policy Observatory (OECD.AI), a platform for collecting and analysing information about AI and building dialogue with governments and other stakeholders. The platform will also be used within the framework of the Global Partnership on Artificial Intelligence.
Artificial Intelligence and Realpolitik
The decision of the United States to join the GPAI was likely motivated more by political reasons than anything else. In the run-up to the G7 Science and Technology Ministers’ Meeting in late May 2020 (where all participants, including the United States, officially announced the launch of the GPAI), Chief Technology Officer of the United States Michael Kratsios published an article in which he stated that democratic countries should unite in the development of AI on the basis of fundamental rights and shared values, rather than abuse AI to control their populations, as authoritarian regimes such as China do. According to Kratsios, it is democratic principles that unite the members of the GPAI. At the same time, Kratsios argues that the new coalition will not be a standard-setting or policy-making body; that is, it will not be a regulator in the field of AI.
The United States Strategic Approach to the People’s Republic of China published in May 2020 and the many practical steps that the American side has taken in recent years are a reflection of the tech war currently being waged between the United States and China. For example, the United States has taken a similar approach to the formation of new coalitions in the context of 5G technologies. In 2018–2019, the United States actively pushed the narrative that the solutions offered by Huawei for the creation of fifth-generation communications networks were not secure and convinced its allies not to work with Huawei. Thirty-two countries supported the recommendations put forward at the Prague 5G Security Conference in May 2019 (the Prague Proposals), which included ideas spread by the United States during its campaign against Huawei (for example, concerns about third countries influencing equipment suppliers).
The United States is not the only GPAI member that is concerned about China. Speaking back in January about the U.S. doubts regarding the Franco–Canadian initiative, Minister for Digital Affairs of France Cédric O noted, “If you don’t want a Chinese model in western countries, for instance, to use AI to control your population, then you need to set up some rules that must be common.” India’s participation in the GPAI is particularly telling, as the United States has been trying to involve India in containing China in recent years. The new association has brought together all the participants in the Quadrilateral Security Dialogue (Australia, India, the United States and Japan), which has always been a source of concern for Beijing, thus sending a very clear signal to the Chinese leadership.
The Prospects for Russia
The political logic that guides the United States when it comes to participating in the Global Partnership on Artificial Intelligence may very well extend to Russia. The Trump administration formally declared the return of great power competition in its 2017 National Security Strategy. In Washington, Russia and China are often referred to as the main rivals of the United States, promoting anti-American values.
When assessing the possibility of interaction between Russia and the GPAI, we need to look further than the political positions of the participants. According to the Joint Statement from the Founding Members, the GPAI is open to working with other interested countries and partners. In this regard, the obvious points of intersection between Russia and the new association may produce favourable conditions for practical cooperation in the future.
First of all, the GPAI members and Moscow rely on the same principles of AI development. Russia indirectly adopted the OECD recommendations on artificial intelligence when it approved the inclusion of the majority of their provisions in the Annex to the G20 Ministerial Statement on Trade and Digital Economy in 2019 and thus shares a common intention to ensure the responsible and human-centred development and use of artificial intelligence technologies. This does not mean that there will not be differences of opinion on specific issues, but, as we have already noted, in its current form, the activities of the GPAI will not be aimed at unifying the approaches of the participants.
Second, according to media reports, Russia is working to re-establish ties with the OECD. It is already assisting with the OECD’s website, periodically providing data on legal documents, adopted or under consideration, that will create a framework for the development and implementation of AI.
Third, the current development of the national AI ecosystem in Russia shows that the state, business and the scientific community are interested in the same topics that are on GPAI agenda. This is reflected in the National Strategy for the Development of Artificial Intelligence for the Period up to the Year 2030 adopted in October 2019 and the draft Federal Project on the Development of Artificial Intelligence as Part of the National Programme “Digital Economy of the Russian Federation.” Furthermore, following the adoption of the National Strategy last year, Russian tech companies set up an alliance for AI development in conjunction with the Russian Direct Investment Fund, which is very much in keeping with the multistakeholder approach adopted by the Global Partnership on Artificial Intelligence.
It would seem that politics is the main stumbling block when it comes to Russia’s possible participation in GPAI initiatives, whether because of the organization’s clear anti-Chinese leaning or the prospect of its members openly discrediting Russia’s approaches to the development of AI. That said, Russia has nothing to gain from politicizing the GPAI, since cooperation with the organization could help it achieve its own goals in artificial intelligence. What is more, we cannot rule out the possibility that the GPAI will in the future be responsible for developing unified AI rules and standards. It is in Russia’s interests to have its voice heard in this process to ensure that these standards do not turn into yet another dividing line.
*Evgeniya Drozhashchikh, Ph.D. Student in the Faculty of World Politics at Lomonosov Moscow State University, RIAC Expert
From our partner RIAC
First Quantum Computing Guidelines Launched as Investment Booms
National governments have invested over $25 billion in quantum computing research, and over $1 billion in venture capital deals have closed in the past year – more than in the previous three years combined. Quantum computing promises to disrupt the future of business, science, government and society itself, but an equitable framework is crucial to address future risks.
A new Insight Report released today at the World Economic Forum Annual Meeting 2022 provides a roadmap for these emerging opportunities across public and private sectors. The principles have been co-designed by a global multistakeholder community composed of quantum experts, emerging technology ethics and law experts, decision makers and policy makers, social scientists and academics.
“The critical opportunity at the dawn of this historic transformation is to address ethical, societal and legal concerns well before commercialization,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “This report represents an early intervention and the beginning of a multi-disciplinary, global conversation that will guide the development of quantum computing to the benefit of all society.”
“Quantum computing holds the potential to help solve some of society’s greatest challenges, and IBM has been at the forefront of bringing quantum hardware and software to communities of discovery worldwide,” said Dr. Heike Riel, IBM Fellow, Head of Science and Technology and Lead, Quantum, IBM Research Europe. “This report is a key step in initiating the discussion around how quantum computing should be shaped and governed, for the benefit of all.”
Professor Bronwyn Fox, Chief Scientist at CSIRO, Australia’s national science agency, said: “The Principles reflect conversations CSIRO’s scientists have had with partners from around the world who share an ambition for a responsible quantum future. Embedding responsible innovation in quantum computing is key to its successful deployment and uptake for generations to come. CSIRO is committed to ensuring these Principles are used to support a strong quantum industry in Australia and generate significant social and public good.”
In adapting to the coming hybrid model of classical, multi-cloud and, soon, quantum computing, the Forum’s framework establishes best-practice principles and core values. These guidelines lay the foundation for the new information-processing paradigm while ensuring stakeholder equity, risk mitigation and consumer benefit.
The governance principles are grouped into nine themes and underpinned by a set of seven core values. The themes and their respective goals are:
1. Transformative capabilities: Harness the transformative capabilities of this technology and the applications for the good of humanity while managing the risks appropriately.
2. Access to hardware infrastructure: Ensure wide access to quantum computing hardware.
3. Open innovation: Encourage collaboration and a precompetitive environment, enabling faster development of the technology and the realization of its applications.
4. Creating awareness: Ensure the general population and quantum computing stakeholders are aware, engaged and sufficiently informed to enable ongoing responsible dialogue and communication; stakeholders with oversight and authority should be able to make informed decisions about quantum computing in their respective domains.
5. Workforce development and capability-building: Build and sustain a quantum-ready workforce.
6. Cybersecurity: Ensure the transition to a quantum-secure digital world.
7. Privacy: Mitigate potential data-privacy violations through theft and processing by quantum computers.
8. Standardization: Promote standards and road-mapping mechanisms to accelerate the development of the technology.
9. Sustainability: Develop a sustainable future with and for quantum computing technology.
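Themes 6 and 7 both concern the transition to quantum-secure cryptography. As a purely illustrative sketch (not part of the report), the snippet below triages a few common cryptographic primitives against the two best-known quantum attacks: Shor’s algorithm, which breaks factoring- and discrete-log-based schemes outright, and Grover’s algorithm, which roughly halves the effective strength of symmetric keys. The algorithm names and triage categories are our own illustration.

```python
# Illustrative triage of cryptographic primitives against known quantum attacks.
# Shor's algorithm breaks factoring/discrete-log schemes; Grover's algorithm
# roughly halves the effective bit strength of symmetric ciphers.

SHOR_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}
GROVER_AFFECTED = {"AES-128": 64, "AES-256": 128}  # effective post-quantum bits

def quantum_risk(algorithm: str) -> str:
    """Return a rough advisory for a named primitive (illustrative only)."""
    if algorithm in SHOR_VULNERABLE:
        return "broken by Shor's algorithm; migrate to post-quantum schemes"
    if algorithm in GROVER_AFFECTED:
        return f"Grover reduces effective strength to ~{GROVER_AFFECTED[algorithm]} bits"
    return "no generic quantum speedup known; review case by case"

print(quantum_risk("RSA-2048"))
print(quantum_risk("AES-256"))
```

A real migration plan would of course follow the emerging post-quantum standards rather than a lookup table; the sketch only makes the two attack classes concrete.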
Quantum computing core values that hold across the themes and principles:
Common good: The transformative capabilities of quantum computing and its applications are harnessed to ensure they will be used to benefit humanity.
Accountability: Use of quantum computing in any context has mechanisms in place to ensure human accountability, both in its design and in its uses and outcomes. All stakeholders in the quantum computing community are responsible for ensuring that the intentional misuse of quantum computing for harmful purposes is not accepted or inadvertently positively sanctioned.
Inclusiveness: In the development of quantum computing, insofar as possible, a broad and truly diverse range of stakeholder perspectives are engaged in meaningful dialogue to avoid narrow definitions of what may be considered a harmful or beneficial use of the technology.
Equitability: Quantum computing developers and users ensure that the technology is equitable by design, and that quantum computing-based technologies are fairly and evenly distributed insofar as possible. Particular consideration is given to any specific needs of vulnerable populations to ensure equitability.
Non-maleficence: All stakeholders use quantum computing in a safe, ethical and responsible manner. Furthermore, all stakeholders ensure quantum computing does not put humans at risk of harm, either in the intended or unintended outcomes of its use, and that it is not used for nefarious purposes.
Accessibility: Quantum computing technology and knowledge are actively made widely accessible. This includes the development, deployment and use of the technology. The aim is to cultivate a general ability among the population, societal actors, corporations and governments to understand the main principles of quantum computing, the ways in which it differs from classical computing and the potential it brings.
Transparency: Users, developers and regulators are transparent about their purpose and intentions with regard to quantum computing.
“Governments and industries are accelerating their investments in quantum computing research and development worldwide,” said Derek O’Halloran, Head of Digital Economy, World Economic Forum. “This report starts the conversation that will help us understand the opportunities, set the premise for ethical guidelines, and pre-empt socioeconomic, political and legal risks well ahead of global deployment.”
The Quantum Computing Governance Principles is an initiative of the World Economic Forum’s Quantum Computing Network, a multi-stakeholder initiative focused on accelerating responsible quantum computing.
Next steps for the Quantum Computing Governance Initiative will be to work with wider stakeholder groups to adopt these principles as part of broader governance frameworks and policy approaches. With this framework, business and investment communities along with policy makers and academia will be better equipped to adapt to the coming paradigm shift. Ultimately, everyone will be better prepared to harness the transformative capabilities of the quantum sciences – perhaps the most exciting emergent technologies of the 21st century.
Closing the Cyber Gap: Business and Security Leaders at Crossroads as Cybercrime Spikes
The global digital economy has surged off the back of the COVID-19 pandemic, but so has cybercrime – ransomware attacks rose 151% in 2021. There were on average 270 cyberattacks per organization during 2021, a 31% increase on 2020, with each successful cyber breach costing a company $3.6m. After a breach becomes public, the average share price of the hacked company underperforms the NASDAQ by 3%, even six months after the event.
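As a back-of-envelope check of the figures above (assuming the 31% figure is a straightforward year-over-year change), the 2021 average of 270 attacks per organization implies a 2020 baseline of roughly 206:

```python
# Hypothetical arithmetic check: derive the implied 2020 baseline from the
# reported 2021 average (270 attacks/org) and the 31% year-over-year increase.
attacks_2021 = 270
yoy_increase = 0.31
attacks_2020 = attacks_2021 / (1 + yoy_increase)
print(round(attacks_2020))  # prints 206
```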
According to the World Economic Forum’s new annual report, The Global Cybersecurity Outlook 2022, 80% of cyber leaders now consider ransomware a “danger” and “threat” to public safety and there is a large perception gap between business executives who think their companies are secure and security leaders who disagree.
Some 92% of business executives surveyed agree that cyber resilience is integrated into enterprise risk-management strategies, but only 55% of cyber leaders surveyed agree. This gap between leaders can leave firms vulnerable to attacks as a direct result of incongruous security priorities and policies.
Even after a threat is detected, our survey, written in collaboration with Accenture, found that nearly two-thirds of respondents would find it challenging to respond to a cybersecurity incident due to the shortage of skills within their team. Perhaps even more troubling is the growing trend that companies need 280 days on average to identify and respond to a cyberattack. To put this into perspective, an incident which occurs on 1 January may not be fully contained until 8 October.
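The 1 January to 8 October illustration follows directly from the 280-day average; a quick sketch (the 2022 start date is our own assumption for illustration):

```python
from datetime import date, timedelta

MEAN_RESPONSE_DAYS = 280  # average time to identify and respond, per the report

incident = date(2022, 1, 1)          # incident occurs on 1 January
contained = incident + timedelta(days=MEAN_RESPONSE_DAYS)
print(contained)  # prints 2022-10-08
```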
“Companies must now embrace cyber resilience – not only defending against cyberattacks but also preparing for swift and timely incident response and recovery when an attack does occur,” said Jeremy Jurgens, Managing Director at the World Economic Forum.
“Organizations need to work more closely with ecosystem partners and other third parties to make cybersecurity part of an organization’s ecosystem DNA, so they can be resilient and promote customer trust,” said Julie Sweet, Chair and CEO, Accenture. “This report underscores key challenges leaders face – collaborating with ecosystem partners and retaining and recruiting talent. We are proud to work with the World Economic Forum on this important topic because cybersecurity impacts every organization at all levels.”
Chief Cybersecurity Officers kept up at night by three things
Less than one-fifth of cyber leaders feel confident their organizations are cyber resilient. Three major concerns keep them awake at night:
– They don’t feel consulted on business decisions, and they struggle to gain the support of decision-makers in prioritizing cyber risks – 7 in 10 see cyber resilience featuring prominently in corporate risk management
– Recruiting and retaining the right talent is their greatest concern – 6 in 10 think it would be challenging to respond to a cybersecurity incident because they lack the skills within their team
– Nearly 9 in 10 see SMEs as the weakest link in the supply chain – 40% of respondents have been negatively affected by a supply chain cybersecurity incident
Training and closing the cyber gap are key solutions
Solutions include employee cyber training, offline backups, cyber insurance and platform-based cybersecurity solutions that stop known ransomware threats across all attack vectors.
Above all, there is an urgent need to close the gap of understanding between business and security leaders. It is impossible to attain complete cybersecurity, so the key objective must be to reinforce cyber resilience.
Including cyber leaders in the corporate governance process will help close this gap.
Ethical aspects relating to cyberspace: Self-regulation and codes of conduct
Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.
In practice, however, the various attempts to legislate on virtual communication face great difficulties due to the imperfection of modern information law. Moreover, considering that the Internet community is based on an internal “anarchist” ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network, the only effective regulator can be a voluntarily and consciously accepted intranet ethics, based on each individual’s awareness of his or her moral responsibility for what happens in cyberspace.
At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory provisions at the political, legal, technical or economic levels. It is also up to ethics to check the meaning, lawfulness and legitimacy of those other regulatory means. The legal provisions themselves, supported by the force of state influence, are developed or, at least ideally, should be implemented on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case, at least until an “ideal” legislation is devised that does not contradict morality in any way. Therefore, an ethical justification and an equal scrutiny of legislative and disciplinary acts in relation to both IT and computer technology are necessary.
In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law rests on human rights to information. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This enables several state structures to introduce various restrictions on information, justifying them with noble aims such as the need to implement the concept of national security.
It should be stressed that information legislation (like any other in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of the various social groups. Therefore, there are no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and the attempts at restricting this freedom in one way or another, on the other.
Therefore, several subjects place extremely contradictory demands on modern information law, which are not so easy to reconcile. Information law should simultaneously protect the right to free reception of information and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote the public accessibility of the information created while protecting copyright, even if this impinges on the universal principle of knowledge sharing.
The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.
Various national and international public organisations, professionals and voluntary users’ associations define their own operating principles in a virtual environment. These principles are very often formalised in codes of conduct, aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies and thus at achieving a certain degree of autonomy for the web community, at least when it comes to purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of law provisions, which means that they cannot serve as a basis for imposing disciplinary, administrative or any other liability measures on offenders. They are therefore observed solely through the goodwill of the community members who have adopted them, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes thus act as one of the moral self-regulating mechanisms of the web community.
The cyberspace codes of ethics provide the basic moral guidelines that should guide information activities. They specify the principles of general theoretical ethics as reflected in a virtual environment. They contain criteria enabling one to recognise a given act as ethical or unethical. Finally, they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics in the form of provisions, authorisations, bans, etc., represent in many respects the formalisation and systematisation of unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the last thirty years of the Internet.
At the same time, the provisions of codes of ethics must be thoroughly considered and judged. By their very nature, codes of ethics are conventional and hence always the result of a mutual agreement among the relevant members of a given social group; otherwise they are simply reduced to a formal, sectorial statement, divorced from life and carrying no binding force.
Despite their multidirectionality, owing to the variety of the net’s functional capabilities and the heterogeneity of its audience, a comparison of the most significant codes of ethics on the Internet reveals a number of common principles. Apparently, these principles are in one way or another shared by all members of the Internet community, which means that they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the inviolability of intellectual property; the principle of no harm; and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA”: Privacy, Accuracy, Property and Accessibility) formulated by Richard Mason in his article “Four Ethical Issues of the Information Age” (MIS Quarterly, March 1986).
The presence of a very well-written code of ethics obviously cannot ensure that all group members will act in accordance with it, because for a person the most reliable guarantees against unethical behaviour are his or her conscience and sense of duty, which are not always heeded. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively from one another. The codes of ethics, however, perform a number of extremely important functions on the Internet. Firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (in this case, it is not so much a ready-made code that is useful as the very experience of its development and discussion). Secondly, they can form a healthy public in a virtual environment, and also provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.