The U.S. Executive Order on AI: The Future of Global AI Governance

The EOAI mobilises the federal government to create policies, guidelines, and reports on the application and advancement of AI.

The age of artificial intelligence (AI) and its transformational potential has arrived for good. After decades of technical advances punctuated by AI winters, our world is saturated with deep learning, large language models, neural networks, processing capacity, specialised chips, and autonomous systems. AI is here, it is growing more powerful, and it is spreading quickly throughout the world’s economies and societies. A very reasonable question has therefore emerged: how can humans govern this? There is no lack of inspiration. National leaders, executives of technology companies, and AI academics have put forward various proposals for global AI governance. AI has been likened to other policy-relevant issues such as social media, energy, genetic engineering, nuclear and biological weapons, and space exploration. People have looked to organisations like the International Civil Aviation Organisation, the Intergovernmental Panel on Climate Change, and the European Organisation for Nuclear Research (CERN) for inspiration. Numerous national and regional projects to set up institutional frameworks for the supervision of AI are in the works. As these initiatives move forward, all parties need to engage in the collaborative and consultative process that many are seeking, and they must have more opportunities to actively participate in, observe, and learn from the broader conversation about AI’s effects on society.

US Executive Order

The White House published an extensive and in-depth executive order on artificial intelligence (EOAI) on October 30, 2023. The EOAI mobilises the federal government to create policies, guidelines, and reports on the application and advancement of AI. Together with the Voluntary AI Commitments, the AI Bill of Rights, and the work on AI standards, the EOAI adds up to a more comprehensive and coherent approach to AI governance. Given that the United States is a leading developer of and investor in AI, having recently produced foundational models such as GPT-4, US leadership in AI governance is essential. However, international cooperation on AI governance is also required to make domestic AI governance efforts more effective. This includes facilitating the exchange of experiences on AI governance that can inform domestic approaches; addressing the extraterritorial impacts and externalities of domestic AI governance that would otherwise reduce opportunities for AI adoption and use and stifle innovation; and working out how to increase global access to the computing power and data needed to develop and train AI models.

AI Global Governance

Through various routes, the EOAI and the other domestic AI policies mentioned above will have a significant international impact. First, this suite of domestic policy breakthroughs gives Vice President Kamala Harris, who is leading the US delegation to the UK AI Safety Summit, a genuine chance to spearhead the advancement of global AI governance. The United States’ stance on AI contrasts sharply with its weak leadership in privacy regulation, where the absence of federal privacy legislation created a void that the European Union’s General Data Protection Regulation (GDPR) filled, making the GDPR the global model for privacy regulation. Second, many American domestic AI governance actions will shape global AI outcomes. For instance, the G7’s October 30 announcement of the International Code of Conduct for Organisations Developing Advanced AI Systems drew on the White House Voluntary AI Commitments. Third, the EOAI’s plethora of new AI regulations and recommendations across the federal government will affect how governments and businesses around the globe handle AI governance, as companies are compelled to align with US AI standards and rules given the scale of US government procurement.

International Cooperation

Furthermore, many AI standards will probably undergo further development and internationalisation through more formal procedures in international standards development organisations such as the ISO/IEC. Other countries are also pursuing home-grown AI governance models, in some cases faster than the US: the EU AI Act is nearly complete, and nations including Brazil, the United Kingdom, Canada, and Japan are creating their own AI governance frameworks. Although tackling AI threats and realising AI’s societal and economic potential must start with domestic AI governance, these domestic initiatives are also the cornerstones of international AI governance. Finding opportunities for international AI cooperation has been a central focus for the past three years or more, and given the speed and scale at which domestic AI governance systems are being established worldwide, this task has become even more urgent. The EOAI tasks the Departments of State and Commerce with creating solid international frameworks for harnessing AI’s benefits, managing its risks, and ensuring security. The EOAI also recommends the accelerated development of AI standards with international partners in standards organisations.

Looking Forward

AI is advancing at an unprecedented rate, with the potential to help solve a number of pressing global issues, including speeding up the energy transition, improving public health, and reducing poverty worldwide. The speed at which AI is being developed and applied also creates new risks for labour, equity, and safety and security, among other areas. The latest executive order issued by US President Joe Biden seeks to manage these risks and support global initiatives to govern AI. AI governance has already been discussed in forums such as the Global Partnership on AI (GPAI), the US-EU Trade and Technology Council, and the G7. In the future, more active participation in the FCAI and these international forums, along with further progress in developing worldwide AI standards, will be required. Expanding commitments in trade and digital economy agreements offers another opportunity to build the global collaboration on AI governance that is needed.

Dr. Nafees Ahmad
Ph.D., LL.M, Faculty of Legal Studies, South Asian University (SAARC), New Delhi. Nafees Ahmad is an Indian national who holds a Doctorate (Ph.D.) in International Refugee Law and Human Rights. He teaches and writes on international forced migration, climate change refugees and human displacement, refugee policy, asylum, durable solutions, and extradition issues. He has conducted research on Internally Displaced Persons (IDPs) from Jammu & Kashmir and the North-East Region in India, has worked with research scholars from the US, UK, and India, and has consulted with research institutions and NGOs in the area of human displacement and forced migration. He has introduced a new programme, Comparative Constitutional Law of SAARC Nations, for the LLM, alongside International Human Rights, International Humanitarian Law, and International Refugee Law & Forced Migration Studies. Since 2010 he has served as Senior Visiting Faculty to World Learning (WL)-India under the India-Health and Human Rights Program organised by World Learning, 1 Kipling Road, Brattleboro, VT 05302, USA, for the fall and spring semester batches of US students through its School for International Training (SIT Study Abroad) in New Delhi, India. nafeestarana[at]gmail.com, drnafeesahmad[at]sau.ac.in