
Science & Technology

Trump vs. The Robots: US Jobs and Promises

Osama Rizvi


Among political observers, there is a widespread notion that U.S. President-elect Donald Trump will inherit an economy in excellent shape. Inflation is down to historically desirable levels, the unemployment rate stands at 4.9% and U.S. economic growth is better than expected. Moreover, observers can’t help but hear Mr. Trump’s boastful rhetoric whenever he steps onto the bully pulpit. But as promising as the picture might seem, his promise of ‘getting back our jobs’ will be very difficult to carry off in the long term.

From the 1980s through the 2000s, the world underwent immense changes, the most prominent and significant of them in the realm of technology. The internet generates new, mind-boggling marvels with each passing day. Through the ‘Internet of Things’ and automation, people are experiencing massive changes in the way the world works, while scientists sign open letters warning of the dangers of rising AI. U.S. politicians and the media have typically blamed offshoring (usually to China) and international trade agreements for wrecking the domestic economy. A University of California study asserts that approximately 14 million white-collar jobs are susceptible to off-shoring [5]. Ron Hira and Anil Hira, in their book “Outsourcing America,” argue that the claim by US companies that off-shoring creates more jobs domestically through cost savings is “self-delusion.” (Ron Hira is a professor at the Rochester Institute of Technology and Anil Hira is a professor at Simon Fraser University.)

In other words, it is not the intervention of foreigners but automation that leads to the scarcity of jobs. There are two vocal camps on this issue. One believes that automation, far from creating a paucity of jobs, leads to the creation of more job opportunities, and that, despite the spread of AI and factory robots, most jobs will remain intact in the coming years, as reported in a research paper issued by the non-partisan Pew Research Center. Experts surveyed by Pew took this more optimistic view: “many jobs currently performed by humans will be substantially taken over by robots or digital agents by 2025. But they have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution”. However, there are dissenters as well.

Justin Reich, a fellow at Harvard University’s Berkman Center for Internet & Society, says: “Robots and AI will increasingly replace routine kinds of work – I’m not sure that jobs will disappear altogether, though that seems possible, but the jobs that are left will be lower-paying and less secure than those that exist now. The middle is moving to the bottom.”

One can see very clearly how technologies are replacing even white-collar jobs, breaking the presumption that only routine and repetitive jobs are at risk from automation. Take, for example, the case of Enlitic, a deep-learning system now being tested in Australia. The software can diagnose diseases, analyze X-rays and identify cancer. Nor is medicine the only profession feeling the heat of automation: jobs in the field of law are also vulnerable. Software already exists that can rummage through dossiers of legal documents and quickly pinpoint the desired files.


Automation is now “blind to the color of your collar”, declares Jerry Kaplan, author of “Humans Need Not Apply”, a book that predicts upheaval in the labor market.

Another Perspective

The other camp, however, presents a more positive future. Its proponents debunk the ‘lump of labor’ fallacy, which holds that there is a finite amount of work, so that every automated task takes a job away from a person. Instead, they argue, automating a task creates more tasks, since more people or different processes are required to operate that ‘automated’ job. As The Economist reports, quoting James Bessen, an economist at Boston University School of Law: “During the Industrial Revolution more and more tasks in the weaving process were automated, prompting workers to focus on the things machines could not do, such as operating a machine, and then tending multiple machines to keep them running smoothly. This caused output to grow explosively. In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labor required per yard of cloth fell by 98%. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. In other words, technology gradually changed the nature of the weaver’s job, and the skills required to do it, rather than replacing it altogether.”


“We already have cars that talk to us, a phone we can talk to, robots that lift the elderly out of bed, and apps that remind us to call Mom. An app can dial Mom’s number and even send flowers, but an app can’t do the most human of all things: emotionally connect with her,” according to Pamela Rutledge, PhD and director of the Media Psychology Research Center.

When Mr. Trump assumes office on January 20, 2017, one of his first priorities, he says, will be to withdraw from the Trans-Pacific Partnership (TPP). Trump also intends to lure back U.S. companies by offering lower taxes (if not through the sheer brute force displayed in his negotiations with the air-conditioner manufacturer Carrier). Yet at the same time, Trump promises more government spending, e.g. on infrastructure development. Economists generally agree that lower taxes and increased spending will increase U.S. debt, potentially leading to a ruinous outcome for the US economy. Nevertheless, Americans who voted for him are counting on his promises, including bringing thousands of jobs back to the US. Observers must therefore consider the question: which is more dangerous, off-shoring or automation?

Independent economic analyst, writer and editor. He contributes columns to different newspapers and is a columnist for Oilprice.com, where he analyzes crude oil and markets. He is also a sub-editor of an online business magazine and a guest editor at Modern Diplomacy. His interests range from economic history to classical literature.


Science & Technology

Artificial Intelligence and Its Partners

Oleg Shakirov


Authors: Oleg Shakirov and Evgeniya Drozhashchikh*

The creation of the Global Partnership on Artificial Intelligence (GPAI) reflects the growing interest of states in AI technologies. The initiative, which brings together 14 countries and the European Union, will help participants establish practical cooperation and formulate common approaches to the development and implementation of AI. At the same time, it is a symptom of the growing technological rivalry in the world, primarily between the United States and China. Russia’s ability to interact with the GPAI may be limited for political reasons, but, from a practical point of view, cooperation would help the country implement its national AI strategy.

AI Brothers

The Global Partnership on Artificial Intelligence (GPAI) was officially launched on June 15, 2020, at the initiative of the G7 countries alongside Australia, India, Mexico, New Zealand, South Korea, Singapore, Slovenia and the European Union. According to the Joint Statement from the Founding Members, the GPAI is an “international and multistakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth.”

In order to achieve this goal, GPAI members will look to bridge the gap between theory and practice by supporting both research and applied activities in AI. Cooperation will take place in the form of working groups that will be made up of leading experts from industry, civil society and the public and private sectors and will also involve international organizations. There will be four working groups in total, with each group focusing on a specific AI issue: responsible AI; data governance; the future of work; and innovation and commercialization. In acknowledgment of the current situation around the world, the partners also included the issue of using AI to overcome the socioeconomic effects of the novel coronavirus pandemic in the GPAI agenda.

In terms of organization, the GPAI’s work will be supported by a Secretariat to be hosted by the Organisation for Economic Co-Operation and Development (OECD) and Centres of Expertise – one each in Montreal and Paris.

To better understand how this structure came to be, it is useful to look at the history of the GPAI itself. The idea was first put forward by France and Canada in June 2018, when, on the eve of the G7 Summit, Justin Trudeau and Emmanuel Macron announced the signing of the Canada–France Statement on Artificial Intelligence, which called for the creation of an international group to study AI-related issues. By that time, both countries had already adopted their own national AI development strategies – Canada was actually the first country in the world to do so in March 2017. The two countries proposed a mandate for the international group, then known as the International Panel on Artificial Intelligence, at the G7 conference on artificial intelligence in late 2018. A declaration on the creation of the group was then made in May 2019, following a meeting of the G7 Ministers responsible for digital issues. The group was expected to be formally launched three months later at the G7 Summit in Biarritz, with other interested countries (such as India and New Zealand) joining.

However, the initiative did not receive the support of the United States within the G7. Donald Trump and Emmanuel Macron were expected to announce the launch of the group at the end of the event, but the American delegation blocked the move. According to Lynne Parker, Deputy Chief Technology Officer at the White House, the United States is concerned that the group would slow down the development of AI technology and believes that it would duplicate the OECD’s work in the area. The originators of the idea to create the group (which received the name Global Partnership on Artificial Intelligence in Biarritz) clearly took this latter point into account, announcing that the initiative would be developed under the auspices of the OECD.

A Principled Partnership

Like other international structures, the OECD has started to pay greater attention to artificial intelligence in recent years, with its most important achievement in this area being the adoption of the Recommendation of the Council on Artificial Intelligence. Unlike other sets of principles on AI, the OECD’s recommendations were supported by the governments of all member countries, as well as by Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, which made it the first international document of its kind. They were also used as the basis for the Global Partnership on Artificial Intelligence.

In accordance with the OECD recommendations, signatory countries will adhere to the following principles of AI development: the promotion of AI technologies for inclusive growth, sustainable development and well-being; the priority of human-centred values and fairness throughout the life-cycle of AI systems; the transparency and (maximum possible) explainability of AI algorithms; the robustness, security and safety of AI systems; and the accountability of AI actors.

In addition to this, the document proposes that the following factors be taken into account when drafting national AI development strategies: investing in AI research and development; fostering a digital ecosystem for AI research and the practical implementation of AI technologies (including the necessary infrastructure); shaping national policies that allow for a smooth transition from theory to practice; building human capacity and preparing for labour market transformation; and expanding international cooperation in AI.

A few weeks after the OECD endorsement, the recommendations on AI were included as an annex to the G20 Ministerial Statement on Trade and Digital Economy dated July 9, 2019, albeit with slightly different wording. The principles thus received the support of Russia, China and India.

Within the OECD itself, the recommendations served as an impetus for the creation of the OECD AI Policy Observatory (OECD.AI), a platform for collecting and analysing information about AI and building dialogue with governments and other stakeholders. The platform will also be used within the framework of the Global Partnership on Artificial Intelligence.

Artificial Intelligence and Realpolitik

The decision of the United States to join the GPAI was likely motivated more by political reasons than anything else. In the run-up to the G7 Science and Technology Ministers’ Meeting in late May 2020 (where all participants, including the United States, officially announced the launch of the GPAI), Chief Technology Officer of the United States Michael Kratsios published an article in which he stated that democratic countries should unite in the development of AI on the basis of fundamental rights and shared values, rather than abuse AI to control their populations, which is what authoritarian regimes such as China do. According to Kratsios, it is democratic principles that unite the members of the GPAI. At the same time, Kratsios argues that the new coalition will not be a standard-setting or policy-making body, that is, it will not be a regulator in the field of AI.

The United States Strategic Approach to the People’s Republic of China published in May 2020 and the many practical steps that the American side has taken in recent years are a reflection of the tech war currently being waged between the United States and China. For example, the United States has taken a similar approach to the formation of new coalitions in the context of 5G technologies. In 2018–2019, the United States actively pushed the narrative that the solutions offered by Huawei for the creation of fifth-generation communications networks were not secure and convinced its allies to not work with Beijing. Thirty-two countries supported the recommendations put forward at the Prague 5G Security Conference in May 2019 (the Prague Proposals), which included ideas spread by the United States during its campaign against Huawei (for example, concerns about third countries influencing equipment suppliers).

The United States is not the only GPAI member that is concerned about China. Speaking back in January about the U.S. doubts regarding the Franco–Canadian initiative, Minister for Digital Affairs of France Cédric O noted, “If you don’t want a Chinese model in western countries, for instance, to use AI to control your population, then you need to set up some rules that must be common.” India’s participation in the GPAI is particularly telling, as the United States has been trying to involve India in containing China in recent years. The new association has brought together all the participants in the Quadrilateral Security Dialogue (Australia, India, the United States and Japan), which has always been a source of concern for Beijing, thus sending a very clear signal to the Chinese leadership.

The Prospects for Russia

The political logic that guides the United States when it comes to participating in the Global Partnership on Artificial Intelligence may very well extend to Russia. The Trump administration formally declared the return of great power competition in its 2017 National Security Strategy. In Washington, Russia and China are often referred to as the main rivals of the United States, promoting anti-American values.

When assessing the possibility of interaction between Russia and the GPAI, we need to look further than the political positions of the participants. According to the Joint Statement from the Founding Members, the GPAI is open to working with other interested countries and partners. In this regard, the obvious points of intersection between Russia and the new association may produce favourable conditions for practical cooperation in the future.

First of all, the GPAI members and Moscow rely on the same principles of AI development. Russia indirectly adopted the OECD recommendations on artificial intelligence when it approved the inclusion of the majority of their provisions in the Annex to the G20 Ministerial Statement on Trade and Digital Economy in 2019 and thus shares a common intention to ensure the responsible and human-centred development and use of artificial intelligence technologies. This does not mean that there will not be differences of opinion on specific issues, but, as we have already noted, in its current form, the activities of the GPAI will not be aimed at unifying the approaches of the participants.

Second, according to media reports, Russia is working to re-establish ties with the OECD. It is already contributing to the OECD’s website, periodically providing data on adopted or pending legal documents that will create a framework for the development and implementation of AI.

Third, the current development of the national AI ecosystem in Russia shows that the state, business and the scientific community are interested in the same topics that are on the GPAI agenda. This is reflected in the National Strategy for the Development of Artificial Intelligence for the Period up to the Year 2030 adopted in October 2019 and the draft Federal Project on the Development of Artificial Intelligence as Part of the National Programme “Digital Economy of the Russian Federation.” Furthermore, following the adoption of the National Strategy last year, Russian tech companies set up an alliance for AI development in conjunction with the Russian Direct Investment Fund, which is very much in keeping with the multistakeholder approach adopted by the Global Partnership on Artificial Intelligence.

It would seem that politics is the main stumbling block to Russia’s possible participation in GPAI initiatives: the organization’s clear anti-Chinese leaning, for example, or the prospect of its members openly discrediting Russia’s approaches to the development of AI. That said, Russia has nothing to gain from politicizing the GPAI, since cooperation with the organization could help it achieve its own goals in artificial intelligence. What is more, we cannot rule out the possibility that the GPAI will be responsible in the future for developing unified AI rules and standards. It is in Russia’s interests to have its voice heard in this process to ensure that these standards do not turn into yet another dividing line.

*Evgeniya Drozhashchikh, Ph.D. Student in the Faculty of World Politics at Lomonosov Moscow State University, RIAC Expert

From our partner RIAC


Science & Technology

5G: A Geostrategic Sector for Algorithmic Finance


Recent days have brought increasing tensions between the two biggest economic superpowers, the USA and China. The geopolitical crescendo grows ever more intense, and the two giants are trying to build up two strong alignments against one another in a competition that Bloomberg defines as a “Cold War 2.0” or a “Tech War”. The implementation of 5G technologies plays a fundamental role in this “rush to the infrastructures”, partly because of its linkages with the world of “High-Frequency Trading”: the sector of contemporary finance based on ever-faster algorithms and huge data centres, which require powerful software and the analysis of enormous volumes of information to predict stock fluctuations, and hence “to do God’s work”, as Lloyd Blankfein (now Senior Chairman of Goldman Sachs) said in 2009 [1].

In the article “Digital Cold War” Marc Champion describes a strongly polarized global scenario split into two technological ecospheres: in one half of the world, people are carried around in driverless cars created by Baidu over Huawei’s 5G, chat and pay with WeChat, and buy on Alibaba with an internet connection strictly controlled and limited by the Great Firewall; in the other, people with a less controlled internet connection buy on Amazon and use the other dominating companies, e.g. Google, Tesla, Ericsson and Facebook. This scenario grows ever more tangible: it is enough to consider that the People’s Republic of China has equipped itself with an alternative system to GPS, BeiDou, completed on 23 July 2020 (GPS being the instrument that, according to the PRC, caused the downfall of two Chinese missiles launched during the conflict with Formosa).

The prominence of 5G in these scenarios makes it necessary to take a closer look at what 5G technologies technically are. 5G (fifth generation) stands for the next major phase of mobile telecommunications standards beyond the 4G/IMT-Advanced standards. Since the first generation was introduced in 1982, cellular communication has grown remarkably, at about 40% per year, pushing mobile service providers to research new technologies and improved services as wireless communication networks have become ever more pervasive. Aiming to fulfill these growing needs, 5G will be the network for millions of devices, not just smartphones: it grants connectivity between sensors, vehicles, robots and drones, provides data speeds of 1 to 10 Gbps, and offers a faster connection for more people per km², enabling the creation of smart cities.
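To make those headline rates concrete, here is a back-of-envelope sketch. The 5G figures (1 and 10 Gbps) come from the paragraph above; the 4G comparison rate of roughly 0.1 Gbps is an assumed typical peak, not a figure from the article:

```python
# Back-of-envelope comparison of transfer times at 4G vs 5G peak rates.
# 5G rates (1-10 Gbps) are from the text; the 4G figure is an assumption.

def transfer_time_seconds(size_gigabytes: float, rate_gbps: float) -> float:
    """Time to move `size_gigabytes` of data at `rate_gbps` (gigabits per second)."""
    size_gigabits = size_gigabytes * 8  # 1 byte = 8 bits
    return size_gigabits / rate_gbps

size_gb = 50  # e.g. a day of sensor data from a fleet of connected vehicles
for label, rate in [("4G (~0.1 Gbps)", 0.1),
                    ("5G low (1 Gbps)", 1.0),
                    ("5G high (10 Gbps)", 10.0)]:
    print(f"{label:>16}: {transfer_time_seconds(size_gb, rate):8.1f} s")
```

At the high end, a transfer that takes over an hour on 4G drops to under a minute, which is what makes dense sensor networks and vehicle-to-everything links plausible at all.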

It is now more evident that the implementation of fifth-generation technologies hands strategic levers of power and control to the companies, and the linked geopolitical actors, that manage the infrastructure and the network band. 5G therefore plays a crucial geopolitical role, being fundamental, inter alia, for strategic sectors, such as high-frequency trading (discussed later), that sustain and orientate the world’s economy. This rush to the infrastructure, and hence to technological supremacy, has led to a crescendo of reprisals among the world’s most influential countries. Looking at relations between the USA and China, the last few years have been marked by increasing tensions: in commercial relations, in military matters linked to the Indo-Pacific area and Xinjiang, and lastly in the approach to the COVID-19 threat. The USA and China, as Kishore Mahbubani says, no longer seem to be partners even in business; but to fully understand the situation in terms of 5G, the concrete measures and imposed bans must be considered. Chinese companies, in particular Huawei and ZTE, began focusing on acquiring a lead in 5G intellectual property well before their global competitors (with an expense, indicated in their annual reports, of about $600 million between 2009 and 2013, and a planned one of about $800 million in 2019) and are now leaders in implementing the technology, which has made the US increasingly concerned about its national security and global influence.
Therefore, in the geopolitical logic of 5G, the US keeps portraying China as a country that “exploits data”; indeed, Mike Pompeo said in a 2019 interview, “We can’t forget these systems were designed by- with the express (desire to) work alongside the Chinese PLA, their military in China”. China, for its part, has responded with a campaign that blends propaganda, persuasion and incentives with threats and economic coercion, offering massive investment plans in pursuit of the now well-known Belt and Road Initiative. The Trump administration effectively banned executive agencies from using or procuring Huawei and ZTE telecommunications equipment with the National Defense Authorization Act signed in 2018, a ban that Huawei challenged in court, obtaining a favourable verdict, and that was re-proposed in May 2019 with an executive order. The US Commerce Department then placed Huawei and 68 affiliates on an Entity List, a document that conditions the sale or transfer of American technology to those entities on a special license; these restrictions were, however, relaxed for 90 days following the failure of the 11th round of trade talks between China and the US. Canada is another country whose relations with Beijing have deteriorated, after the arrest of Huawei CFO Meng Wanzhou on a US extradition request. Furthermore, in recent days, as reported by The Wall Street Journal, the UK announced that it will ban Huawei 5G technologies from 2027, following the US imposition, and Beijing responded by considering a possible ban on Chinese components for Finland’s Nokia and Sweden’s Ericsson.
Meanwhile the European Union keeps struggling to face the situation as a Union, and political reprisals between the two states continue, e.g. the closure of the Chinese consulate in Houston and the closure of the US consulate in Chengdu. In the broader geopolitical context, the US is trying to build a strong anti-Chinese alignment in the Indo-Pacific area with the support of countries like the Philippines, Singapore, Taiwan, South Korea, Japan, Australia and New Zealand, following the logic that, between two competing states, geopolitical actors tend to side with the more distant one. Another actor that could tip the balance in this global scenario is India, which, according to an August 2018 government study, could add about $1 trillion to national income by 2035 by implementing 5G technologies, improving governance capacity and enabling healthcare delivery, energy-grid management and urban planning. However, high levels of automation and dependence on a communication network built on Huawei’s proposed investment plan (or, equally, an extreme inclination toward the US) could bring security threats and a loss of supremacy, hence of “voice”, in the global scenario.

Having analysed the geopolitical patterns of 5G implementation, it is time to analyse a strategic sector linked to fifth-generation technologies and the “engine” of the world’s economy: finance. A few milestones turned national markets into global ones, within what is called the “rebellion of the machines” that left the financial world entirely based on algorithms, hence on speed. The first was the telegraph, introduced in 1848, which, together with the new Galena and Chicago Railroad, fostered the birth of the Chicago Board of Trade. The telegraph carried anthropological changes: it was fundamental in separating the price from the physical goods, and it brought big changes to the world of finance, much as 5G will do in our scenario. Among the events that led to the second phase of the “rebellion” is what happened in 2000, when, after merging with other European markets thanks to SuperCAC, the Paris stock exchange took the name EURONEXT. The second phase then unfolded from 2007 in an increasingly globalized scenario, where technology was already part of finance and many digitalized trading platforms existed. The Chicago Mercantile Exchange had created its own platform, Globex, which in 2007 merged with the CBOT’s platform, Aurora, originally based on a weak-band network of about 19.2 kb. The banks created black boxes so opaque that they were no longer fully in control; a very different situation from the conditions established by the Buttonwood Agreement of 1792, the act at the basis of the birth of the second market in the world after Philadelphia’s, which provided for the sale of securities between traders without going through intermediaries.
Subsequently, the steps that favoured the rise of trading platforms and the development of adaptive algorithms based on the laws of physics, mathematics and biology were multiple, leading to the development of what is called “phynanza”. In the 2000s the most influential banking groups (Goldman Sachs, Crédit Suisse, BNP Paribas, Barclays, Deutsche Bank, Morgan Stanley, Citigroup), through strong deregulation and lobbying activities, steered the markets towards their deepest turning point: an era in which the headquarters of the stock exchanges are no longer physical and the core bodies of the exchange markets sit in the suburbs, where large spaces and the technological infrastructure for network and data transmission allow the creation of huge data centres, in which powerful software, cooling systems and adaptive algorithms give life to the daily oscillations of global finance. Consider the algorithms themselves: Iceberg, which splits a large volume of orders into small portions so that the full initial volume escapes the “nose of the hounds”; Shark, which identifies orders shipped in small quantities to glimpse the big order hiding behind them; Dagger, a Citibank algorithm launched in 2012 that, like Stealth, Deutsche Bank’s algorithm, looks for more liquid values; Sumo, of Knight Capital, a high-frequency trading company that alone trades about $20 billion a day; and many others, such as Sonar, Aqua, Ninja and Guerrilla.
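The order-slicing idea behind an algorithm like Iceberg can be sketched in a few lines. The function below is purely illustrative and has nothing to do with any bank's proprietary code; real implementations add randomized tranche sizes, timing and price logic precisely to evade pattern-spotters like Shark:

```python
# Minimal sketch of "iceberg" order slicing: a large parent order is shown
# to the market only in small visible tranches, so its full size stays hidden.
# All names and sizes here are illustrative, not any firm's actual logic.

def iceberg_slices(total_quantity: int, visible_quantity: int) -> list[int]:
    """Split a parent order into a sequence of visible child orders."""
    slices = []
    remaining = total_quantity
    while remaining > 0:
        tranche = min(visible_quantity, remaining)  # never show more than the peak size
        slices.append(tranche)
        remaining -= tranche
    return slices

# A 100,000-share order displayed 1,500 shares at a time:
children = iceberg_slices(100_000, 1_500)
print(len(children), children[0], children[-1])  # → 67 1500 1000
```

Each child order looks like ordinary retail-sized flow; only when one tranche fills is the next one exposed, which is exactly the pattern that detection algorithms try to reassemble.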

Clearly, supporting such an articulated financial apparatus requires connecting and analysing data with microsecond accuracy. Another example of 5G geostrategy in finance is therefore Coriolis 2, an oceanographic ship built in 2010 and operated by Seaforth Geosurveys, which offers maritime engineering solutions. Notably, among its clients is Hibernia Atlantic, an underwater communication network connecting North America to Europe, created in 2000 at a cost of $1 billion. Its New Jersey office manufactures transatlantic cables that it rents to telecommunications companies like Google and Facebook (obviously not to improve the circulation of stupid comments on social networks). The ship is preparing the construction of “dark fiber” cables whose technical management and end use remain with Hibernia, which need not share the band with anyone. The peculiar thing is that the company that ordered the cable, Hibernia, was created specifically for financial-market operators and is part of the Global Financial Network (GFN), which manages 24,000 km of optical fiber connecting more than 120 markets. This new fiber, at a cost of $300 million, will allow a gain of 6 milliseconds, a time that a US–EU investment fund can use to earn an extra $100 million per year. Transmission networks are fundamental in guaranteeing high-frequency trading, and the motto has changed from “time is money” to “speed is money”.
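The value of those milliseconds is simple propagation arithmetic: light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km/s, so a shorter route delivers orders measurably earlier. A rough sketch, with route lengths that are assumptions for illustration rather than Hibernia's actual figures:

```python
# Rough propagation-delay arithmetic for a transatlantic fiber route.
# Light in fiber travels at ~2/3 the vacuum speed of light (~200,000 km/s).
# Route lengths below are illustrative assumptions, not Hibernia's specifications.

SPEED_IN_FIBER_KM_PER_S = 200_000

def one_way_delay_ms(route_km: float) -> float:
    """One-way propagation delay in milliseconds over `route_km` of fiber."""
    return route_km / SPEED_IN_FIBER_KM_PER_S * 1000

old_route_km = 6_500  # assumed older, less direct transatlantic path
new_route_km = 5_900  # assumed shorter "dark fiber" path

saving = one_way_delay_ms(old_route_km) - one_way_delay_ms(new_route_km)
print(f"one-way saving: {saving:.1f} ms")  # the shorter route's orders arrive first
```

Under these assumed lengths the one-way saving is about 3 ms, i.e. roughly 6 ms on a round trip, which is the order of magnitude the article cites; in a market where the fastest order wins, that gap is the entire edge.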

Bibliography

[1] Laumonier, A. (2018). 6/5. Not, Nero Collection, Rome.
[2] Kewalramani, M., Kanisetti, A. (2019). 5G, Huawei & Geopolitics: An Indian Roadmap. Takshashila Institution, Discussion Document.

From our partner International Affairs

Continue Reading

Science & Technology

AI Is Neither the Terminator Nor a Benevolent Super Being

Anastasia Tolstukhina


Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.

On 1–3 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.

Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?

The fundamental difference between today’s technologies and those of the past is that they hold up a “mirror” of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In times of industrialization and production automation, the human being was a productive force. Today, people are no longer needed in the production of the technologies they use. For example, innovative Japanese automobile assembly plants barely have any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing – the assembly of a finite set of elements in a sequence, a task which robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world – questions we should be answering today, and not in ten years when it will be too late.

At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the digital dictatorship of AI. How real is that threat in the foreseeable future?

There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.

Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no “I.” Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing “mystical” about them either.

My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even “live” in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called “weak” AI.

We inflate the bubble of AI’s importance and erroneously impart this technology stack with subjectivity. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly based on big data.

Various high-level committees are discussing “strong” AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.

Sensationalizing threats is a trend in modern society. We take a problem that feeds people’s imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.

As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.

Yes, there is the danger that human consciousness may be “robotized” and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.

There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?

I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.

But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.

There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.

You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?

The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a “quasi-member of society.” In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity of such a scale that all previous atrocities in the history of our civilization would pale in comparison.

Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?

Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. “Let us give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive.” In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else’s problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.

Thankfully, we have managed to remove any inserts relating to “quasi-members of society” from the group’s agenda.

We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI – in fact, most oppose it.

What other controversial issues have arisen in the working group’s discussions?

We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of human-AI relationships, a term which implies the whole range of relationships possible between people. We suggested that “relationships” be changed to “interactions,” which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.

Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.

Is the Russian approach to AI ethics special in any way?

We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.

I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.
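The scenario-modelling workflow described here can be sketched in miniature: generate many randomized scenarios cheaply, then reason from aggregate statistics rather than from single anecdotes. In this sketch the outcome model and its two uncertain factors are hypothetical placeholders, not a real policy model:

```python
# Minimal Monte Carlo scenario modelling: run many cheap randomized
# simulations and base conclusions on the aggregate statistics.
import random
import statistics

def simulate_scenario(rng: random.Random) -> float:
    """One randomized scenario: a toy outcome score clipped to [0, 1]."""
    adoption = rng.uniform(0.0, 1.0)   # how widely a measure is adopted (assumed factor)
    shock = rng.gauss(0.0, 0.1)        # unmodelled external disturbance (assumed factor)
    return min(max(adoption * 0.8 + shock, 0.0), 1.0)

def run_experiment(n_scenarios: int, seed: int = 42) -> tuple[float, float]:
    """Mean and standard deviation of outcomes over n_scenarios runs."""
    rng = random.Random(seed)
    outcomes = [simulate_scenario(rng) for _ in range(n_scenarios)]
    return statistics.mean(outcomes), statistics.stdev(outcomes)

mean, spread = run_experiment(100_000)
print(f"mean outcome: {mean:.3f} +/- {spread:.3f}")
```

A hundred thousand scenarios of this kind run in seconds on a laptop, which is the "low cost" the interviewee refers to; real decision models would of course be far richer than this toy.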

How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?

Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies with matching views on different aspects of the problem. And common sense still prevails. The egocentric approach currently being promoted in a number of countries, this kind of self-absorption, actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.

If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?

If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 “Artificial Intelligence” (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on Ethically Aligned Design. I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.

As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.

So the recommendations will become the basis for regulatory standards?

Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.

Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?

We haven’t seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots) – these are the achievements that we have so far. In order to make a qualitative leap, we need a different approach.

Take the “chemical universe,” for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is 10⁶⁰, which is more than the number of atoms in the Universe. This “chemical universe” could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this “chemical universe.” Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.
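The “navigator” role described above is, in practice, virtual screening: score a huge pool of candidate compounds with a cheap surrogate model and pass only the most promising few on to expensive synthesis and testing. In the sketch below, surrogate_score is a random stand-in for a real learned property predictor:

```python
# Virtual screening sketch: rank many candidate compounds by a cheap
# surrogate score and shortlist the top few for laboratory work.
import random

def surrogate_score(compound_id: int, rng: random.Random) -> float:
    """Hypothetical predicted activity for a candidate compound (random stand-in)."""
    return rng.random()

def screen(n_candidates: int, top_k: int, seed: int = 0) -> list[int]:
    """Rank n_candidates by surrogate score and return the top_k compound ids."""
    rng = random.Random(seed)
    scored = [(surrogate_score(i, rng), i) for i in range(n_candidates)]
    scored.sort(reverse=True)                 # best-scoring candidates first
    return [i for _, i in scored[:top_k]]

# Screen a hundred thousand virtual compounds in seconds; only 100 go to the lab.
shortlist = screen(100_000, top_k=100)
print(len(shortlist))
```

Replacing the random stand-in with a model trained on known compound-property data is what turns this loop into a genuine navigator of chemical space.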

How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?

We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a “donor” of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.

As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.

In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology – organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.

Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to “overtake without catching up.” If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.

We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone else’s standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.

From our partner RIAC
