Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.
On 1–3 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.
Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?
The fundamental difference between today’s technologies and those of the past is that they hold up a “mirror” of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In times of industrialization and production automation, the human being was a productive force. Today, people are no longer needed in the production of the technologies they use. For example, innovative Japanese automobile assembly plants barely have any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing – the assembly of a finite set of elements in a sequence, a task which robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world – questions we should be answering today, and not in ten years when it will be too late.
At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the digital dictatorship of AI. How real is that threat in the foreseeable future?
There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.
Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no “I.” Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing “mystical” about them either.
My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even “live” in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called “weak” AI.
We inflate the bubble of AI’s importance and erroneously impart this technology stack with subjectivity. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly based on big data.
Various high-level committees are discussing “strong” AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.
Sensationalizing threats is a trend in modern society. We take a problem that feeds people’s imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.
As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.
Yes, there is the danger that human consciousness may be “robotized” and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.
There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?
I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.
But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.
There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.
You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?
The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a “quasi-member of society.” In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity of such a scale that all previous atrocities in the history of our civilization would pale in comparison.
Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?
Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. “Let us give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive.” In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else’s problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.
Thankfully, we have managed to remove any inserts relating to “quasi-members of society” from the group’s agenda.
We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI – in fact, most oppose it.
What other controversial issues have arisen in the working group’s discussions?
We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of human-AI relationships, a term which implies the whole range of relationships possible between people. We suggested that “relationships” be changed to “interactions,” which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.
Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.
Is the Russian approach to AI ethics special in any way?
We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.
I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.
How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?
Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies with matching views on different aspects of the problem. And common sense still prevails. The egocentric approach currently being promoted in a number of countries – this kind of self-absorption – actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.
If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?
If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 “Artificial Intelligence” (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on Ethically Aligned Design. I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.
As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.
So the recommendations will become the basis for regulatory standards?
Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.
Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?
We haven’t seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots) – these are the achievements that we have so far. In order to make a qualitative leap, we need a different approach.
Take the “chemical universe,” for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is 10^60, which is more than the number of atoms in the Universe. This “chemical universe” could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this “chemical universe.” Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.
How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?
We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a “donor” of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.
As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.
In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology – organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.
Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to “overtake without catching up.” If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.
We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone else’s standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.
From our partner RIAC
What is a ‘vaccine passport’ and will you need one the next time you travel?
Is the idea of a vaccine passport entirely new?
The concept of a passport to allow for cross-border travel is something that we’ve been working on with the Common Trust Network for many months. The focus has been first on diagnostics. That’s where we worked with an organization called “The Commons Project” to develop the “Common Trust Framework”. This is a set of registries of trusted data sources, a registry of labs accredited to run tests and a registry of up-to-date border crossing regulations.
The set of registries can be used to generate certificates of compliance to prevailing border-crossing regulations as defined by governments. There are different tools to generate the certificates, and the diversity of their authentication solutions and the way they protect data privacy is quite remarkable.
We at the Forum have no preference when it comes to who is running the certification algorithm, we simply want to promote a unique set of registries to avoid unnecessary replication efforts. This is where we support the Common Trust Framework. For instance, the Common Pass is one authentication solution – but there are others, for example developed by Abbott, AOK, SICPA (Certus), IBM and others.
How does the system work and how could it be applied to vaccines?
The Common Trust Network, supported by the Forum, is combining the set of registries that are going to enrol all participating labs. Separately from that, it provides an up-to-date database of all prevailing border entry rules (which fluctuate and differ from country to country).
Combining these two datasets provides a QR code that border entry authorities can trust. It doesn’t reveal any personal health data – it tells you about compliance of results versus border entry requirements for a particular country. So, if your border control rules say that you need to take a test of a certain nature within 72 hours prior to arrival, the tool will confirm whether the traveller has taken that corresponding test in a trusted laboratory, and the test was indeed performed less than three days prior to landing.
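The compliance check described above – matching a traveller’s test against a registry of accredited labs and a registry of per-country entry rules – can be sketched in a few lines. This is a minimal illustration only; the registry names, rule fields and test schema below are assumptions for the sketch, not the actual Common Trust Framework data model.

```python
from datetime import datetime, timedelta

# Hypothetical registries (illustrative only).
TRUSTED_LABS = {"lab-001", "lab-042"}          # labs accredited to run tests
BORDER_RULES = {                                # per-country entry requirements
    "NL": {"test_type": "PCR", "max_age_hours": 72},
}

def check_compliance(country: str, test: dict) -> bool:
    """Return True if the traveller's test satisfies the destination's entry rules,
    without exposing any health data beyond the yes/no result."""
    rule = BORDER_RULES.get(country)
    if rule is None:
        return False                            # no rule on file: cannot certify
    if test["lab_id"] not in TRUSTED_LABS:
        return False                            # test must come from an accredited lab
    if test["type"] != rule["test_type"]:
        return False                            # must be the required kind of test
    age = datetime.utcnow() - test["taken_at"]
    return age <= timedelta(hours=rule["max_age_hours"])  # taken recently enough
```

In a real deployment the boolean result, not the underlying health record, would be what gets encoded into the QR code presented at the border.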
The purpose is to create a common good that many authentication providers can use and to provide anyone, in a very agnostic fashion, with access to those registries.
What is the WHO’s role?
There is currently an effort at the WHO to create standards for processing data on the types of vaccinations and on how these are channelled into health and healthcare system registries. The use cases – beyond the management of vaccination campaigns – include border control, and possibly, in the future, access to stadia or large events. By establishing harmonized standards in a truly ethical fashion, we can avoid a scenario whereby two classes of citizens are created – those who have been vaccinated and those who have not.
So rather than building a set of rules that would be left to the interpretation of member states or private-sector operators like cruises, airlines or conveners of gatherings, we support the WHO’s effort to create a standard for member states on requesting vaccinations and on how vaccination status would be used across the various kinds of use cases.
It is important that we rely on the normative body (the WHO) to create the vaccine credential requirements. The Forum is involved in the WHO taskforce to reflect on those standards and think about how they would be used. The WHO’s goal is to deploy standards and recommendations by mid-March 2021, and the hope is that they will be more harmonized between member states than they have been to date in the field of diagnostics.
What about the private sector and separate initiatives?
When registry frameworks are being developed for authentication tools providers, they should at a minimum feed as experiments into the standardization efforts being driven by WHO, knowing that the final guidance from the only normative body with an official UN mandate may in turn force those providers to revise their own frameworks. We certainly support this type of interaction, as public- and private-sector collaboration is key to overcoming the global challenge posed by COVID-19.
What more needs to be done to ensure equitable distribution of vaccines?
As the WHO has warned, vaccine nationalism – or a hoarding and “me-first” approach to vaccine deployment – risks leaving “the world’s poorest and most vulnerable at risk.”
COVAX, supported by the World Economic Forum, is coordinated by the World Health Organization in partnership with Gavi, the Vaccine Alliance; CEPI, the Coalition for Epidemic Preparedness Innovations; and others. So far, 190 economies have signed up.
The Access to COVID-19 Tools Accelerator (ACT-Accelerator) is another partnership, with universal access and equity at its core, that has been successfully promoting global collaboration to accelerate the development, production and equitable access to COVID-19 tests, treatments and vaccines. The World Economic Forum is a member of the ACT-Accelerator’s Facilitation Council (governing body).
Iran among five pioneers of nanotechnology
Prioritizing nanotechnology has kept Iran steadily among the five leading countries in the field in recent years, and approximately 20 percent of all articles published by Iranian researchers in 2020 related to this area of technology.
Iran has been ranked the 4th leading country in the world in the field of nanotechnology, having published 11,546 scientific articles in 2020.
The country held a 6 percent share of the world’s total nanotechnology articles, according to StatNano’s monthly evaluation conducted using the Web of Science (WoS) databases.
There are 227 Iranian companies registered in the database, manufacturing 419 products, mainly in the fields of construction, textiles, medicine, home appliances, automotive and food.
According to the data, 31 Iranian universities and research centers published more than 50 nano-articles in the last year.
Continuing its trend of the past few years, China ranks first with 78,000 nano-articles (more than 40 percent of all nano-articles in 2020), and the U.S. is second with 24,425 papers. Together, these two countries published nearly half of the world’s nano-articles.
India (9 percent), Iran (6 percent), and South Korea and Germany (5 percent each) are the next largest publishers.
Almost 9 percent of all scientific publications of 2020 indexed in the Web of Science database were related to nanotechnology. In total, 191,304 nano-articles were indexed in WoS, a 9 percent increase over the previous year; these articles represent 8.8 percent of all papers produced in 2020.
Iran ranked 43rd among the 100 most vibrant clusters of science and technology (S&T) worldwide for the third consecutive year, according to the Global Innovation Index (GII) 2020 report.
The country improved three places compared to 2019.
Iran’s share of the world’s top scientific articles is 3 percent, Gholam Hossein Rahimi She’erbaf, the deputy science minister, has announced.
Iran’s share of total worldwide publications is 2 percent, he noted, highlighting that, for three consecutive years, Iran has ranked first among Islamic countries in both the quantity and quality of articles.
Sourena Sattari, vice president for science and technology, has said that Iran plays the leading role in the region in the fields of fintech, ICT, stem cells and aerospace, and is unrivaled in artificial intelligence.
From our partner Tehran Times
Free And Equal Internet Access As A Human Right
Free and equal internet access is vitally important in the contemporary world. Today, more than 4 billion people around the world use the internet. The internet has become a crucial medium through which the right to freedom of speech and the right to access information can be exercised, and it is a central tool in commerce, education and culture.
Developing effective policies for both internet safety and equal internet access must be a top priority for governments. The internet gives individuals the power to seek and impart information; states and international organizations like the UN therefore have a key role to play in promoting and protecting internet safety and in ensuring free and equal internet access.
The concept of “network neutrality” is central to any analysis of equal access to the internet and the state policies that regulate it. Network neutrality (NN) can be defined as the rule that all electronic communications and platforms must be treated in a non-discriminatory way, regardless of their type, content or origin. The importance of NN became evident during the COVID-19 pandemic, when millions of students in underdeveloped regions were left behind by their lack of access to online education.
Article 19(2) of the International Covenant on Civil and Political Rights states the following:
“Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.”
Internet access and network neutrality directly affect human rights. The absence of NN undermines human rights and leads to violations of basic rights such as freedom of speech and freedom to access information. Effective policies are needed to uphold NN. Both nation-states and international organizations have important roles in making the internet free, safe and equally accessible to people worldwide. States should take steps to promote equal opportunities, including gender equality, in the design and implementation of information technology, and governments should create and maintain, in law and in practice, a safe and enabling online environment consistent with human rights.
The world’s reliance on the internet makes it easy to fulfil basic civic tasks, but this reliance is also endangered by growing personal and societal cybersecurity threats. In this regard, states must fulfil their commitment to developing effective policies for attaining safe, universal access to the internet.
In closing, internet access should be free and equal for everyone. Creating effective tools to attain universal access to the internet cannot be accomplished by states alone; actors like the UN and the EU have a major role to play in this process as well.