Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.
Ethics and Artificial Intelligence
There is a well-known thought experiment in ethics called the trolley problem. It raises a number of important ethical issues that bear directly on artificial intelligence. Imagine a runaway trolley hurtling down the railway tracks. Five people are tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different track. However, another person is tied to that track. Do you pull the lever or not?
There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even when presented with a more complicated variation of the trolley problem.
As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. The Massachusetts Institute of Technology has created a special website that deals with this very issue: users can test various scenarios out on themselves and decide which courses of action would be the most worthwhile.
Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives at Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded to this immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.
Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.
The Main Problems Facing the Law
The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted, and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, complicating the task of determining responsibility. These two characteristics – unpredictability and the ability to act independently without being held responsible – pose real problems for the law.
There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of ownership, namely animals, since the latter are also capable of autonomous actions. In Russian Law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.
Proposals on the application of the law on animals have been made, although they are somewhat limited. First, the application of legislation on the basis of analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since those rules are more stringent. The question then arises, however, of how to draw such a distinction given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies due to the unexpected liability risks for creators and inventors.
Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities. Since a legal entity is an artificially constructed subject of the law, robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit signs of the ability to do anything are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, both in developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can be considered property and what cannot. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.
Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law. Without determining whether a company (or state) can have free will or intent, or whether it can act deliberately or knowingly, it can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.
The analogy of legal entities is problematic, however, because the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. The actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual performing the illegal action on behalf of the legal entity is identified. The actions of artificial intelligence-based systems cannot necessarily be traced back to the actions of an individual.
Finally, legal norms on the sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that injury has been inflicted as a result of force majeure circumstances or at the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.
National and International Regulation
Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.
I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.
In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus the potential of France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and because these technologies are important for the public interest.
Seven key proposals are made, one of which is of particular interest for the purposes of this article – namely, making artificial intelligence more open. The algorithms used in artificial intelligence are indeed closed and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in the sentencing process was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and were therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, and creating an ethics advisory committee.
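The kind of testing and verification the strategy calls for can be illustrated with a toy transparency check: comparing a model's positive-decision rates across groups (a simple demographic-parity audit). The data, group labels and decisions below are invented purely for illustration.

```python
# Hypothetical audit of a risk model's decisions: does it flag one
# group as "high risk" far more often than another?
decisions = [  # (group, model_flagged_high_risk)
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", True), ("B", False),
]

# Tally per-group totals and positive decisions.
rates = {}
for group, flagged in decisions:
    n, k = rates.get(group, (0, 0))
    rates[group] = (n + 1, k + int(flagged))

for group, (n, k) in sorted(rates.items()):
    print(f"group {group}: flagged {k}/{n} = {k / n:.0%}")
# A large gap between groups is a red flag that the model may have
# absorbed a societal bias, even if its inputs omit group membership.
```

An audit like this only requires black-box access to the model's decisions, which is why transparency advocates argue it should be possible even when the algorithm itself remains a trade secret.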
The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.
The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).
The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.
Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, of the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making the programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue searching for the right balance.
In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically, a goat) was exiled to Siberia for its actions.
First published at our partner RIAC
- 1. See, for example, Edmonds, D. Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
- 2. Asaro, P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby” // Wheeler, M., Husbands, P., Holland, O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
- 3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
- 4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
- 5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
- 6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
- 7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. A description is available here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
- 8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
- 9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
- 10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
- 11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
- 12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.
What is a ‘vaccine passport’ and will you need one the next time you travel?
Is the idea of a vaccine passport entirely new?
The concept of a passport to allow for cross-border travel is something that we’ve been working on with the Common Trust Network for many months. The focus has been first on diagnostics. That’s where we worked with an organization called The Commons Project to develop the Common Trust Framework. This is a set of registries: a registry of trusted data sources, a registry of labs accredited to run tests and a registry of up-to-date border-crossing regulations.
The set of registries can be used to generate certificates of compliance to prevailing border-crossing regulations as defined by governments. There are different tools to generate the certificates, and the diversity of their authentication solutions and the way they protect data privacy is quite remarkable.
We at the Forum have no preference when it comes to who is running the certification algorithm, we simply want to promote a unique set of registries to avoid unnecessary replication efforts. This is where we support the Common Trust Framework. For instance, the Common Pass is one authentication solution – but there are others, for example developed by Abbott, AOK, SICPA (Certus), IBM and others.
How does the system work and how could it be applied to vaccines?
The Common Trust Network, supported by the Forum, maintains the set of registries that enrol all participating labs. Separately from that, it provides an up-to-date database of all prevailing border entry rules (which fluctuate and differ from country to country).
Combining these two datasets provides a QR code that border entry authorities can trust. It doesn’t reveal any personal health data – it tells you about compliance of results versus border entry requirements for a particular country. So, if your border control rules say that you need to take a test of a certain nature within 72 hours prior to arrival, the tool will confirm whether the traveller has taken that corresponding test in a trusted laboratory, and the test was indeed performed less than three days prior to landing.
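The check described here can be sketched in a few lines. This is a minimal illustration, not the actual Common Trust Framework implementation: the registry contents, lab identifiers and rule fields are all hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-ins for the two registries: accredited labs and
# per-country border entry rules (illustrative data only).
TRUSTED_LABS = {"lab-001", "lab-042"}
BORDER_RULES = {"NL": {"test_type": "PCR", "max_age_hours": 72}}

def check_compliance(country, lab_id, test_type, tested_at, arrival):
    """True only if the test comes from an accredited lab and satisfies
    the destination's entry rule. Note that only the test's metadata is
    needed to produce this yes/no answer, not the full health record."""
    rule = BORDER_RULES[country]
    return (
        lab_id in TRUSTED_LABS
        and test_type == rule["test_type"]
        and arrival - tested_at <= timedelta(hours=rule["max_age_hours"])
    )
```

The QR code presented at the border would then encode a signed attestation of this boolean outcome rather than the underlying health data, which is how the scheme avoids revealing personal information.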
The purpose is to create a common good that many authentication providers can use and to provide anyone, in a very agnostic fashion, with access to those registries.
What is the WHO’s role?
There is currently an effort at the WHO to create standards for processing data on the types of vaccinations and how these are channelled into health and healthcare system registries. The use cases – beyond the management of vaccination campaigns – include border control, but possibly also, in the future, access to stadia or large events. By establishing harmonized standards in a truly ethical fashion, we can avoid a scenario whereby we create two classes of citizens – those who have been vaccinated and those who have not.
So rather than building a set of rules that would be left to the interpretation of member states or private-sector operators like cruise lines, airlines or conveners of gatherings, we support the WHO’s effort to create a standard for member states for requesting vaccinations and for how the various kinds of use cases would be permitted.
It is important that we rely on the normative body (the WHO) to create the vaccine credential requirements. The Forum is involved in the WHO taskforce to reflect on those standards and think about how they would be used. The WHO’s goal is to deploy standards and recommendations by mid-March 2021, and the hope is that they will be more harmonized between member states than they have been to date in the field of diagnostics.
What about the private sector and separate initiatives?
When registry frameworks are being developed for authentication tools providers, they should at a minimum feed as experiments into the standardization efforts being driven by WHO, knowing that the final guidance from the only normative body with an official UN mandate may in turn force those providers to revise their own frameworks. We certainly support this type of interaction, as public- and private-sector collaboration is key to overcoming the global challenge posed by COVID-19.
What more needs to be done to ensure equitable distribution of vaccines?
As the WHO has warned, vaccine nationalism – or a hoarding and “me-first” approach to vaccine deployment – risks leaving “the world’s poorest and most vulnerable at risk.”
COVAX, supported by the World Economic Forum, is coordinated by the World Health Organization in partnership with GAVI, the Vaccine Alliance; CEPI, the Coalition for Epidemic Preparedness Innovations; and others. So far, 190 economies have signed up.
The Access to COVID-19 Tools Accelerator (ACT-Accelerator) is another partnership, with universal access and equity at its core, that has been successfully promoting global collaboration to accelerate the development, production and equitable access to COVID-19 tests, treatments and vaccines. The World Economic Forum is a member of the ACT-Accelerator’s Facilitation Council (governing body).
Iran among five pioneers of nanotechnology
Prioritizing nanotechnology has led to Iran’s steady placement among the five pioneers of the nanotechnology field in recent years, and approximately 20 percent of all articles published by Iranian researchers in 2020 relate to this area of technology.
Iran has been ranked as the 4th leading country in the world in the field of nanotechnology, publishing 11,546 scientific articles in 2020.
The country held a 6 percent share of the world’s total nanotechnology articles, according to StatNano’s monthly evaluation conducted in the Web of Science (WoS) databases.
There are 227 companies in Iran registered in the WoS databases, manufacturing 419 products, mainly in the fields of construction, textile, medicine, home appliances, automotive, and food.
According to the data, 31 Iranian universities and research centers published more than 50 nano-articles in the last year.
Continuing its trend of the past few years, China is in first place with 78,000 nano-articles (more than 40 percent of all nano-articles in 2020), and the U.S. is next with 24,425 papers. Together, these two countries published more than half of the world’s nano-articles.
They are followed by India with 9 percent, Iran with 6 percent, and South Korea and Germany with 5 percent each.
In total, 191,304 nano-articles were indexed in WoS in 2020 – a 9 percent increase over the previous year – accounting for 8.8 percent of all scientific papers produced that year.
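As a sanity check, the percentage shares quoted in this piece follow directly from the raw counts it gives (a quick sketch; the variable names are ours):

```python
# Raw counts from the article: total nano-articles indexed in WoS in
# 2020, and the counts for the three largest publishers plus Iran.
total_2020 = 191_304
china, us, iran = 78_000, 24_425, 11_546

print(f"China: {china / total_2020:.1%}")              # > 40 percent
print(f"U.S.:  {us / total_2020:.1%}")
print(f"China + U.S.: {(china + us) / total_2020:.1%}")  # combined share
print(f"Iran:  {iran / total_2020:.1%}")               # ~6 percent, rank 4
```

Running this confirms the stated figures: China alone exceeds 40 percent, Iran sits at roughly 6 percent, and China and the U.S. together account for over half of all indexed nano-articles.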
Iran ranked 43rd among the 100 most vibrant clusters of science and technology (S&T) worldwide for the third consecutive year, according to the Global Innovation Index (GII) 2020 report.
The country experienced a three-level improvement compared to 2019.
Iran’s share of the world’s top scientific articles is 3 percent, Gholam Hossein Rahimi She’erbaf, the deputy science minister, has announced.
The country’s share of all publications worldwide is 2 percent, he noted, highlighting that, for three consecutive years, Iran has ranked first among Islamic countries in terms of both the quantity and quality of its articles.
Sourena Sattari, vice president for science and technology, has said that Iran plays the leading role in the region in the fields of fintech, ICT, stem cells and aerospace, and is unrivaled in artificial intelligence.
From our partner Tehran Times
Free And Equal Internet Access As A Human Right
Having free and equal internet access is very important in the contemporary world. Today, more than 4 billion people around the world use the internet. The internet has become a very important medium through which the right to freedom of speech and the right to access information can be exercised, and a central tool in commerce, education and culture.
Providing solutions and developing effective policies for both internet safety and equal internet access must be a first priority of governments. The internet gives individuals the power to seek and impart information; states and international organizations like the UN therefore play a key role in promoting and protecting free, equal and safe internet access.
The concept of “network neutrality” is significant when analyzing equal access to the internet and the state policies that regulate it. Network neutrality (NN) can be defined as the rule that all electronic communications and platforms should be treated in a non-discriminatory way, regardless of their type, content or origin. The importance of NN became evident during the COVID-19 pandemic, when millions of students in underdeveloped regions suffered from the lack of access to online education.
Article 19/2 of the International Covenant on Civil and Political Rights notes the following:
“Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers either orally, in writing or in print, in the form of art, or through any other media of his choice.”
Internet access and network neutrality directly affect human rights. The lack of NN undermines human rights and causes basic violations such as infringements of freedom of speech and of the freedom to access information. There must be effective policies to pursue NN. Both nation-states and international organizations have important roles in making the internet free, safe and equally reachable for people worldwide. States should take steps to promote equal opportunities, including gender equality, in the design and implementation of information technology. Governments should create and maintain, in law and in practice, a safe and enabling online environment in accordance with human rights.
The whole world relies on the internet to fulfil basic civic tasks, but this reliance is threatened by increasing personal and societal cybersecurity threats. In this regard, states must fulfil their commitment to develop effective policies to attain universal access to the internet in a safe way.
In closing, it can be said that internet access should be free and equal for everyone. Creating effective tools to attain universal access to the internet cannot be done by states alone; actors like the UN and the EU have a major role to play in this process as well.