
Tech

The Ethical and Legal Issues of Artificial Intelligence


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley hurtling down the railway track. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?


There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle finds itself in a situation where an accident is unavoidable. The question thus arises as to whose lives should take priority: those of the passengers, the pedestrians or neither. The Massachusetts Institute of Technology has created a special website that deals with this very issue: users can test out various scenarios on themselves and decide which courses of action would be the most worthwhile.
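To make the dilemma concrete, here is a purely illustrative sketch (not any manufacturer's actual logic; the function and its weight parameter are invented for this article): even the simplest "minimize casualties" rule is itself an ethical commitment, and changing a single weight changes whom the vehicle protects.

```python
def choose_action(passengers, pedestrians, passenger_weight=1.0):
    """Return which group an unavoidable-accident policy protects.

    passenger_weight > 1 encodes a Mercedes-style priority for occupants;
    passenger_weight == 1 is a pure 'minimize casualties' utilitarian rule.
    """
    if passengers * passenger_weight >= pedestrians:
        return "protect passengers"
    return "protect pedestrians"

print(choose_action(1, 3))                        # protect pedestrians
print(choose_action(1, 3, passenger_weight=5.0))  # protect passengers
```

The point of the sketch is that the choice of `passenger_weight` is not a technical parameter at all: it is precisely the ethical and, increasingly, legal question the article describes.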

Other questions also arise in this case: What actions can be allowed from a legal point of view? What should serve as the basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice on the basis of a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are. Those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, thus complicating the task of determining responsibility. These characteristics pose problems of predictability and of the ability to act independently while not being held responsible [3].

There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of ownership, namely animals, since the latter are also capable of autonomous actions. In Russian Law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals on the application of the law on animals have been made [4], although they are somewhat limited. First, the application of legislation on the basis of analogy is unacceptable within the framework of criminal law. Second, these laws have been created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since those rules are more stringent [5]. The question arises here, however, of where to draw the line, given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies because of the unexpected liability risks they create for creators and inventors.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. Historically, for example, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that demonstrate no explicit signs of agency are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or piece of property into a legal entity, expanding the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy with legal entities, however, is problematic, as the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual performing the illegal action on behalf of the legal entity is identified [10]. The actions of artificial intelligence-based systems, by contrast, will not necessarily be traceable to the actions of an individual.

Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure or through the intent of the injured person. The problem lies in identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations of a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where these technologies matter most for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article: making artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and were therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.
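How a self-learning system "absorbs" a societal bias can be shown in a few lines. The following is a deliberately naive, hypothetical sketch (the groups and data are invented, and no real sentencing tool is this simple): a model that merely learns group-level base rates from skewed historical records will faithfully reproduce that skew in its "risk" predictions.

```python
from collections import defaultdict

def train(records):
    """records: iterable of (group, reoffended) pairs from historical data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [reoffences recorded, total]
    for group, reoffended in records:
        counts[group][0] += int(reoffended)
        counts[group][1] += 1
    # The "model" is just each group's recorded re-offence rate.
    return {g: r / t for g, (r, t) in counts.items()}

def predict_risk(model, group):
    return model[group]

# Hypothetical biased history: group B was policed more heavily, so more
# re-offences were *recorded*, not necessarily committed.
history = [("A", True)] * 2 + [("A", False)] * 8 + \
          [("B", True)] * 6 + [("B", False)] * 4

model = train(history)
print(predict_risk(model, "A"))  # 0.2
print(predict_risk(model, "B"))  # 0.6 - the recording bias becomes the "risk"
```

Nothing in the code is malicious; the bias lives entirely in the training data, which is exactly why the French strategy's call for algorithms that can be tested and verified targets the data and criteria, not just the source code.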

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of questions. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers will help us to prevent situations like the one that arose in Russia in the 17th century, when animals (specifically goats) were exiled to Siberia for their actions [12].

First published at our partner RIAC

  1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
  2. Asaro, P., “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P., and Holland, O. (eds.), The Mechanical Mind in History, Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
  7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. A description is available at: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.


Our Shared Digital Future

MD Staff


Building a digital economy and society that is trusted, inclusive and sustainable requires urgent attention in six priority areas according to a new report, Our Shared Digital Future, published by the World Economic Forum today.

The report represents a collaborative effort by business, government and civil society leaders, experts and practitioners. It follows an 18-month dialogue aimed at restoring the internet’s capacity for delivering positive social and economic development.

The report comes at a historic moment: for the first time, more than one-half of the world’s population is connected to the internet. At the same time, fewer than one-half of those already online trust that technology will make their lives better.

With 60% of the global economy forecast to be digitized by 2022, there remains huge potential for the Fourth Industrial Revolution to lift more people out of poverty and strengthen societies and communities. However, success depends on effective collaboration between all stakeholder groups. The authors, in addition to unveiling six key areas for action, also highlight several existing efforts at global and local levels where collaboration is helping to restore trust and deliver broad-based societal benefits.

The six priority areas for multistakeholder collaboration are:

Internet access and adoption

Internet access growth has slowed from 19% in 2007 to 6% in 2017. At the same time, we have reached the milestone of 50% of the world’s population being connected to the internet. To close the digital divide, more investment is needed not only to provide access but also to improve adoption.

Good digital identity

By 2020, the average internet user will have more than 200 online accounts and by 2022, 150 million people are forecast to have blockchain-based digital identities. However, 1 billion people currently lack a formal identity, which excludes them from the growing digital economy. Good digital identity solutions are key to addressing this divide, empowering individuals, and protecting their rights in society.

Positive impact on society

By 2022, an estimated 60% of global GDP will be digitized. In 2018, companies are expected to spend more than $1.2 trillion on digital transformation efforts. Yet, only 45% of the world’s population feel that technology will improve their lives. Companies need to navigate digital disruption and develop new responsible business models and practices.

Cybersecurity

Cyberattacks result in annual losses of up to $400 billion to the global economy. More than 4.5 billion records were compromised by malicious actors in the first half of 2018, up from 2.7 billion records for the whole of 2017. A safe and secure digital environment requires global norms and practices to mitigate cyber-risks.

Governance of the Fourth Industrial Revolution

Policy-makers and traditional governance models are being challenged by the sheer magnitude and speed of the technological changes of the Fourth Industrial Revolution. Developing new and participatory governance mechanisms to complement traditional policy and regulation is essential to ensure widespread benefits, close the digital divide and address the global nature of these developments.

Data

The amount of data that keeps the digital economy flowing is growing exponentially. By 2020, there will be more than 20 billion connected devices globally. Yet there is no consensus on whether data is a type of new currency for companies to trade or a common public good that needs stricter rules and protection. The digital economy and society must bridge this gap by developing innovations that allow society to benefit from data while protecting privacy, innovation and criminal justice.

“The digital environment is like our natural environment,” said Derek O’Halloran, Head, Future of Digital Economy and Society, the World Economic Forum. “We all – governments, businesses, individuals – have a duty to ensure it remains clean, safe and healthy. This paper marks a step forward in offering a blueprint for a better internet we can all work towards: One that is inclusive, trustworthy and sustainable.”

The report is part of ongoing work by the World Economic Forum to provide a platform to accelerate, amplify or catalyse collaborative efforts from business, government, academia and civil society to advance progress towards an inclusive, trustworthy and sustainable digital economy. The report provides an overview of key issues for the digital economy and society, establishes priorities for multistakeholder collaboration for the year ahead, and highlights existing key initiatives and resources.

“Our existing institutions, mechanisms and models are struggling to effectively respond to the pace of digital change and its distributed nature. This report identifies critical areas of focus for public-private partnerships to help restore trust in an inclusive and prosperous digital future,” said Jim Smith, Chief Executive Officer, Thomson Reuters and Co-Chair, World Economic Forum System Initiative on Shaping the Future of Digital Economy and Society.

“While recognizing that digital developments fuel many opportunities in political, commercial and social spheres, a key point of this paper is the need to focus on inclusion and addressing digital divides; only through incorporating more voices and views – in the development of political and commercial policies – will we be able to create a society that truly benefits all,” said Lynn St. Amour, Chair of the UN Internet Governance Forum (IGF)’s Multistakeholder Advisory Group, and Co-Chair, World Economic Forum System Initiative on Shaping the Future of Digital Economy and Society.



Internet milestone reached: More than 50 per cent go online

MD Staff


For the first time, more than half of the world’s population of nearly 8 billion will be using the internet by the end of 2018, the United Nations telecommunications agency announced on Friday.

International Telecommunication Union (ITU) global and regional estimates for 2018 are “a pointer to the great strides the world is making towards building a more inclusive global information society,” Houlin Zhao, ITU Secretary-General, said.

The record figure of 3.9 billion people, or 51.2 per cent of the world’s population, who will be online by the end of December is an important milestone in the digital revolution, according to the ITU. The agency insists that this increased connectivity will help promote sustainable development everywhere.
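The arithmetic behind the headline figure can be checked directly. Note that the population base of roughly 7.62 billion used below is an assumption inferred from the percentage; the ITU's exact denominator is not given in the article.

```python
# Back-of-the-envelope check of the ITU milestone figure.
online = 3.9e9       # people online by end of 2018, per the ITU
population = 7.62e9  # assumed world population base (not stated in the article)

share = online / population * 100
print(round(share, 1))  # 51.2
```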

The latest figures also spotlight Africa, which shows the strongest rate of growth in internet access, from around two per cent in 2005, to more than 24 per cent of the African population this year.

Europe and the Americas are the regions with the slowest growth rates, though the current figures show that 79.6 per cent and 69.6 per cent are online, respectively.

Overall, said the ITU, “in developed countries, slow and steady growth increased the percentage of population using the Internet, from 51.3 per cent in 2005 to 80.9 per cent in 2018.”

Despite this progress, the ITU has warned that many communities worldwide still do not use the internet, particularly women and girls. The statistics show that older people also disproportionately remain offline, as do those with disabilities, indigenous populations and some people living in the world’s poorest places.

In a bid to reduce inequalities, the agency is calling for more infrastructure investment from the public and private sectors, and for a focus on ensuring that access remains affordable for all.

“We must encourage more investment from the public and private sectors and create a good environment to attract investments, and support technology and business innovation so that the digital revolution leaves no one offline,” said Mr. Zhao.



Utilizing Artificial Intelligence for Environmental Sustainability


Human development is becoming increasingly contingent on the surrounding natural environment, and may be constrained by its future deterioration. Man-made problems such as population growth, urbanization and industrialisation, of which our planet is a victim in this century, have forced society to consider whether human beings are changing the very conditions essential to life on Earth. Older technologies have played only a meagre role in the planning, prediction, supervision and control of environmental processes at different scales and over various time spans. An effective environmental protection policy is largely dependent on the quality of available information and on the use of contemporary technologies such as artificial intelligence (AI), deep learning and data analytics to make the right decision at the right time. This convergence can help AI move from in vitro (in research labs) to in vivo (in everyday lives).

The global environment is in bad shape. Natural disasters around the world are happening at an alarming rate: we have witnessed earthquakes, wildfires and cyclones that cause mass flooding and property damage. Around twenty per cent of species currently face extinction, and that number could rise to 50 per cent by 2100. And even if all the world's economies keep their Paris climate pledges, average global temperatures are predicted to be 3˚C higher by 2100 than in pre-industrial times, an irreversible environmental catastrophe. There are reports which suggest that the recent wildfires in California, United States, and the floods in Kerala, India, could have been mitigated effectively with proper supervision and planning. Here lies the role of AI.

AI is considered to be the most dynamic game-changer in the global economy. According to a World Economic Forum report, Harnessing Artificial Intelligence for the Earth, AI refers to computer systems that “can sense their environment, think, learn, and act in response to what they perceive and their programmed purposes.” AI has helped environmental researchers achieve almost 90 per cent accuracy in spotting climate-change phenomena such as tropical cyclones, weather fronts, tidal changes and atmospheric rivers, which can cause heavy precipitation and are often impossible for humans to identify on their own. In India, AI has helped farmers achieve 30 per cent higher yields per hectare by providing information on preparing the land, applying fertilizer and choosing sowing dates, as reported by the Government of India in 2018. In Norway, AI has penetrated the field of policy-making and helped create a flexible and autonomous electric grid that integrates more renewable energy.

The long list of technology and economy shapers who believe that artificial intelligence, often encompassing machine learning and deep learning, is a “game changer” for climate change and environmental issues includes Microsoft, Google, IBM and Tesla, among others. Microsoft’s AI for Earth program has committed $50 million over five years to develop and test novel applications of AI. In China, IBM’s Green Horizon project is using an AI system that can forecast air pollution, track pollution sources and develop potential strategies and solutions to tackle it. For instance, data analysis can be used to determine whether it would be more effective to restrict carbon output or to close certain power plants in order to reduce pollution in a particular zone. The Ocean Data Alliance is developing a machine learning system to provide data from satellites and ocean exploration so that decision-makers can monitor shipping, ocean mining, fishing, coral bleaching and outbreaks of marine disease. Modern technologies such as artificial intelligence, geographic information system tools and movement detectors are revamping the way wildlife reserves and conservation bodies work across India. AI can also help predict the spread of invasive species, keep track of marine litter and measure water pollution levels.

The 21st century is the age of data, and accuracy is key: with real-time data, decision-makers and authorities will be able to respond to problems more quickly. Considering the global evolution of AI and its applications, it is predicted that by 2030 AI will add up to USD 15.7 trillion to the global economy, which is more than the present output of China and India combined. The United Nations recognizes that AI has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people. The UN Artificial Intelligence Summit held in Geneva in 2017 suggested refocusing the use of this technology on achieving the sustainable development goals and on assisting global economies to eliminate poverty, conserve natural resources and protect the environment.

Countries and civil societies develop impressive AI application systems with diverse features, but sometimes these systems do not take into consideration the good of individuals and society. It is therefore important to develop systems that can deliver the change required to build a clean, resource-secure and inclusive economy, enabled by technology and supported by public policy and investment. Many industry giants, such as Microsoft, Google and Tesla, while pushing the parameters of human innovation, have made productive efforts to develop “Earth-friendly” or “eco-friendly” AI mechanisms. For instance, Google’s DeepMind AI has helped the organization curb its data centre energy usage by 40 per cent, making its data centres more energy efficient and reducing overall greenhouse gas emissions. AI innovation will also be fundamental to the attainment of the United Nations Sustainable Development Goals (SDGs) and will promote the resolution of humanity’s grand challenges by capitalizing on the unparalleled quantities of data now being generated on sentiment, behaviour, human health, migration and more.

For any country to benefit fully from the AI revolution, it must adopt a deliberate policy to drive AI innovation and proliferation in the sectors affecting climate change. With powerful economies making rapid progress in AI-based research, it is imperative that the world looks at AI as a critical element of environmental sustainability. These recent advances in AI are a wake-up call to policymakers as our climate comes under increasing strain. Aiming for sustainability is this generation’s opportunity. AI and other Fourth Industrial Revolution ideas are the new innovative solutions that can revolutionize environmental protection measures.



Copyright © 2018 Modern Diplomacy