
The Ethical and Legal Issues of Artificial Intelligence


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence are becoming increasingly autonomous: the tasks they can perform are growing more complex, their potential impact on the world is expanding, and the ability of humans to understand, predict and control their functioning is diminishing. Most people underestimate the real level of automation of these systems, which can learn from their own experience and perform actions beyond the scope of those intended by their creators. This gives rise to a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley going down the railway lines. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?

The trolley problem. Source: Wikimedia.org

There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise, for example, if a self-driving vehicle is travelling along a road and an accident is unavoidable. The question then arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. The Massachusetts Institute of Technology has created a special website, the Moral Machine, that deals with this very issue: users can test out various scenarios on themselves and decide which courses of action would be the most worthwhile.
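To make the dilemma concrete, the policy choice can be sketched in a few lines of code. This is a purely illustrative toy model – the manoeuvres, casualty figures and policy names are all invented, and no real vehicle decides this way:

```python
# Toy model of an unavoidable-accident decision.
# All manoeuvres, numbers and policy names are invented for illustration.

# Expected casualties caused by each available manoeuvre.
outcomes = {
    "stay_on_course": {"passengers": 0, "pedestrians": 2},
    "swerve":         {"passengers": 1, "pedestrians": 0},
}

def utilitarian(casualties):
    # Minimise total expected casualties, whoever they are.
    return casualties["passengers"] + casualties["pedestrians"]

def passenger_first(casualties):
    # Passenger harm dominates; pedestrian harm only breaks ties.
    return (casualties["passengers"], casualties["pedestrians"])

for name, policy in [("utilitarian", utilitarian),
                     ("passenger_first", passenger_first)]:
    choice = min(outcomes, key=lambda m: policy(outcomes[m]))
    print(f"{name}: {choice}")
# utilitarian: swerve (one casualty instead of two)
# passenger_first: stay_on_course (no passenger harm)
```

The ethical question is precisely which of these scoring functions – if any – should be written into the software, and by whom.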

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding they are, how useful they are to society, and so on. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, which complicates the task of determining responsibility. These characteristics pose problems of predictability and of systems acting independently while no one can be held responsible for them [3].

There are numerous options in terms of regulation, including regulation based on existing norms and standards. For example, technologies that use artificial intelligence could be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that govern a special kind of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules of property are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility therefore comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the person or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals to apply the legislation on animals by analogy have been made [4], although such analogies are of limited use. First, the application of legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since the rules governing wild animals are more stringent [5]. The question arises here, however, of where to draw the line, given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies, since creators and inventors would face unexpected liability risks.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that show no explicit signs of the ability to do anything are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy with legal entities, however, is problematic, as the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual performing the illegal action on behalf of the legal entity is identified [10]. The actions of artificial intelligence-based systems, by contrast, will not necessarily be traceable to the actions of an individual.

Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure or the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where the technologies matter most for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence on the basis of information obtained from an algorithm predicting the likelihood of repeat offences being committed. The defendant’s appeal against the use of an algorithm in the sentencing process was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.
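How an algorithm “absorbs” bias is easy to illustrate. The toy sketch below (with invented groups and numbers, not the actual system from the US case) trains the simplest possible risk “model” – observed re-arrest rates – on skewed historical records, and the skew comes straight back out as a prediction:

```python
# Toy illustration: a model that learns from skewed historical records
# reproduces the skew. Groups and numbers are entirely invented.

historical_records = [
    # (group, was_rearrested). Suppose group "B" was policed more
    # heavily, so its members were caught more often, even if the
    # underlying behaviour of both groups were identical.
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True),  ("B", True),  ("B", False),
]

def train(records):
    """'Learn' a risk score: the observed re-arrest rate per group."""
    rates = {}
    for group in sorted({g for g, _ in records}):
        results = [r for g, r in records if g == group]
        rates[group] = sum(results) / len(results)
    return rates

print(train(historical_records))
# {'A': 0.25, 'B': 0.75} -- the data's skew has become the "prediction",
# and nothing in the output reveals why without access to the records.
```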

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
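The resolution gives no formula, but the principle that liability should track instructions and autonomy can be sketched as a toy apportionment rule. Everything here – the function, the weights, the residual insurance pool – is an assumption made for illustration, not anything found in the resolution itself:

```python
# Toy sketch of liability "proportionate to the actual level of
# instructions given to the robot and to its degree of autonomy".
# The formula and parameter names are invented for illustration.

def apportion_liability(instruction_share: float, autonomy: float) -> dict:
    """instruction_share: 0..1, fraction of behaviour directly instructed.
    autonomy: 0..1, degree of autonomous adaptation by the robot."""
    instructor = instruction_share * (1 - autonomy)
    # Whatever is not attributable to instructions falls to the strict
    # liability / compulsory insurance / compensation fund side.
    return {"instructing_party": instructor, "insured_pool": 1 - instructor}

print(apportion_liability(instruction_share=0.8, autonomy=0.5))
# {'instructing_party': 0.4, 'insured_pool': 0.6}
```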

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when an animal (a goat, to be precise) was exiled to Siberia for its actions [12].

First published by our partner RIAC

1. See, for example, Edmonds, D. Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
2. Asaro, P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P. and Holland, O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.

Digital Spending Increases, Greater Focus on Digital Strategy Is a Top Need for State Auditors

MD Staff



The 2018 Digital Government Transformation Survey released today by Deloitte and the National Association of State Auditors, Comptrollers and Treasurers (NASACT) reveals that NASACT members are investing more in digital transformation, yet only 35 percent of respondents are satisfied with their organizations’ responses to digital trends – a drop of 29 points from the 2015 survey. Additionally, less than half of respondents stated they have a clear and coherent digital strategy.

“The survey reveals an eagerness for state financial professionals to use digital technologies on par with the private sector,” said R. Kinney Poynter, executive director, NASACT. “Our members want to take advantage of emerging technologies, but clearly impediments to being more digital remain.”

“One clear takeaway from the survey is that those NASACT member organizations who have a clear and coherent digital strategy consider their digital capabilities to be comparable or ahead of the private sector,” said Christina Dorfhuber, principal, Deloitte Consulting LLP, and a government and public services ERP strategy leader. “We also saw how respondents with a digital strategy were more satisfied with their organization’s reaction to new trends and more confident in their organization’s readiness to respond to new ones, demonstrating that much of an organization’s digital prowess hinges on that strategy.”

“The expectations for digital strategies and opportunities are clearly increasing for all organizations, including governments,” said Clark Partridge, state comptroller of Arizona and president-elect of NASACT. “As we expand our understanding, we can appropriately identify opportunities to leverage technology to re-engineer our processes and enhance the capacity of our workforce. The result is a greater capacity to successfully accomplish the work of government and deliver quality outcomes to citizens.”

The survey reveals three key themes:

A digital strategy is important. Most, but not all, respondents reported having a digital strategy and believe that there is more that needs to be done. Those with a digital strategy were more satisfied with their organization’s reaction to digital trends (54 percent versus 18 percent of respondents) and confident in the understanding of digital trends by their leaders (87 percent versus 30 percent).

Investing in automation and cognitive technologies. With more funding, organizations must determine which technologies to invest in. Currently only 11 percent of organizations reported a broad use of automation and cognitive technologies. Increasing these numbers will be critical as more audits are likely to be augmented by these technologies in the coming year (a toy example of the kind of automated check involved is sketched after these themes).

Addressing the digital skills gap. While 65 percent of organizations indicated that training staff would be a key focus, 39 percent of organizations also noted they would augment staff with consultants and contractors. Additionally, only 48 percent of respondents believe their employees have sufficient skills to execute a digital strategy while 43 percent believe that employees have the skills for automation and cognitive technologies.
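To give a concrete flavour of the automation mentioned above, here is a minimal sketch of one classic audit analytic: comparing the leading digits of transaction amounts against Benford’s law. The amounts and the flagging threshold are invented for illustration; real audit analytics are far broader:

```python
import math
from collections import Counter

def benford_report(amounts):
    """Compare observed leading-digit frequencies with Benford's law."""
    leading = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    observed = Counter(leading)
    n = len(leading)
    return {d: (observed.get(d, 0) / n, math.log10(1 + 1 / d))
            for d in range(1, 10)}

# Invented transaction amounts; a real run would use a full ledger.
amounts = [120.50, 190.00, 130.25, 910.00, 110.10, 140.00, 150.75, 1300.00]
for digit, (obs, expected) in benford_report(amounts).items():
    flag = "  <- review" if abs(obs - expected) > 0.2 else ""
    print(f"digit {digit}: observed {obs:.2f}, Benford {expected:.2f}{flag}")
# Digit 1 is heavily over-represented in this sample, so it is flagged.
```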

The report examined the need for more training and a skilled workforce in these emerging technologies to eliminate the skills gap.

“Emerging technologies can have tremendous benefits for state organizations, but preparation is needed,” said William D. Eggers, executive director for Deloitte’s Center for Government Insights. “Public finance leaders looking to capitalize on emerging technologies should devise a roadmap for integrating these technologies into their day-to-day operations.”

The previous survey was conducted in 2015. This year’s survey includes feedback from more than 70 NASACT member offices. A more detailed analysis of the survey can be found here, including data specific to auditors, comptrollers and treasurers.


AI Creating Big Winners in Finance but Others Stand to Lose as Risks Emerge

MD Staff


Artificial intelligence is changing the finance industry, with some early big movers monetizing their investments in back-office AI applications. But as this trend widens, new systemic and security risks may be introduced in the financial system. These are some of the findings of a new World Economic Forum report, The New Physics of Financial Services – How artificial intelligence is transforming the financial ecosystem, prepared in collaboration with Deloitte.

“Big financial institutions are taking a page from the AI book of big tech: They develop AI applications and make them available as a ‘service’ through the cloud,” said Jesse McWaters, AI in Financial Services Project Lead at the World Economic Forum. “It is turning what were historically cost centres into a new source of profitability, and creating a virtuous cycle of self-learning that accelerates their lead.”

The report points to Ping An’s One Connect and BlackRock’s Aladdin platform as prime examples of the trend:

In China, One Connect sells AI-powered services ranging from credit adjudication to instantaneous insurance claims settlement to hundreds of small and mid-sized Chinese banks and is expected to fetch up to $3 billion at public sale

In the US, Aladdin provides sophisticated risk analytics and comprehensive portfolio management tools that leverage machine learning to a range of asset managers and insurers and is expected by BlackRock’s Chief Executive Officer Larry Fink to provide 30% of the firm’s revenues by 2022

The report, which draws on interviews and workshops with hundreds of financial and technology experts, observes that the “size of the prize” driven through these as-a-service offerings and other applications of AI is much larger than that of the more narrow applications that drive efficiency through the automation of human effort.

The report predicts that AI will also accelerate the “race to the bottom” for many products, as price becomes highly comparable via aggregation services and third-party services commoditize back office excellence.

“AI’s role in financial services is often seen narrowly as driving efficiency through the automation of human effort, but much greater value can be driven through more innovative and transformative applications,” said Rob Galaski, Deloitte Global Banking & Capital Markets Consulting Leader.

As such, financial institutions are seeking to build new sources of differentiation on the back of AI, such as on-the-fly product customization and free advisory services built into products.

Canadian lender RBC is providing its automotive dealership clients with sophisticated demand-forecasting tools that complement the existing credit products it provides to these firms

IEX, a young New York-based stock exchange, is exploring the use of machine learning in creating new order types that protect trades from execution during unstable, potentially adverse conditions
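IEX has not published its model, so the following is only a minimal sketch, with an invented heuristic and thresholds, of the general idea: watch the quote, and hold execution while it appears to be in the middle of a rapid one-directional repricing:

```python
from collections import deque

class QuoteStabilityGuard:
    """Toy stand-in for an 'unstable quote' detector. The window size,
    threshold and logic are invented, not IEX's actual model."""

    def __init__(self, window: int = 5, max_run: int = 3):
        self.mids = deque(maxlen=window)   # recent mid-prices
        self.max_run = max_run             # same-direction moves tolerated

    def update(self, bid: float, ask: float) -> None:
        self.mids.append((bid + ask) / 2)

    def safe_to_execute(self) -> bool:
        moves = [b - a for a, b in zip(self.mids, list(self.mids)[1:])]
        down = sum(1 for m in moves if m < 0)
        up = sum(1 for m in moves if m > 0)
        # A long run of same-direction ticks suggests the quote is in
        # the middle of a rapid repricing; hold the order until it settles.
        return max(down, up) < self.max_run if moves else True

guard = QuoteStabilityGuard()
for bid, ask in [(10.00, 10.02), (9.99, 10.01), (9.98, 10.00), (9.97, 9.99)]:
    guard.update(bid, ask)
print(guard.safe_to_execute())  # False: three consecutive down-ticks
```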

The net result for customers will be “self-driving finance” – a customer experience where an individual’s or firm’s finances are effectively running themselves, engaging the client only to act as a trusted adviser on decisions of importance.

“A small business won’t go to a bank for a revolving line of credit,” said Bob Contri, Deloitte Global Financial Services Leader. “It will seek out a liquidity solution that anticipates how their need for growth capital will evolve and provides customized products to meet those needs,” he said.

But the expanding presence of AI in finance doesn’t come without tensions and risks.

First, financial institutions will be drawn closer to big tech since cloud computing is central to most AI strategies. But there is a chance that most of the benefits will escape them.

Second, the report warns that AI will raise new challenges for the financial ecosystem, particularly around regulation. The divergent paths being taken by regulators around the world towards customer data could create a new form of regulatory arbitrage, project participants said.

Finally, the report points to systemic and security risks from creating a more networked finance system, where a few AI databases contain most clients’ information.


Your new digital rights across Europe during summer holidays

MD Staff


This summer, European citizens will enjoy more digital rights than ever before. Following the end of roaming charges across the European Union last year, holidaymakers can now travel with their online TV, film, sports, music or e-book subscriptions at no extra cost. In addition, everyone across Europe can enjoy world-class data protection rules that ensure all Europeans have better control over their personal data.

Andrus Ansip, Vice-President for the Digital Single Market said: “Europeans are already starting to feel the benefits of the Digital Single Market. This summer you will be able to bring your favourite TV programmes and sports matches with you wherever you travel in the EU. By the end of this year, you will also be able to buy festival tickets or rent cars online from all over the EU without being geo-blocked or re-routed.”

Věra Jourová, Commissioner for Justice, Consumers and Gender Equality added: “The digital world offers tremendous opportunities, but also challenges; for example, our personal data is a useful asset for many companies. With the modern data protection rules we have put in place, Europeans have gained control over their data whenever they shop, book their holidays online or just surf the internet.”

Mariya Gabriel, Commissioner for the Digital Economy and Society said: “We are improving the daily life of our citizens, be it the end of roaming charges or a safer online environment. By completing all our digital initiatives we will bring even more positive change to consumers and businesses alike.”

Digital rights already in daily use

Since June 2017, people have been able to use their mobile phones while travelling in the EU just as they would at home, without paying extra charges. Since the EU abolished roaming charges, more than five times the amount of data has been consumed and almost two and a half times more phone calls have been made in the EU and the European Economic Area.

Since April 2018, consumers can access the online content services they have subscribed to in their home country when travelling across the EU, including, among other things, films, series and sports broadcasts (see examples in the factsheet).

Under the new data protection rules, which have been in place across the EU since 25 May 2018, Europeans can safely transfer personal data between service providers such as cloud or email services; everyone now has the right to know if their data has been leaked or hacked, and how their personal data is being collected. Furthermore, with the ‘right to be forgotten’, personal data has to be deleted upon request if there are no legitimate reasons for a company to keep it.
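As a toy sketch of what honouring such a rule can look like on the service side (the class and field names are invented; real GDPR compliance involves much more, such as backups, processors and retention law):

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    email: str
    legal_hold: bool = False   # e.g. data still needed for a tax audit

@dataclass
class UserStore:
    records: dict = field(default_factory=dict)

    def request_erasure(self, user_id: str) -> str:
        """'Right to be forgotten': erase unless a legitimate reason
        to keep the data remains."""
        record = self.records.get(user_id)
        if record is None:
            return "no data held"
        if record.legal_hold:
            return "refused: legitimate ground for retention"
        del self.records[user_id]
        return "erased"

store = UserStore({"u1": UserRecord("u1", "a@example.com"),
                   "u2": UserRecord("u2", "b@example.com", legal_hold=True)})
print(store.request_erasure("u1"))  # erased
print(store.request_erasure("u2"))  # refused: legitimate ground for retention
```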

Finally, with the net neutrality rules applying since spring 2016, every European has access to the open internet, guaranteeing their freedom to choose content, applications, services and information without discrimination.

Coming soon

With some digital rights already in place, there is more to come in the upcoming months. From September, Europeans will increasingly have the right to use their national electronic identification (eID) across the whole EU to access public services.

As of December, everyone will benefit from the free flow of non-personal data, as they will have access to better and more competitive data storage and processing services in the EU, thus complementing the free movement of people, goods, services and capital. Entrepreneurs meanwhile will have the right to decide where in the EU they store and process all types of data.

As of 3 December, Europeans will be able to shop online without unjustified discrimination wherever they are in the EU. They will not have to worry about a website blocking or re-routing them just because they – or their credit card – come from a different country.

As of next year, citizens will be able to compare parcel delivery costs more easily and benefit from more affordable prices for cross-border parcel delivery.

Agreed rules on value added tax for e-commerce will allow entrepreneurs to take care of their cross-border VAT needs in one online portal and in their own language.

With the recently agreed European Electronic Communications Code, Europeans will have the right to switch internet service and telecoms providers in a simpler way. They will also have the right to receive public alerts on their mobile phones in case of an emergency. The new rules will also guarantee better and more affordable connectivity across the EU.

With the updated rules for audiovisual media, Europeans will have the right to a safe online environment that protects them from incitement to violence, hatred, terrorism, child pornography, racism and xenophobia.

Background

The Digital Single Market strategy was proposed by the Commission in May 2015 to make the EU’s single market fit for the digital age – tearing down regulatory walls and moving from 28 national markets to a single one. This has the potential to contribute €415 billion per year to our economy and create hundreds of thousands of new jobs.

Three years later, the strategy is well on its way: 17 legislative proposals have been agreed on, while 12 proposals are still on the table. There is a strong need to complete the regulatory framework for the Digital Single Market. Thanks to this, the value of Europe’s data economy has the potential to top €700 billion by 2020, representing 4% of the EU’s economy.
