Cybersecurity threats will continue to outpace the ability to overcome them unless all stakeholders begin to cooperate. The increasingly networked, digitized, and connected world is vulnerable to cyber-threats that can only be addressed by the combined capabilities of the public and private sectors, according to a new report by the World Economic Forum in collaboration with The Boston Consulting Group (BCG). Cyber Resilience: Playbook for Public-Private Collaboration is a tool to facilitate the capacity-building, policies and processes necessary to support collaboration, safeguard cyberspace and strengthen cyber-resilience.
“We need to recognize cybersecurity as a public good and move beyond the polarizing rhetoric of the current security debate. Only through collective action can we hope to meet the global challenge of cybersecurity,” said Daniel Dobrygowski, Project Lead for Cyber Resilience at the World Economic Forum.
Working collaboratively in the cybersecurity space is difficult. Cyber-threats are complex, dynamic and increasingly personal as technology saturates our economy and society. Addressing these threats requires dialogue across industries and competencies, and on subjects from the technical to the ethical. Currently, dialogue between leaders in the public and private sectors is often off-target and at cross purposes. Policy implementation also varies by national context: every country has its own unique capabilities, vulnerabilities and priorities.
“There is no simple, elegant policy solution or silver bullet here. The iterative progress and feedback loop used in software development should be a model for improving policy,” said Walter Bohmayr, Global Leader of Cybersecurity at BCG.
The Cyber Resilience: Playbook for Public-Private Collaboration report helps leaders develop a baseline understanding of the key issues and pros and cons of different policy positions on cybersecurity. The policy models discussed in detail include Zero-Days, Vulnerability Liability, Botnet Disruption, Encryption, and 10 others.
In connecting norms and values to policy, the report encourages all actors to move past absolute and rigid positions towards more nuanced discussions aimed at solving key challenges, and presents the implications of policy choices on five key values: security, privacy, economic value, accountability and fairness. Cyber-resilience will continue to be a top-of-mind topic for decision-makers, and the Forum intends to continue leading future efforts in this space through its new Global Centre for Cybersecurity, which will be presented at the Annual Meeting in Davos.
Djibouti Launches Digital Transformation to Improve Services to Citizens
The World Bank announced today new support for Djibouti’s ongoing efforts to leverage digital technology to bring government closer to citizens and improve the impact, transparency and efficiency of its public administration. With a US$15 million credit from IDA, the World Bank’s fund for the poorest countries, the new project will support the rollout of digital systems to make it easier for citizens to access services, and more efficient tax and customs administration to boost government revenues.
The four-year Public Administration Modernization Project will help the government implement the reforms, establish the legal framework and adopt the technologies necessary for digital transformation. A principal goal will be to unify the current variety of social registries into a single, integrated national identity system (e-ID) which citizens can use to access all public services. By the end of the project, the aim is to enroll half the population in the e-ID system, with women, who are significantly underrepresented in current identity systems, representing half of the enrolled.
“Djibouti has heard the call for improved services, and is committed to using the tools of e-government to respond to it,” said Ilyas Moussa, Djibouti’s Minister of Economy and Finance in charge of Industry. “Working in partnership with the World Bank, we have developed a strategy for modernizing our public administration and reaping the benefits of greater transparency, inclusion and efficiency offered by digital technology.”
Along with supporting the publication of and access to available services through the government’s portal, the project will fund the piloting of a Citizen Service Center (CSC). The CSC will offer broadband connections and function as a one-stop shop for information about available services and how to access them. Citizens will be consulted on the design of the CSC to ensure it is accessible to vulnerable populations such as women, people with disabilities and those in rural areas.
“Djibouti is putting its citizens at the heart of its digital transformation,” said Dr. Asad Alam, World Bank Country Director for Egypt, Yemen and Djibouti. “Giving citizens access to information, and the tools for holding government accountable are critical steps toward improving public services, and are central goals of the public administration modernization project.”
The project will also support the use of digital technology to increase the efficiency of tax and customs administration. The development of e-Tax and e-Customs will promote fairness and predictability, while mobilizing domestic revenues.
“Digital systems remove the need for physical interactions between citizens and officials, which can often be an opportunity for corruption. The digital transformation of key administrations allows the government to raise revenues while investing in accessibility, fairness and efficiency,” said Robert Yungu, World Bank Senior Public Sector Specialist and co-Task Team Leader for the project, along with World Bank Information and Communications Technology Specialist, Axel Rifon-Perez.
The World Bank’s portfolio in Djibouti consists of nine IDA-funded projects totaling US$105 million. The portfolio is focused on social safety nets, energy, rural community development, urban poverty reduction, health, education, governance and private sector development, with particular emphasis on women and youth.
A European approach on Artificial Intelligence
The EU Commission is proposing a European approach to make the most out of the opportunities offered by artificial intelligence (AI), while addressing the new challenges AI brings. Building on European values, the Commission is proposing a three-pronged approach: increasing public and private investments; preparing for socio-economic changes brought about by AI; and ensuring an appropriate ethical and legal framework.
Boosting the EU’s technological and industrial capacity and AI uptake across the economy
What kind of challenges can AI address? What kind of AI projects will the EU fund?
AI helps us solve many societal challenges from helping doctors make faster and more accurate medical diagnoses to assisting farmers in using fewer pesticides for their crops. It also helps public administrations to provide tailor-made responses to citizens and to decrease the number of traffic accidents. AI can help fight climate change or anticipate cybersecurity threats. The Commission will fund projects to support the use of AI in many applications, from health to transport, and to digitise industry. EU funding will also support projects to improve the performance of AI technology (e.g. the quality of speech recognition).
The Commission will support fundamental research, and also help bring more innovations to the market through the European Innovation Council pilot. Additionally, the Commission will support Member States’ efforts to jointly establish AI research excellence centres across Europe. The goal is to encourage networking and collaboration between the centres, including the exchange of researchers and joint research projects.
The Commission will also support the uptake of AI across Europe, with a toolbox for potential users, focusing on small and medium-sized enterprises, non-tech companies and public administrations. The set of measures will include an EU ‘AI-on-demand platform’ giving advice and easy access to the latest algorithms and expertise; a network of AI-focused Digital Innovation Hubs facilitating testing and experimentation; and industrial data platforms offering high quality datasets. Several priorities have also been identified for the post-2020 multiannual financial framework (such as increased support in fields such as explainable AI to develop AI systems in a way which allows humans to understand the basis of their action or AI systems which need less data).
How will the European Fund for Strategic Investments (EFSI) help companies to adopt AI and when?
The European Fund for Strategic Investments will support the development and the uptake of AI, as part of the wider efforts to promote digitisation. The Commission – together with its strategic partner, the European Investment Bank Group – aims to mobilise more than €500 million in total investments in the period 2018-2020 across a range of key sectors. To this end, a thematic investment platform under the EFSI could be set up. In addition, the European Commission and the European Investment Fund have just launched VentureEU, a €2.1 billion Pan-European Venture Capital Fund-of-Funds programme, to boost investment in innovative start-up and scale-up companies across Europe.
What are Digital Innovation Hubs and how will they contribute to the use of AI?
Digital Innovation Hubs are local ecosystems that help companies in their vicinity (especially small and medium-sized enterprises) to take advantage of digital opportunities. They offer expertise on technologies, testing, skills, business models, finance, market intelligence and networking. For example, a small company that produces metal parts for the automotive industry could consult the regional hub and ask for advice on how to improve the manufacturing process with AI. Experts from the hub would then visit the factory, analyse the production process, consult with other AI experts in the network of hubs, make a proposal and then implement it. These activities would be partially financed with EU money.
Preparing for socioeconomic changes
What is the Blueprint for Sectoral cooperation on Skills? Which sectors are targeted?
Europeans should have every opportunity to acquire the skills and knowledge they need and to master new technology. National schemes are essential for providing such up-skilling and training. They can benefit from support by the European Structural and Investment Funds (supporting skills development with €27 billion over the period 2014-2020, out of which the European Social Fund invests €2.3 billion specifically in digital skills) and should also benefit from support from the private sector.
The Blueprint for Sectoral cooperation on Skills identifies the skills needed and the gaps in a sector, and connects partners that can help address those needs by developing a common European strategy and curricula. Five sectors were chosen to pilot the Blueprint in 2017 (automotive, maritime technology, space/geo information, textile/leather/clothing/footwear and tourism) and six other sectors have been recently added (additive manufacturing, construction, maritime shipping, paper-based value chain, renewable energy and green technologies and steel industry) with EU funding support of close to €50 million.
What is the Digital Opportunity Traineeship in advanced digital skills for students and recent graduates? How will it support AI?
The Digital Opportunity traineeship initiative will provide cross-border traineeships for up to 6,000 students and recent graduates as of summer 2018. It will give students of all disciplines the opportunity to get hands-on digital experience within companies in fields where there is a skills gap, and to strengthen their ICT skills in areas such as AI.
In addition to the Digital Opportunity traineeships, the Commission asked all Member States to develop national digital skills strategies by mid-2017 and to set up national coalitions to support their implementation. National Coalitions bring together ICT and ICT-intensive companies, education and training providers, education and employment ministries, public and private employment services, associations, non-profit organisations and social partners, who all develop measures to bring digital skills to all levels of society. Through the Digital Skills and Jobs Coalition the Commission will encourage business-education partnerships for AI.
The European Institute of Innovation & Technology also designs specific programmes at Master and PhD levels to address needs arising from the digital sector and digital transformation. The programmes combine in-depth technical skills with strong innovation and entrepreneurial components. They develop skills linked to data collection techniques, data analysis methods, computer science, electronic engineering, deep learning and image recognition. These are all skills needed in areas of AI applications such as self-driving cars and robotics and image/video identification with applications in security and safety.
Ensuring an appropriate ethical and legal framework
How is the Commission encouraging the transparency of algorithms?
Algorithms are behind more and more decisions that affect our everyday lives, such as access to universities, getting a loan, or the selection or filtering of information; transparency is therefore crucial. In several areas, there are already EU rules for algorithmic decisions. Examples include automated decisions based on personal data (General Data Protection Regulation, GDPR) and high-frequency trading on the stock market (Markets in Financial Instruments Directive, MiFID II).
Algorithmic transparency will be a topic addressed in the AI ethics guidelines to be developed by the end of the year. The AI ethics guidelines will build on work from various relevant initiatives such as the Algorithmic Awareness Building Project which will address issues related to algorithmic transparency, accountability and fairness.
Algorithmic transparency is not about disclosure of source code as such. It can take different forms, depending on the situation, including meaningful explanation (as required in GDPR), or reporting to the competent authorities (as required in MiFID II).
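The notion of "meaningful explanation" rather than source-code disclosure can be made concrete with a small sketch. The following is a hypothetical, illustrative example (the field names and thresholds are invented, not drawn from any regulation or real lender): a rule-based decision function that returns both its outcome and the human-readable reasons behind it, so the basis of the decision can be reported to the data subject without revealing any code.

```python
# Illustrative sketch of a transparent, rule-based credit decision that can
# produce a "meaningful explanation" of the kind the GDPR contemplates.
# All criteria, thresholds and field names below are hypothetical.

def decide_loan(applicant: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons) so the basis of the decision can be reported."""
    reasons = []
    if applicant["income"] < 20_000:
        reasons.append("annual income below the 20,000 threshold")
    if applicant["debt_ratio"] > 0.4:
        reasons.append("debt-to-income ratio above 40%")
    if applicant["missed_payments"] > 2:
        reasons.append("more than two missed payments on record")
    approved = not reasons
    if approved:
        reasons.append("all criteria satisfied")
    return approved, reasons

approved, reasons = decide_loan(
    {"income": 18_000, "debt_ratio": 0.5, "missed_payments": 0}
)
print("approved:", approved)
for r in reasons:
    print("-", r)
```

The point of the sketch is structural: because every rule that fires is recorded, the explanation given to the applicant is generated from the same logic that produced the decision, which is one way to satisfy a "meaningful explanation" requirement without disclosing proprietary code.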
What is the product liability directive? Why is guidance needed?
The EU has liability rules for defective products. The Product Liability Directive dates from 1985 and strikes a careful balance between protecting consumers and encouraging businesses to market innovative products. The Directive covers a broad range of products and possible scenarios.
In principle, if AI is integrated into a product and a defect in that product can be proven to have caused material damage to a person, the producer will be liable to pay compensation.
The actual cause of the events that led to the damage or incident is decisive for the attribution of liability. The Commission plans to issue interpretative guidance clarifying concepts of the Directive in view of the new technologies, building on a first assessment of liability for emerging digital technologies published today.
How does the General Data Protection Regulation apply to AI?
The General Data Protection Regulation (GDPR) ensures a high standard of personal data protection, including the principles of data protection by design and by default. It has provisions on decision-making based solely on automated processing, including profiling (AI-based systems). In such cases, data subjects have the right to be provided with meaningful information about the logic involved in the decision.
The GDPR also gives individuals the right not to be subject to decisions based solely on automated processing (except in certain situations), such as the automatic refusal of an online credit application or e-recruiting practices without any human intervention. Such processing includes profiling: any form of automated processing of personal data that evaluates personal aspects of a natural person (AI-based systems), in particular to analyse or predict aspects of the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, or location or movements, where the decision produces legal effects concerning him or her or similarly significantly affects him or her.
What will the ethics guidelines be about? What role will the AI Alliance play?
Draft AI ethics guidelines will be developed on the basis of the EU’s Charter of Fundamental Rights, following a large consultation of stakeholders within the AI Alliance. The draft guidelines will build on the statement published by the European Group of Ethics in Science and New Technologies. They will address issues such as the future of work, fairness, safety, social inclusion, algorithmic transparency, and more broadly, will examine the impact on fundamental rights, including privacy, dignity, consumer protection and non-discrimination.
Given the scale of the challenge associated with AI, the full participation of all actors including businesses, academics, consumer organisations, trade unions, policy makers and representatives of civil society is essential. This is why the Commission wants to bring together a broad community of stakeholders around AI-relevant questions under the European AI Alliance. The Alliance will be set up by July 2018, and AI ethics guidelines will be published by the end of the year.
The Ethical and Legal Issues of Artificial Intelligence
Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.
Ethics and Artificial Intelligence
There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley going down the railway lines. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that track. Do you pull the lever or not?
There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made, and different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even when presented with a more complicated variation of the trolley problem.
As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. A special website created by the Massachusetts Institute of Technology deals with this very issue: users can test out various scenarios themselves and decide which courses of action would be the most worthwhile.
Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives at Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded to this immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.
Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates its citizens based on how law-abiding and how useful to society they are. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.
The Main Problems Facing the Law
The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted, and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, complicating the task of determining responsibility. Together, these characteristics, unpredictability and the capacity for independent action without accountability, pose a fundamental challenge to existing legal frameworks.
There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special category of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.
Proposals on the application of the law on animals have been made, although they are somewhat limited. First, the application of legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since those rules are more stringent. The question arises, however, of where to draw the line, given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies because of the unexpected liability risks they create for creators and inventors.
Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities. Since a legal entity is an artificially constructed subject of the law, robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit capacity for action are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity, with all the rights, powers and obligations that this status entails. The law thus transformed the river from a possession or property into a legal entity, expanding the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.
Even if we do not consider the most extreme cases and instead cite ordinary companies as an example, we can say that some legal systems hold legal entities liable under civil and, in certain cases, criminal law. Without determining whether a company (or state) can have free will or intent, or whether it can act deliberately or knowingly, it can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots in order to recognize them as responsible for their actions.
The analogy with legal entities, however, is problematic, because the concept of a legal entity exists in order to carry out justice in a speedy and effective manner. The actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual who performed the illegal action on behalf of the legal entity is identified. The actions of artificial intelligence-based systems, by contrast, will not necessarily be traceable to the actions of an individual.
Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure circumstances or at the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.
National and International Regulation
Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.
I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.
In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations of a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus the potential of France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and on technologies that are important for the public interest.
Seven key proposals are made, one of which is of particular interest for the purposes of this article, namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are generally not disclosed and are, in most cases, trade secrets. However, algorithms can be biased; in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or those transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in the sentencing process was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, defining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.
The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.
The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).
The examples provided in this article thus demonstrate, among other things, how social values influence the attitude towards artificial intelligence and its legal implementation. Therefore, our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.
Due to the specific features of artificial intelligence, suggestions have been put forward that certain systems should bear direct responsibility for their actions. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for what they do. The question remains, however, whether introducing this kind of liability is necessary or desirable (at least at the present stage). This question is also tied to the ethical issues mentioned above. Perhaps making the programmers or users of autonomous systems liable for the actions of those systems would be more effective, but this could slow down innovation. This is why we need to continue searching for the right balance.
In order to find this balance, we need to address a number of questions. For example: What goals are we pursuing in developing artificial intelligence, and how effective will it be? The answers to these questions will help us to avoid situations like the one that arose in Russia in the 17th century, when an animal (specifically, a goat) was exiled to Siberia for its actions.
First published at our partner RIAC
- 1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
- 2. Asaro P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby” // Wheeler M., Husbands P., Holland O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
- 3. Asaro P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
- 4. Arkhipov V., Naumov V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
- 5. Asaro P. The Liability Problem for Autonomous Artificial Agents, p. 193.
- 6. Arkhipov V., Naumov V. Op. cit., p. 164.
- 7. See, for example, Winkler A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. A description is available here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
- 8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
- 9. Brożek B., Jakubiec M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
- 10. Khanna V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
- 11. Hage J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
- 12. Pagallo U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.