The Ethical and Legal Issues of Artificial Intelligence

Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley hurtling down the railway tracks. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?


There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. A special website, the Moral Machine, has been created by the Massachusetts Institute of Technology to deal with this very issue: users can test various scenarios on themselves and decide which courses of action would be the most worthwhile.

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives at Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded to this immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, complicating the task of determining responsibility. These characteristics create problems of predictability and of autonomous action without clear responsibility [3].

There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of ownership, namely animals, since the latter are also capable of autonomous actions. In Russian Law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals on the application of the law on animals have been made [4], although they are somewhat limited. First, the application of legislation on the basis of analogy is unacceptable within the framework of criminal law. Second, these laws have been created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply similar rules to those that regulate the keeping of wild animals, since the rules governing wild animals are more stringent [5]. The question arises here, however, of how to make a separation with regard to the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies due to the unexpected risks of liability for creators and inventors.

Another widespread suggestion is to apply similar norms to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that show no outward signs of agency are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, both in developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can be considered property and what cannot. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether companies (or states) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy of legal entities, however, is problematic, as the concept of legal entity is necessary in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed to be criminally liable if an individual performing the illegal action on behalf of the legal entity can be identified [10]. The actions of artificial intelligence-based systems will not necessarily be traced back to the actions of an individual.

Finally, legal norms on the sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that injury has been inflicted as a result of force majeure circumstances or at the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in the report prepared under the supervision of French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning behind this is to concentrate the country’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where these technologies matter most for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence on the basis of information obtained from an algorithm predicting the likelihood of repeat offences being committed. The defendant’s appeal against the use of an algorithm in the sentencing process was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore not presented. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, and creating an ethics advisory committee, among other measures.
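The mechanism by which a self-learning system absorbs bias from its training data can be illustrated with a deliberately simple sketch. The data, groups and scoring rule below are entirely hypothetical (this is not the actual algorithm used in any real sentencing case): a risk model trained on skewed historical records assigns different scores to two individuals with identical personal records, purely because of the group base rates it has learned.

```python
from collections import defaultdict

# Hypothetical historical records: (group, prior_offences, reoffended).
# The data is skewed: group "B" was policed more heavily, so its recorded
# reoffence rate is inflated relative to actual behaviour.
records = [
    ("A", 1, False), ("A", 2, False), ("A", 1, False), ("A", 3, True),
    ("B", 1, True),  ("B", 2, True),  ("B", 1, False), ("B", 3, True),
]

def train(records):
    """Learn per-group reoffence rates -- a crude 'risk model'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [reoffences, total]
    for group, _, reoffended in records:
        counts[group][0] += int(reoffended)
        counts[group][1] += 1
    return {g: hits / total for g, (hits, total) in counts.items()}

def risk_score(model, group, prior_offences):
    """Blend the learned group base rate with the individual's own record."""
    return 0.5 * model[group] + 0.5 * min(prior_offences / 5, 1.0)

model = train(records)
# Two defendants with identical records receive different scores purely
# because of the group label: the bias in the data becomes the model's output.
print(risk_score(model, "A", 1))  # lower score
print(risk_score(model, "B", 1))  # higher score
```

Nothing in the code mentions prejudice explicitly; the disparity emerges from the statistics of the training data alone, which is precisely why transparency and testability of such algorithms matter.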

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence the attitude towards artificial intelligence and its legal implementation. Therefore, our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when animals (goats, to be specific) were exiled to Siberia for their actions [12].

First published at our partner RIAC

  1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
  2. Asaro, P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P., and Holland, O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
  7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.

Asia Needs a Region-Wide Approach to Harness Fintech’s Full Potential

MD Staff

The importance of a region-wide approach to harnessing fintech’s full potential was emphasized at the High-Level Policy Dialogue: Regional Cooperation to Support Innovation, Inclusion and Stability in Asia on 11 October in Bali, Indonesia.

Asia’s policy makers should strengthen cooperation to harness the potential of new financial technologies for inclusive growth. At the same time, they should work together to ensure they can respond better to the challenges posed by fintech.

New technologies such as mobile banking, big data, and peer-to-peer transfer networks are already extending the reach of financial services to those who were previously unbanked or out of reach, boosting incomes and living standards. Yet fintech also carries risks of cyber fraud, data insecurity, and privacy breaches. Disintermediation of fintech services or concentration of services among a few providers could also pose a risk to financial stability.

These and other issues were discussed at the High-Level Policy Dialogue on Regional Cooperation to Support Innovation, Inclusion, and Stability in Asia, organized by the Asian Development Bank (ADB), Bank Indonesia, and the ASEAN+3 Macroeconomic Research Office (AMRO).

The panel comprised Ms. Neav Chanthana, Deputy Governor of the National Bank of Cambodia; Mr. Diwa Guinigundo, Deputy Governor of Bangko Sentral ng Pilipinas; Ms. Mary Ellen Iskenderian, President and Chief Executive Officer of Women’s World Banking; Mr. Ravi Menon, Managing Director of the Monetary Authority of Singapore; Mr. Takehiko Nakao, President of ADB; Mr. Abdul Rasheed, Deputy Governor of Bank Negara Malaysia; and Mr. Veerathai Santiprabhob, Governor of the Bank of Thailand. Mr. Mirza Adityaswara, Senior Deputy Governor of Bank Indonesia, gave the opening remarks at the conference and Ms. Junhong Chang, Director of AMRO, gave the welcome remarks.

“Rapidly spreading new financial technologies hold huge promise for financial inclusion,” said Mr. Nakao. “We must foster an enabling environment for the technologies to flourish and strengthen regional cooperation to build harmonized regulatory standards and surveillance systems to prevent international money laundering, terrorism financing, and cybercrimes.”

“Technology is an enabler that weaves our economies and financial systems together, transmitting benefits but also risks across borders,” said Ms. Chang. “Given East Asia’s rapid economic growth, understanding and managing the impact of technology in our financial systems is essential for policymakers to maintain financial stability.”

“Asia, including Indonesia, is an ideal place for fintech to flourish,” said Mr. Adityaswara. “In Indonesia’s case, there are more than a quarter of a billion people living on thousands of islands, waiting to be integrated with the new technology; young people eager to enter the future digital world; more than fifty million small and medium-sized enterprises which can’t wait to get on board with e-commerce; a new society driven by a dynamic, democratic middle class which views the digital economy as something as inevitable as evolution.”

Despite Asia’s high economic growth in recent years, the financial sector is still under-developed in some countries. Fewer than 27% of adults in developing Asia have a bank account, well below the global median of 38%. Meanwhile, just 84% of firms have a checking or savings account, on a par with Africa but below Latin America’s 89% and emerging Europe’s 92%.

Financial inclusion could be increased through policies to promote financial innovation, by boosting financial literacy, and by expanding and upgrading digital infrastructure and networks. Regulations to prevent illegal activities, enhance cyber security, and protect consumers’ rights and privacy, would also build confidence in new financial technologies.

Cutting-edge tech a ‘double-edged sword for developing countries’

MD Staff

The latest technological advances, from artificial intelligence to electric cars, can be a “double-edged sword”, says the latest UN World Economic and Social Survey (WESS 2018), released on Monday.

The overriding message of the report is that appropriate, effective policies are essential if so-called “frontier technologies” are to change the world for the better, helping us to achieve the Sustainable Development Goals (SDGs) and address climate change: without good policy, they risk exacerbating existing inequality.

Amongst several positive indicators, WESS 2018 found that the energy sector is becoming more sustainable, with renewable energy technology and efficient energy storage systems giving countries the opportunity to “leapfrog” existing, often fossil fuel-based solutions.

The wellbeing of the most vulnerable is being enhanced through greater access to medicines, and millions in developing countries now have access to low-cost financial services via their mobile phones.

Referring to the report, UN Secretary-General António Guterres said that “good health and longevity, prosperity for all and environmental sustainability are within our reach if we harness the full power of these innovations.”

However, the UN chief warned of the importance of properly managing the use of new technologies, to ensure there is a net benefit to society: the report demonstrates that unmanaged implementation of developments such as artificial intelligence and automation can improve efficiency but also destroy quality jobs.

“Clearly, we need policies that can ensure frontier technologies are not only commercially viable but also equitable and ethical. This will require a rigorous, objective and transparent ongoing assessment, involving all stakeholders,” Mr. Guterres added.

The Survey says that proactive and effective policies can help countries to avoid pitfalls and minimize the economic and social costs of technology-related disruption. It calls for regulation and institutions that promote innovation, and the use of new technologies for sustainable development.

With digital technology frequently crossing borders, international cooperation, the Survey shows, is needed to bring about harmonized standards, greater flexibility in the area of intellectual property rights and ensuring that the market does not remain dominated by a tiny number of extremely powerful companies.

Here, the UN has a vital role to play, by providing an objective assessment of the impact that emerging technologies have on sustainable development outcomes – including their effects on employment, wages and income distribution – and bringing together people, business and organizations from across the world to build strong consensus-led agreements.

Our Trust Deficit with Artificial Intelligence Has Only Just Started

Eleonore Pauwels

“We suffer from a bad case of trust-deficit disorder,” said UN Secretary-General António Guterres in his recent General Assembly speech. His diagnosis is right, and his focus on new technological developments underscores their crucial role in shaping the future global political order. Indeed, artificial intelligence (AI) is poised to deepen the trust deficit across the world.

The Secretary-General, echoing his recently released Strategy on New Technologies, repeatedly referenced rapidly developing fields of technology in his speech, rightly calling for greater cooperation between countries and among stakeholders, as well as for more diversity in the technology sector. His trust-deficit diagnosis reflects the urgent need to build a new social license and develop incentives to ensure that technological innovation, in particular AI, is deployed safely and aligned with the public interest.

However, AI-driven technologies do not easily fit into today’s models of international cooperation, and will in fact tend to undermine rather than enforce global governance mechanisms. Looking at three trends in AI, the UN faces an enormous set of interrelated challenges.

AI and Reality

First, AI is a potentially dominating technology whose powerful implications – both positive and negative – will be increasingly difficult to isolate and contain. Engineers design learning algorithms with a specific set of predictive and optimizing functions that can be used either to empower or to control populations. Without sophisticated fail-safe protocols, the potential for misuse or weaponization of AI is pervasive and can be difficult to anticipate.

Take Deepfake as an example. Sophisticated AI programs can now manipulate sounds, images and videos, creating impersonations that are often impossible to distinguish from the original. Deep-learning algorithms can, with surprising accuracy, read human lips, synthesize speech, and to some extent simulate facial expressions. Once released outside of the lab, such simulations could easily be misused with wide-ranging impacts (indeed, this is already happening at a low level). On the eve of an election, Deepfake videos could falsely portray public officials being involved in money-laundering or human rights abuses; public panic could be sown by videos warning of non-existent epidemics or cyberattacks; forged incidents could potentially lead to international escalation.

The capacity of a range of actors to influence public opinion with misleading simulations could have powerful long-term implications for the UN’s role in peace and security. By eroding the sense of trust and truth between citizens and the state—and indeed amongst states—truly fake news could be deeply corrosive to our global governance system.

AI Reading Us

Second, AI is already connecting and converging with a range of other technologies—including biotech—with significant implications for global security. AI systems around the world are trained to predict various aspects of our daily lives by making sense of massive data sets, such as cities’ traffic patterns, financial markets, consumer behaviour trend data, health records and even our genomes.

These AI technologies are increasingly able to harness our behavioural and biological data in innovative and often manipulative ways, with implications for all of us. For example, the My Friend Cayla smart doll sends voice and emotion data of the children who play with it to the cloud, which led to a US Federal Trade Commission complaint and its ban in Germany. In the US, emotional analysis is already being used in the courtroom to detect remorse in deposition videos. It could soon be part of job interviews to assess candidates’ responses and their fitness for a job.

The ability of AI to intrude upon—and potentially control—private human behaviour has direct implications for the UN’s human rights agenda. New forms of social and bio-control could in fact require a reimagining of the framework currently in place to monitor and implement the Universal Declaration of Human Rights, and will certainly require the multilateral system to better anticipate and understand this quickly emerging field.

AI as a Conflict Theatre

Finally, the ability of AI-driven technologies to influence large populations is of such immediate and overriding value that it is almost certain to be the theatre for future conflicts. There is a very real prospect of a “cyber race” in which powerful nations and large technology platforms enter into open competition for our collective data as the fuel to generate economic, medical and security supremacy across the globe. Forms of “cyber-colonization” are increasingly likely, as powerful states are able to harness AI and biotech together to understand and potentially control other countries’ populations and ecosystems.

Towards Global Governance of AI

Politically, legally and ethically, our societies are not prepared for the deployment of AI. The UN, established many decades before the emergence of these technologies, is in many ways poorly placed to develop the kind of responsible governance that will channel AI’s potential away from these risks and towards our collective safety and wellbeing. In fact, the resurgence of nationalist agendas across the world may point to a dwindling capacity of the multilateral system to play a meaningful role in the global governance of AI. Major corporations and powerful member states may see little value in bringing multilateral approaches to bear on what they consider lucrative and proprietary technologies.

There are, however, some important ways in which the UN can help build the kind of collaborative, transparent networks that may begin to treat our “trust-deficit disorder.” The Secretary-General’s recently launched High-Level Panel on Digital Cooperation is already working to build a collaborative partnership with the private sector and establish a common approach to new technologies. Such an initiative could eventually find ways to reward cooperation over competition, and to put in place common commitments to using AI-driven technologies for the public good.

Perhaps the most important challenge for the UN in this context is one of relevance, of re-establishing a sense of trust in the multilateral system. But if the above trends tell us anything, it is that AI-driven technologies are an issue for every individual and every state, and that without collective, collaborative forms of governance, there is a real risk that they will become a force that undermines global stability.

Copyright © 2018 Modern Diplomacy