Science & Technology

The Ethical and Legal Issues of Artificial Intelligence


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. It raises a number of important ethical issues that bear directly on artificial intelligence. Imagine a runaway trolley hurtling down the tracks. Five people are tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that second track. Do you pull the lever or not?


There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision might have to be made [1], and different social groups tend to give different answers. Buddhist monks, for example, are overwhelmingly willing to sacrifice the life of one person in order to save five, even when presented with more complicated variations of the trolley problem.

As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle is travelling along a road and an accident becomes unavoidable. The question then arises as to whose lives should take priority: those of the passengers, those of the pedestrians, or neither. The Massachusetts Institute of Technology has even created a special website, Moral Machine, that deals with this very issue: users can test various scenarios on themselves and decide which courses of action would be the most worthwhile.

Other questions also arise in this case: What actions are permissible from the legal point of view? What should serve as a basis for such decisions? And who should ultimately be held responsible? Companies and regulators have already begun to address this problem. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice on the basis of a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.
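
To see why regulators balk at criteria-based choices, it helps to make the choice explicit. The following toy sketch is not any manufacturer's actual logic: the scenario format and the weight are invented for illustration, but they show how a "passengers first" policy amounts to a numeric statement about whose harm counts for more.

```python
# A toy sketch, not any manufacturer's actual logic: a hard-coded
# "passengers first" policy expressed as an outcome selector. The
# scenario format and the weight are invented for illustration.

def choose_outcome(outcomes, passenger_weight=2.0):
    """Pick the outcome with the lowest weighted harm. Any weight other
    than 1.0 encodes exactly the kind of criteria-based choice that the
    German ministry says would be illegal."""
    def harm(outcome):
        return (passenger_weight * outcome["passengers_killed"]
                + outcome["pedestrians_killed"])
    return min(outcomes, key=harm)

outcomes = [
    {"action": "stay in lane", "passengers_killed": 1, "pedestrians_killed": 0},
    {"action": "swerve", "passengers_killed": 0, "pedestrians_killed": 1},
]
print(choose_outcome(outcomes)["action"])  # "swerve": the passenger is spared
```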

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens on how law-abiding and how useful to society they are; those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, which complicates the task of determining responsibility. These two characteristics, unpredictability and the capacity for independent action, are precisely what make it so difficult to hold anyone responsible [3].

There are numerous options for regulation, including regulation based on existing norms and standards. For example, technologies that use artificial intelligence could be regulated as objects of copyright or as property. Difficulties arise here, however, once we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that govern a special kind of property, namely animals, since animals are also capable of autonomous action. Under Russian law, the general rules of property apply to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility therefore falls under Article 1064 of the Civil Code: injury inflicted on the person or property of an individual is subject to full compensation by the person who inflicted the damage.

Proposals to apply the legislation on animals by analogy have been made [4], although their reach is limited. First, applying legislation by analogy is impermissible within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect not to cause harm under normal circumstances. In more developed legal systems, there have been calls to apply rules similar to those that govern the keeping of wild animals, since those rules are more stringent [5]. The question then arises, however, of where to draw the line, given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow the introduction of artificial intelligence technologies because of the unexpected liability risks they impose on creators and inventors.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots could be given similar status. The law can be remarkably flexible in granting rights to just about anybody; it can also restrict them. Historically, for example, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that show no outward signs of agency are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in developed and developing countries alike. In 2017, New Zealand passed a law recognizing the Whanganui River as a legal entity, with all the rights, powers and obligations that status entails. The law thus transformed the river from a possession into a legal person, expanding the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we set aside the most extreme cases and take ordinary companies as an example, some legal systems hold legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or a state) has free will or intent, or whether it can act deliberately or knowingly, the law can recognize it as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots in order to recognize them as responsible for their actions.

The analogy with legal entities is problematic, however, because the concept of the legal entity exists in order to administer justice speedily and effectively, and the actions of a legal entity always trace back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is tied to the actions of their employees or representatives. What is more, legal entities are deemed criminally liable only if an individual who performed the illegal action on the entity’s behalf can be identified [10]. The actions of artificial intelligence-based systems will not necessarily be traceable to the actions of any individual.

Finally, the legal norms on sources of increased danger could be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger to the surrounding population (the use of transport vehicles, mechanisms, etc.) are obliged to redress any injury inflicted by the source of increased danger, unless they prove that the injury resulted from force majeure or from the intent of the injured person. The problem lies in identifying which artificial intelligence systems can be deemed sources of increased danger; the issue is similar to the one raised above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, French President Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations of a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where the technologies matter most for the public interest.

Seven key proposals are made, one of which is of particular interest for the purposes of this article: making artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. Algorithms can also be biased: in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or that are passed on to them by their developers, and then make decisions based on those stereotypes. There is already legal precedent here. A defendant in the United States received a lengthy prison sentence partly on the basis of a score from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and were therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.
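
What would "testable and verifiable" look like in practice? One basic audit is to compare a model's error rates across groups. The minimal sketch below uses hypothetical records and a made-up risk flag, purely for illustration; it computes the false positive rate of a recidivism-style score per group, the kind of check a trade-secret defence currently keeps out of reach.

```python
# A minimal sketch of the kind of audit that transparent, testable
# algorithms would permit. The records and the "high risk" flag are
# hypothetical; nothing here reproduces any real scoring system.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` whom the model flagged as high risk."""
    flagged = sum(1 for r in records
                  if r["group"] == group and not r["reoffended"] and r["high_risk"])
    negatives = sum(1 for r in records
                    if r["group"] == group and not r["reoffended"])
    return flagged / negatives if negatives else 0.0

records = [
    {"group": "A", "high_risk": True, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
# A persistent gap between groups is the sort of absorbed bias that
# external auditors could detect if the model were open to testing.
```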

European Union

The resolution on Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union had been established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
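
The resolution does not specify what a registry entry would contain or how proportionate liability would be computed; the following minimal sketch is therefore an illustrative assumption from start to finish, meant only to show how the registry-plus-proportionality idea could be made concrete.

```python
# A minimal sketch of what an entry in the proposed EU registry might
# record. The resolution specifies no schema; every field and the
# apportionment rule below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class RobotRegistryEntry:
    robot_id: str           # hypothetical EU-wide identifier
    manufacturer: str
    autonomy_degree: float  # 0.0 = fully instructed, 1.0 = fully autonomous
    insured: bool           # the resolution floats compulsory insurance

def operator_liability_share(entry: RobotRegistryEntry) -> float:
    """Toy reading of 'liability proportionate to the degree of autonomy':
    the more autonomous the robot, the smaller the operator's share."""
    return 1.0 - entry.autonomy_degree

entry = RobotRegistryEntry("EU-0001", "ExampleBot GmbH", 0.7, insured=True)
print(round(operator_liability_share(entry), 2))  # 0.3 under this toy rule
```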

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward for holding certain systems directly responsible [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, whether introducing this kind of liability is necessary or desirable, at least at the present stage. It is also bound up with the ethical issues discussed above. Perhaps making the programmers or users of autonomous systems liable for the actions of those systems would be more effective; but that could slow down innovation. This is why we need to keep searching for the right balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically, a goat) was exiled to Siberia for its actions [12].

First published at our partner RIAC

1. See, for example: Edmonds, D. Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
2. Asaro, P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P. and Holland, O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
3. Asaro, P. “The Liability Problem for Autonomous Artificial Agents.” AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
4. Arkhipov, V. and Naumov, V. “On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality.” Zakon, 2017, No. 5, p. 167.
5. Asaro, P. “The Liability Problem for Autonomous Artificial Agents,” p. 193.
6. Arkhipov, V. and Naumov, V. Op. cit., p. 164.
7. See, for example: Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. A description is available here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
9. Brożek, B. and Jakubiec, M. “On the Legal Responsibility of Autonomous Machines.” Artificial Intelligence and Law, 2017, No. 25(3), pp. 293–304.
10. Khanna, V.S. “Corporate Criminal Liability: What Purpose Does It Serve?” Harvard Law Review, 1996, No. 109, pp. 1477–1534.
11. Hage, J. “Theoretical Foundations for the Responsibility of Autonomous Agents.” Artificial Intelligence and Law, 2017, No. 25(3), pp. 255–271.
12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.


Science & Technology

Ten Ways the C-Suite Can Protect their Company against Cyberattack

MD Staff


Cyberattacks are one of the top 10 global risks of highest concern for the next decade, with an estimated price tag of $90 trillion if cybersecurity efforts do not keep pace with technological change. While there is abundant guidance in the cybersecurity community, the application of prescribed actions continues to fall short of what is required for an effective defence against cyberattacks. The challenges created by accelerating technological innovation have reached new levels of complexity and scale; today, responsibility for cybersecurity in organizations is no longer one Chief Security Officer’s job, it involves everyone.

The Cybersecurity Guide for Leaders in Today’s Digital World was developed by the World Economic Forum Centre for Cybersecurity and several of its partners to assist the growing number of C-suite executives responsible for setting and implementing the strategy and governance of cybersecurity and resilience. The guide bridges the gap between leaders with and without technical backgrounds. Following almost one year of research, it outlines 10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design.

“With effective cyber-risk management, business executives can achieve smarter, faster and more connected futures, driving business growth,” said Georges De Moura, Head of Industry Solutions, Centre for Cybersecurity, World Economic Forum. “From the steps necessary to think more like a business leader and develop better standards of cyber hygiene, through to the essential elements of crisis management, the report offers an excellent cybersecurity playbook for leaders in public and private sectors.”

“Practicing good cybersecurity is everyone’s responsibility, even if you don’t have the word ‘security’ in your job title,” said Paige H. Adams, Global Chief Information Security Officer, Zurich Insurance Group. “This report provides a practical guide with ten basic tenets for business leaders to incorporate into their company’s day-to-day operations. Diligent application of these tenets and making them a part of your corporate culture will go a long way toward reducing risk and increasing cyber resilience.”

“The recommendation to foster internal and external partnerships is one of the most important, in my view,” said Sir Rob Wainwright, Senior Cyber Partner, Deloitte. “The dynamic nature of the threat, not least in terms of how it reflects the recent growth of an integrated criminal economy, calls on us to build a better global architecture of cyber cooperation. Such cooperation should include more effective platforms for information sharing within and across industries, releasing the benefits of data integration and analytics to build better levels of threat awareness and response capability for all.”

The Ten Tenets

1. Think Like a Business Leader – Cybersecurity leaders are business leaders first and foremost. They have to position themselves, their teams and their operations as business enablers. Transforming cybersecurity from a support function into a business-enabling function requires a broader view and a stronger communication skill set than was required previously.

2. Foster Internal and External Partnerships – Cybersecurity is a team sport. Today, information security teams need to partner with many internal groups and develop a shared vision, objectives and KPIs to ensure that timelines are met while delivering a highly secure and usable product to customers.

3. Build and Practice Strong Cyber Hygiene – Five core security principles are crucial: a clear understanding of the data supply chain, a strong patching strategy, organization-wide authentication, a secure active directory of contacts, and encrypted critical business processes.

4. Protect Access to Mission-Critical Assets – Not all user access is created equal. It is essential to have strong processes and automated systems in place to ensure appropriate access rights and approval mechanisms (a minimal sketch of such an approval gate follows this list).

5. Protect Your Email Domain Against Phishing – Email is the most common point of entry for cyber attackers, with the median company receiving over 90% of its detected malware via this channel. The guide highlights six ways to protect employees’ emails (one basic domain-level check is sketched after this list).

6. Apply a Zero-Trust Approach to Securing Your Supply Chain – New applications are being developed at unprecedented velocity alongside the adoption of open-source and cloud platforms. Security-by-design practices must be embedded in the full lifecycle of the project (see the dependency-pinning sketch after this list).

7. Prevent, Monitor and Respond to Cyber Threats – The question is not if, but when a significant breach will occur. How well a company manages this inevitability is ultimately what matters. Threat intelligence teams should perform proactive hunts throughout the organization’s infrastructure and keep the detection teams up to date on the latest trends (a toy hunting heuristic is sketched after this list).

8. Develop and Practice a Comprehensive Crisis Management Plan – Many organizations focus primarily on how to prevent and defend while not focusing enough on institutionalizing the playbook of crisis management. The guide outlines 12 vital components any company’s crisis plan should incorporate.

9. Build a Robust Disaster Recovery Plan for Cyberattacks – A disaster recovery and continuity plan must be tailored to security incident scenarios to protect an organization from cyberattacks and to instruct on how to react in case of a data breach. Furthermore, it can reduce the amount of time it takes to identify breaches and restore critical services for the business.

10. Create a Culture of Cybersecurity – Keeping an organization secure is every employee’s job. Tailored training, incentives for employees, building elementary security knowledge and enforcing sanctions on repeat offenders can all aid the development of a culture of cybersecurity.
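
To make tenet 4 concrete, here is a minimal sketch of an access check in which routine reads are granted by role while writes to mission-critical assets require a separately recorded approval. The roles, assets and approval store are hypothetical.

```python
# A minimal sketch of tenet 4: reads are granted by role, while writes to
# mission-critical assets additionally require a recorded approval. The
# roles, assets and approval store are hypothetical.

ROLE_GRANTS = {
    "analyst": {("reports", "read")},
    "operator": {("reports", "read"), ("payments", "read"), ("payments", "write")},
}
CRITICAL_ASSETS = {"payments"}
approvals = {("alice", "payments", "write")}  # previously approved requests

def is_allowed(user, role, asset, action):
    if (asset, action) not in ROLE_GRANTS.get(role, set()):
        return False
    if asset in CRITICAL_ASSETS and action == "write":
        return (user, asset, action) in approvals  # the extra approval gate
    return True

print(is_allowed("alice", "operator", "payments", "write"))  # True
print(is_allowed("bob", "operator", "payments", "write"))    # False: no approval
```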
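
For tenet 5, one verifiable domain-level control is publishing SPF and DMARC records, which let receiving mail servers reject messages that spoof your domain. The sketch below queries for both; it assumes the third-party dnspython package, and example.com stands in for a real domain.

```python
# A minimal sketch for tenet 5: check whether a domain publishes SPF and
# DMARC records, which let receiving servers reject spoofed mail. Assumes
# the third-party dnspython package; example.com is a placeholder.
import dns.resolver

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"
has_spf = any(t.startswith("v=spf1") for t in txt_records(domain))
has_dmarc = any(t.startswith("v=DMARC1") for t in txt_records(f"_dmarc.{domain}"))
print(f"SPF: {has_spf}, DMARC: {has_dmarc}")
```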
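
For tenet 6, a simple zero-trust step in a build pipeline is refusing to use a downloaded dependency unless its SHA-256 digest matches a value pinned when the artifact was reviewed. The file name and pinned digest below are placeholders.

```python
# A minimal sketch for tenet 6: refuse to use a downloaded dependency
# unless its SHA-256 digest matches the value pinned at review time.
# The file name and pinned digest are placeholders.
import hashlib

PINNED = {
    "vendor-lib-1.2.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return PINNED.get(path) == digest

# verify("vendor-lib-1.2.0.tar.gz") returns True only for the exact
# artifact that was reviewed and pinned; anything else is rejected.
```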
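
And for tenet 7, a proactive hunt can start from heuristics as plain as flagging source addresses with bursts of failed logins. The log format and threshold here are invented for illustration.

```python
# A minimal sketch for tenet 7: flag source addresses with a burst of
# failed logins in an authentication log. Log format and threshold are
# invented for illustration.
from collections import Counter

log = [
    ("10.0.0.5", "FAIL"), ("10.0.0.5", "FAIL"), ("10.0.0.5", "FAIL"),
    ("10.0.0.5", "FAIL"), ("10.0.0.8", "FAIL"), ("10.0.0.9", "OK"),
]
THRESHOLD = 3

failures = Counter(src for src, outcome in log if outcome == "FAIL")
suspects = [src for src, n in failures.items() if n >= THRESHOLD]
print(suspects)  # ['10.0.0.5'] -> hand to the detection team for triage
```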

In the Fourth Industrial Revolution, all businesses are undergoing a transformative digitalization of their industries that will open new markets. Cybersecurity leaders need to take a stronger and more strategic leadership role. Inherent in this new role is the imperative to move beyond compliance monitoring and enforcement.


Science & Technology

Moving First on AI Has Competitive Advantages and Risks

MD Staff


Financial institutions that implement AI early have the most to gain from its use, but also face the largest risks. The often-opaque nature of AI decisions and related concerns of algorithmic bias, fiduciary duty, uncertainty, and more have left implementation of the most cutting-edge AI uses at a standstill. However, a newly released report from the World Economic Forum, Navigating Uncharted Waters, shows how financial services firms and regulators can overcome these risks.

Using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services firms a competitive edge over their peers and new market entrants.

“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” says Matthew Blake, Head of Financial Services, World Economic Forum. “This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”

Across several dimensions, AI introduces new complexities to age-old challenges in the financial services industry, and the governance frameworks of the past will not adequately address these new concerns.

Explaining AI decisions

Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain. This uncertainty has left the implementation of cutting-edge AI tools at a standstill. The Forum offers a solution: evolve past “one-size-fits-all” governance ideas to specific transparency requirements that consider the AI use case in question.

For example, it is important to clearly and simply explain why a customer was rejected for a loan, which can significantly impact their life. It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy is more important than transparency, as the ability of this AI application to create harm is limited.
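
As an illustration of what a use-case-appropriate explanation might look like, the sketch below uses an interpretable linear score and reports the features that pulled an application below the approval threshold as plain “reason codes”. The weights, applicant data and threshold are all hypothetical.

```python
# A sketch of a use-case-appropriate explanation: an interpretable linear
# score whose largest negative contributions become the "reason codes"
# given to a rejected applicant. Weights, data and threshold are hypothetical.

weights = {"income": 0.8, "debt_ratio": -1.5, "missed_payments": -2.0}
applicant = {"income": 0.4, "debt_ratio": 0.9, "missed_payments": 1.0}
THRESHOLD = 0.0

score = sum(weights[f] * applicant[f] for f in weights)

if score < THRESHOLD:
    # Rank features by how much they pulled the score down.
    contributions = sorted((weights[f] * applicant[f], f) for f in weights)
    reasons = [feature for value, feature in contributions if value < 0][:2]
    print("Rejected. Main factors:", reasons)
```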

Beyond “explainability”, the report explores new challenges surrounding bias and fairness, systemic risk, fiduciary duty, and collusion as they relate to the use of AI.

Bias and fairness

Algorithmic bias is another top concern for financial institutions, regulators and customers surrounding the use of AI in financial services. AI’s unique ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. Despite these risks, AI also presents an opportunity to decrease unfair discrimination or exclusion, for example by analyzing alternative data that can be used to assess “thin-file” customers whom traditional systems cannot evaluate due to a lack of information.
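
One simple way to surface such bias before it goes undetected is to monitor approval rates across customer groups. The sketch below borrows the “four-fifths rule” from US employment-discrimination practice purely as an illustrative threshold; the decisions are hypothetical.

```python
# A sketch of a routine bias check: compare approval rates across groups.
# The "four-fifths rule" threshold is borrowed from US employment practice
# purely as an illustration; the decisions are hypothetical.

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = approval_rate("B") / approval_rate("A")
print(f"adverse impact ratio: {ratio:.2f}")  # below 0.80 would warrant review
```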

Systemic risk

The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk. As the volume and velocity of interactions grow through automated agents, emerging risks may become increasingly difficult to detect, spreading across financial institutions, fintechs, large technology companies and other market participants. These new dynamics will require supervisory authorities to reinvent themselves as hubs of system-wide intelligence, using AI themselves to supervise AI systems.

Fiduciary duty

As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held “responsible” for their actions, and, if not, who should be held accountable.

Algorithmic collusion

Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require revisiting the existing legal frameworks.
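
A toy simulation shows how little machinery such tacit coordination requires. In the sketch below, two pricing bots that each respond only to the rival’s last observed price ratchet prices up to a ceiling with no communication at all; the rule and numbers are invented, and the real concern involves learning agents rather than fixed rules.

```python
# A toy illustration of tacit coordination: two pricing bots that see only
# the rival's last price ratchet prices to the ceiling without exchanging
# a single message. The rule and numbers are invented; the real concern
# involves learning agents, not fixed rules.

CEILING = 10  # stand-in for the monopoly price

def respond(my_last, rival_last):
    # If the rival matched or exceeded my price, nudge upward; else match.
    if rival_last >= my_last:
        return min(my_last + 1, CEILING)
    return rival_last

a, b = 3, 3  # both start at a competitive price
for _ in range(12):
    a, b = respond(a, b), respond(b, a)
print(a, b)  # 10 10: supra-competitive prices, no explicit agreement
```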

“Using AI in financial services will require an openness to new ways of safeguarding the ecosystem, different from the tools of the past,” says Rob Galaski, Global Leader, Banking & Capital Markets, Deloitte Consulting. “To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way.”

For each of the concerns described above, the report outlines the key underlying root causes, identifies how the most pressing challenges might be addressed through new tools and governance frameworks, and highlights the opportunities that might be unlocked by doing so.

The report was prepared in collaboration with Deloitte and follows five previous reports on financial innovation. The World Economic Forum will continue its work in Financial Services, with a particular focus on AI’s connections to other emerging technologies in its next phase of research through mid-2020.


Science & Technology

US Blacklist of Chinese Surveillance Companies Creates Supply Chain Confusion


The United States Department of Commerce’s decision to blacklist 28 Chinese public safety organizations and commercial entities hit some of China’s most dominant vendors in the security industry. Of the eight commercial entities added to the blacklist, six are among China’s most successful digital forensics, facial recognition and AI companies. It is the two surveillance manufacturers on the list, however, that could have the most significant impact on the global market at large: Dahua and Hikvision.

Geopolitics aside, Dahua’s and Hikvision’s positions within the overall global digital surveillance market make their blacklisting something of a shock, with the immediate effects raising significant questions among U.S. partners, end users and supply chain partners.

Frost & Sullivan’s research finds that Hikvision and Dahua currently rank second and third in total global sales in the $20.48 billion global surveillance market, and are on a fast track to become the top two vendors among IP surveillance camera manufacturers. Their rapid rise among IP surveillance camera providers came about through aggressive growth pipelines, significant product libraries of high-quality surveillance cameras and new imaging technologies, and low-cost pricing models that give customers greater affordability.

This is also not the first time that these two vendors have found themselves in the crosshairs of the U.S. government. In 2018, the U.S. initiated a ban on the sale and use of Hikvision and Dahua camera equipment within government-owned facilities, including the Department of Defense, military bases, and government-owned buildings. However, the vague language of the ban made it difficult for end users to determine whether they were just banned from new purchases of Dahua or Hikvision cameras or if they needed to completely rip-and-replace existing equipment with another brand. Systems integrators, distributors, and even technology partners themselves remained unsure of how they should handle the ban’s implications, only serving to sow confusion among U.S. customers.

Beyond the confusion over how government end users were to proceed with their Hikvision and Dahua equipment came the realization that both companies held significant customer share among commercial companies throughout the U.S. market. Where, then, was the ban’s line to be drawn for these entities? Were they to comply or not? If so, how? These questions have remained unanswered since 2018.

Hikvision and Dahua each have built a strong presence within the U.S. market, despite the 2018 ban. Both companies are seen as regular participants in industry tradeshows and events, and remain active among industry partners throughout the surveillance ecosystem. Both companies have also attempted to work with the U.S. government to alleviate security concerns and draw clearer guidelines for their sales and distribution partners throughout the country. They even established regional operations centers and headquarters in the country.

While the blacklisting does send a clearer message to end users, integrators and distributors regarding the sale and use of these companies’ technologies, remedies for future action remain unclear. When it comes to legacy Hikvision and Dahua cameras, the onus appears to be on end users and integrators to decide whether rip-and-replace strategies are the best way to comply with government rulings or whether to leave the solutions in place and hope for the best.

The broader global impact of this action remains to be seen. While the 2018 ban prompted talk of similar bans in other regions, none ever materialized; Dahua and Hikvision maintained their strong market positioning, even achieving higher-than-average growth rates over the past year. Blacklisting does, however, send a stronger message to global regulators, so market participants outside the U.S. will have to adopt a wait-and-see posture regarding how, if at all, they may need to prepare their own surveillance equipment supply chains for changes to come.
