Science & Technology

The Ethical and Legal Issues of Artificial Intelligence

Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. It raises a number of important ethical issues that bear directly on artificial intelligence. Imagine a runaway trolley hurtling down the railway line. Five people are tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different track; however, another person is tied to that track. Do you pull the lever or not?

There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise, for example, when a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority: those of the passengers, the pedestrians, or neither. The Massachusetts Institute of Technology has created a special website that deals with this very issue: users can test various scenarios on themselves and decide which courses of action would be the most worthwhile.

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are. Those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, thus complicating the task of determining responsibility. These characteristics pose problems of predictability, and of systems that can act independently while not being held responsible for their actions [3].

There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals on the application of the law on animals have been made [4], although they are somewhat limited. First, the application of legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since those rules are more stringent [5]. The question arises here, however, of how to draw that distinction, given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies because of the unexpected liability risks they create for creators and inventors.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody; it can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit signs of agency are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy with legal entities, however, is problematic, as the concept of the legal entity exists in order to carry out justice speedily and effectively. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are generally deemed criminally liable only if an individual who performed the illegal action on behalf of the legal entity can be identified [10]. The actions of artificial intelligence-based systems will not necessarily be traceable to the actions of an individual.

Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure or through the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, French President Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations of a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus the potential of France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where these technologies matter most for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article: making artificial intelligence more open. It is true that the algorithms used in artificial intelligence are closed and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and were therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibilities of those working in artificial intelligence, creating an ethics advisory committee, and so on.

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality reflects what is important for society and allows a decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of questions. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers will help us prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically, a goat) was exiled to Siberia for its actions [12].

First published at our partner RIAC

  1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
  2. Asaro, P., “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P., and Holland, O. (eds.), The Mechanical Mind in History, Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA. March 21–23, 2016, p. 191.
  4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
  7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.

Is your security compromised by “spy software”? Here is what to know

Spy software, often referred to as spyware, is a set of programs that allows a user or administrator to track or monitor someone’s smart devices (such as a desktop, laptop, or smartphone) from anywhere in the world.

Spyware is a threat not only to businesses but to individual users as well, since it can steal sensitive information and harm any network. It is controversial due to its frequent violation of end users’ privacy. It can attack a user’s device, steal sensitive data (such as bank account or credit card information, or personal identity) or web data, and share it with data firms, advertisers, or external users.

Numerous spyware tools are available online at almost no cost; their ultimate goal is to track and sell user data. Some spyware can install additional software and change the settings on a user’s device, which can be difficult to detect.

Below are the four main types of spyware, each with its own way of tracking and recording user activity:

Tracking cookies: These are the most common type of trackers. They monitor a user’s internet activity, such as searches, downloads, and history, for advertising and selling purposes.

System monitors: This spyware records everything on your device: emails, keystrokes, visited websites, chat-room dialogues, and much more.

Adware: This spyware is used for marketing purposes. It tracks a user’s downloads and browser history in order to suggest or display the same or related products, which can often slow down the device.

Trojan: This is the most malicious type of spyware. It can be used to steal sensitive information such as bank details or identification numbers.

Spyware can attack any operating system, including Windows, Android, and Apple’s. Windows systems have been more prone to attack, but in the past few years Apple’s operating systems have also become vulnerable.

A recent investigation by the Guardian and 16 other media organizations found widespread and continuous abuse of NSO’s Pegasus hacking spyware against government officials, human rights activists, lawyers and journalists worldwide, even though the software was intended for use only against terrorists and criminals.

The research, conducted by Pegasus technical partner Amnesty’s Security Lab, found traces of Pegasus activity on 37 of the 67 examined phones. Of those 37, 34 were iPhones: 23 showed signs of a Pegasus infection, while the remaining 11 showed signs of attempted infection. Only three of the 15 Android phones examined were infected with the Pegasus software.

Attacks like Pegasus may have a short shelf life and are used to target specific individuals. But past evidence has shown that attackers also target large groups of people, often successfully.

Below are the most common ways devices can become infected with spyware:

  • Downloading software or apps from unreliable sources or unofficial app publishers
  • Accepting cookies or pop-ups without reading them
  • Downloading or watching pirated media content online
  • Opening attachments from unfamiliar senders

Spyware can be extremely harmful once a device is infected. The damage can range from short-term device issues (such as a slow system, crashes, or overheating) to long-term financial threats.

Here’s what you can do to protect your devices from spyware:

Reliable antivirus software: First, look for security solutions available on the internet (some are free) and enable the antivirus software. If your system or device is already infected, look for security providers that offer spyware identification and removal.

-For instance, you can install the Mobile Verification Toolkit (MVT), provided by Amnesty International. This toolkit will alert you to the presence of the Pegasus spyware on your device.

-The toolkit scans a backup file of your device for any evidence of infection. It works on both Apple and Android operating systems, but is more accurate for Apple’s.

-You can also download and run Norton Power Eraser, a free virus removal tool.
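As a rough sketch of how the MVT check works in practice: the toolkit is run from the command line against a local backup of the phone. The commands and flags below follow MVT’s documented usage at the time of writing, but versions change, so treat this as illustrative and consult `mvt-ios --help` and Amnesty International’s documentation before relying on it (all paths and the password are placeholders):

```shell
# Install the Mobile Verification Toolkit (MVT) from PyPI.
pip3 install mvt

# Decrypt an encrypted local iPhone backup (made via Finder/iTunes)
# so that MVT can analyse it.
mvt-ios decrypt-backup --password "<backup-password>" \
    --destination ./decrypted ./backup

# Scan the decrypted backup against Amnesty's published Pegasus
# indicators of compromise (a STIX2 file from their investigations repo).
mvt-ios check-backup --iocs pegasus.stix2 --output ./results ./decrypted

# Any matches are written as JSON files in ./results for review.
ls ./results
```

Equivalent `mvt-android` commands exist for Android devices, though, as noted above, results there tend to be less complete.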

Update your system regularly: Set updates to run automatically. Automatic updates not only make it harder for hackers to view your web or device activity, but also eliminate software errors.

Be vigilant about cookie compliance: Cookies that record or track users’ browsing habits and personally identifiable information (PII) are commonly known as adware. Accept cookies only from reliable sites, or download a cookie blocker.

Strong authentication passwords: Enable multi-factor authentication (MFA) wherever possible; where it is not, create a different password for each account, and change each password after a certain period of time.

-Password breaches can still occur despite these precautions. In that case, change your password immediately.

Be cautious with free software: Read the terms and conditions of a software license before accepting it. Free software may cost nothing, but your data could be recorded by it.

Do not open files from unknown or suspicious accounts: Do not open email attachments, or text messages on your phone, from a suspicious, unknown, or untrustworthy source or number.

Conclusion:

Spyware can be extremely dangerous; however, it can be prevented and removed by taking precautions and using a trustworthy antivirus tool. Next-generation technologies can also help detect and remove malicious content. For instance, artificial intelligence can help organizations identify malicious software and continually update its pattern-recognition algorithms to predict future malware attacks.

The implementation of virtual reality and its effects in cognitive warfare

With the increasing use of new technologies in warfare, virtual reality presents an opportunity for the domain of cognitive warfare. Nowadays, cognitive skills are treated as equal to their physical counterparts, and new, innovative training techniques are being standardized. Virtual reality (VR) can be used as a tool to increase the cognitive capabilities of soldiers. As currently understood, VR affects the brain directly: our eyes see one object or one surrounding area, but brain cells perceive and react to it differently. VR has been used extensively in new teaching methods because of the increased probability of improving students’ memory and learning capabilities.

Besides its theoretical teaching approach and improvement of learning, VR can be applied systematically to more practical skills. In medicine, for example, students can take a full lesson on a virtual human being, seeing the body projected in 3D, which could revolutionize the whole field. If VR can be used in the medical field, it could theoretically be used in combat situations, projecting a specific battlefield in VR, increasing the chances of successful engagement and reducing the chance of casualties. Knowing your terrain is as important as knowing your adversary.

The use of VR may also open new domains relating to a person’s physical health. It is argued that VR might provide the ability to manage pain effectively. Since VR can stimulate the visual senses, this approach could be more effective in treating chronic pain, depression, or even PTSD. The idea behind this usage is that the brain itself is already powerful enough, yet when pain overwhelms us, we tend to lose effectiveness in some of our senses, such as vision. Agonizing pain can blur our vision, something we cannot control; unless, of course, we use VR. The process can consist of different sounds and visual aids that trick the mind into thinking it is somewhere that might be the polar opposite of where it actually is. Technically speaking, the mind can do this because it works like a powerful computer: the right stimuli can override our pain receptors and make us think that we are not in such terrible pain.

Although the benefits of VR could be useful for our health, we still need to deal with the health problems that arise when using a VR headset. The brain can become overloaded with new information and new virtual environments. For some people, VR causes a loss of the sense of the real environment, producing nausea or severe headaches. As a result, cognitive psychologists have developed new techniques to address the problem. Technologies have appeared that desaturate colors towards the edge of the headset in order to limit visual confusion. Research also shows that rendering a virtual nose in the headset can prevent motion sickness, mimicking something our brain already does in reality.

However, when it comes to implementing VR for soldiers, one must think of faster and more effective solutions to the problem of the brain’s confusion. Specific pharmaceuticals might be the key. One example is Modafinil, which has been prescribed in the U.S. since 1998 to treat sleep-related conditions. Researchers believe it can produce the same effects as caffeine. The University of Oxford analyzed 24 studies in which participants were asked to complete complex assignments after taking Modafinil, and found that those who took the drug were more accurate, which suggests that it may affect higher cognitive functions.

Although some of its long-term effects are yet to be studied, Modafinil is by far the safest drug that can be used in cognitive contexts. Theoretically speaking, if long exposure to VR can cause headaches and an inability to concentrate, then an appropriate dose of Modafinil could counter those effects. Soldiers, whose cognitive skills are better trained than those of civilians, may be the most suitable group on which to test the full effect of a mix of virtual technology and pharmaceuticals. VR can be a significant military component and a simulation training program, providing new cognitive experiences of foreign and unknown terrains that might be difficult to approach in real life. New opportunities arise every day with these technologies, and for anyone seeking a significant advantage over adversaries in cognitive warfare, VR would provide a useful tool for military decision-making.

Vaccine Equity and Beyond: Intellectual Property Rights Face a Crucial Test

The debate over intellectual property rights (IPRs), particularly patents, and access to medicine is not new. IPRs are considered to drive innovation by protecting the results of investment-intensive R&D, yet arguably also foster inequitable access to affordable medicines.

In a global public health emergency such as the COVID-19 pandemic, where countries face acute shortages of life-saving vaccines, should public health be prioritized over economic gain and the international trade rules designed to protect IPRs?

The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), to which all 164 member states of the World Trade Organization (WTO) are party, establishes minimum standards for protecting different forms of IPRs.

In October 2020, India and South Africa – countries with strong generic drug manufacturing infrastructure – invoked WTO rules to seek a temporary waiver of IPRs (patents, copyrights, trade secrets, and industrial designs) on equipment, drugs, and vaccines related to the “prevention, containment or treatment of COVID-19.” A waiver would mean that countries could locally produce equipment and vaccines without permission from holders of IPRs. This step would serve to eliminate the monopolistic nature of IPRs that give exclusive rights to the holder of IPRs and enable them to impose procedural licensing constraints.

Brazil, Japan, the European Union (EU), and the United States (US) initially rejected the waiver proposal. That stance changed with the rise of new COVID-19 mutations and the associated increase in deaths, with several countries facing a public health crisis due to vaccine supply shortages. The position of many states began shifting in favor of the India-South Africa proposal, which now has the backing of 62 WTO members, with the US declaring support for the intent of the temporary waiver to secure “better access, more manufacturing capability, more shots in arms.” Several international bodies, the World Health Organization (WHO), and the UN Committee on Economic, Social and Cultural Rights have voiced support.

Some countries disagree about the specific IPRs to be waived or the mechanisms by which IPRs should be made available. The EU submitted a proposal to use TRIPS flexibilities such as compulsory licensing, while others advocate for voluntary licensing. The TRIPS Council is conducting meetings to prepare an amended proposal to the General Council (the WTO’s highest-level decision-making body in Geneva) by the end of July 2021.

The crisis in India illustrates the urgency of the situation. India produces and supplies Covishield, licensed by AstraZeneca; and Covaxin, which is yet to be included on the WHO’s Emergency Use Listing (EUL). Due to the devastating public health crisis, India halted its export of vaccines and caused a disruption in the global vaccine supply, even to the COVID-19 Vaccines Global Access (COVAX) program. In the meantime, the world’s poorest nations lack sufficient, critical vaccine supplies.

International law recognizes some flexibility in public health emergencies. An example would be the Doha Declaration on TRIPS and Public Health in 2001, which, while maintaining the commitments, stresses the need for TRIPS to be part of the wider national and international action to address public health problems. Consistent with that, the body of international human rights law, including the International Covenant on Economic, Social and Cultural Rights (ICESCR), protects the right to the highest attainable standard of health.

But as we race against time, the current IPR framework may not allow for the swift response required. The rigorous requirements a vaccine must meet before it is considered safe to use under emergency use authorizations, and the attendant procedural delays, illustrate why IPR waivers on already-approved vaccines are needed. Capitalizing on EUL-approved vaccines with proven efficacy and easing IPR restrictions will aid the timely supply of, and access to, vaccines.

A TRIPS waiver may not solve the global vaccine shortage. In fact, some argue that the shortages are not an inherent flaw in the IP regime, considering other supply chain disruptions that persist, such as the ones disrupting microchips, pipette tips, and furniture. However, given that patent licensing gives a company a monopoly on vaccine commercialization, other companies with manufacturing capacity cannot produce the vaccine to scale up production and meet supply demands.

Nor does a temporary waiver mean that pharmaceutical companies cannot monetize their work. States should work with pharmaceutical companies to set up compensation and insurance schemes that ensure adequate remuneration.

At the College of Law at Hamad Bin Khalifa University, our aim is to address today’s legal challenges with a future-oriented view. We see COVID-19 as a case study in how we respond to imminent and existential threats. As global warming alters the balance of our ecosystem, threats will cascade in a way that is hard to predict. When unpredictable health emergencies emerge, it will be human ingenuity that helps us overcome them. Even the global IP regime, as a legal system that regulates ideas, is being tested, and should be agile enough to respond in time, like the scientists who sprang into action and worked tirelessly to develop the vaccines that will soon bring back a semblance of normal life as we know it.
