Science & Technology

The Ethical and Legal Issues of Artificial Intelligence

Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley hurtling down the railway tracks. Five people are tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, another person is tied to that other track. Do you pull the lever or not?

There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. The Massachusetts Institute of Technology has created a special website devoted to this very issue: users can test out various scenarios for themselves and decide which courses of action would be the most justifiable.
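
To make the dilemma concrete, here is a minimal, purely hypothetical Python sketch of how two competing "ethics policies" for an unavoidable-accident scenario could be compared. The scenarios, policy names and casualty counts are invented for illustration only; they do not describe any real manufacturer's logic.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str             # e.g. "stay on course" or "swerve"
    passengers_harmed: int
    pedestrians_harmed: int

def choose(outcomes, policy):
    """Pick the outcome a given (hypothetical) policy prefers."""
    if policy == "minimize_total_harm":
        key = lambda o: o.passengers_harmed + o.pedestrians_harmed
    elif policy == "protect_passengers":   # the stance attributed to Mercedes in the text
        key = lambda o: (o.passengers_harmed, o.pedestrians_harmed)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return min(outcomes, key=key)

dilemma = [Outcome("stay on course", passengers_harmed=0, pedestrians_harmed=5),
           Outcome("swerve",         passengers_harmed=1, pedestrians_harmed=0)]

for policy in ("minimize_total_harm", "protect_passengers"):
    print(policy, "->", choose(dilemma, policy).action)
```

The two policies disagree on the very same facts, which is precisely why the legal questions in the next paragraph – what may be allowed, on what basis, and who answers for the result – cannot be left to the code alone.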

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are. Those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, which complicates the task of determining responsibility. These characteristics pose problems related to predictability and to the ability to act independently while not being held responsible [3].

There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of ownership, namely animals, since the latter are also capable of autonomous actions. In Russian Law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals to apply the law on animals have been made [4], although they are somewhat limited. First, applying legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect not to cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those governing the keeping of wild animals, since those rules are more stringent [5]. The question arises here, however, of how to draw that line given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies because of the unforeseen liability risks they create for creators and inventors.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit signs of the ability to act are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or piece of property into a legal entity, expanding the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Even without determining whether a company (or a state) has free will or intent, or whether it can act deliberately or knowingly, it can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy with legal entities, however, is problematic, as the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. The actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual who performed the illegal action on behalf of the legal entity can be identified [10]. The actions of artificial intelligence-based systems, by contrast, will not necessarily be traceable to the actions of an individual.

Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) are obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure or the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The strategy targets four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and on areas where these technologies are important for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. Algorithms can, however, be biased: in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore were not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.
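
The kind of audit that transparency makes possible can be illustrated with a small Python sketch. The data below are fabricated toy records, not taken from any real sentencing tool; the point is only that when an algorithm is testable, disparities between groups can be measured at all.

```python
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended) -- invented illustrative records
records = [
    ("group_a", True,  False),
    ("group_a", True,  False),
    ("group_a", False, True),
    ("group_b", False, False),
    ("group_b", True,  True),
    ("group_b", False, False),
]

def false_positive_rate_by_group(records):
    """Share of people who did NOT reoffend but were still flagged high-risk, per group."""
    flagged = defaultdict(int)
    innocent = defaultdict(int)
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            innocent[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {group: flagged[group] / innocent[group] for group in innocent}

print(false_positive_rate_by_group(records))
# A large gap between groups would be evidence of the kind of bias discussed above --
# a check that is impossible when the scoring criteria remain a trade secret.
```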

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
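
As a purely illustrative exercise, the resolution’s idea that liability “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy” can be turned into a toy formula. The weights and the linear split below are assumptions made for the sketch; the resolution itself prescribes no such arithmetic.

```python
def apportion_liability(autonomy: float, instruction_level: float) -> dict:
    """
    autonomy:          0.0 (fully teleoperated) .. 1.0 (fully autonomous)
    instruction_level: 0.0 (no human instructions) .. 1.0 (acting strictly on instructions)
    Returns an indicative split of liability between operator and producer.
    """
    if not (0.0 <= autonomy <= 1.0 and 0.0 <= instruction_level <= 1.0):
        raise ValueError("inputs must lie in [0, 1]")
    operator_share = instruction_level * (1.0 - autonomy)
    producer_share = 1.0 - operator_share
    return {"operator": round(operator_share, 2), "producer": round(producer_share, 2)}

print(apportion_liability(autonomy=0.8, instruction_level=0.5))
# -> {'operator': 0.1, 'producer': 0.9}: the more autonomous the robot and the fewer
# the instructions given, the more responsibility shifts away from the operator.
```

Even this crude model shows why the resolution pairs the proportionality principle with compulsory insurance and a compensation fund: for highly autonomous systems, most of the calculated responsibility ends up far from the person operating the machine.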

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, whether introducing this kind of liability is necessary or desirable (at least at the present stage). It is also bound up with the ethical issues mentioned above. Perhaps making the programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the right balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically a goat) was exiled to Siberia for its actions [12].

First published at our partner RIAC

  1. See, for example, Edmonds D. Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
  2. Asaro P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler M., Husbands P. and Holland O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov V., Naumov V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov V., Naumov V. Op. cit., p. 164.
  7. See, for example, Winkler A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. A description is available here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek B., Jakubiec M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.

Science & Technology

Kissinger and the current situation concerning the development of Artificial Intelligence and the Ukrainian crisis

Kissinger has recently published some reflections on the course of world politics in recent decades, with references to the return of the 20th century conflicts brought to light by the development of new weaponry and strategic scenarios mediated by Artificial Intelligence. Kissinger has also referred to the situation in Ukraine and the equilibria between the United States, Russia and China.

Kissinger has stated that instant communication and the technological revolution have combined to provide new meaning and urgency to two crucial issues that leaders must address:

1) what is essential for national security?

2) what is necessary for peaceful international coexistence?

Although a plethora of empires existed, aspirations for world order were confined by geography and technology to specific regions. This was also true of the Roman and Chinese empires, which encompassed a wide range of societies and cultures. These were regional orders that developed in parallel, each conceiving of itself as a world order.

From the 16th century onwards, the development of technology, medicine and economic and political organisation expanded Europe’s ability to project its power and systems of government around the world. From the mid-17th century, the Westphalian system was based on respect for sovereignty and international law. That system later took root throughout the world and, after the end of traditional colonialism, led to the emergence of States which – largely abandoned, at least formally, by their former motherlands – insisted on defining, and even defying, the rules of the established world order; at least those countries that really did free themselves from imperialist domination, such as the People’s Republic of China, the Democratic People’s Republic of Korea, etc.

Since the end of World War II, mankind has lived in a delicate balance between relative security and legitimacy. In no previous period of history would the consequences of an error in this balance have been more severe or catastrophic. The contemporary age has introduced a level of destructiveness that potentially enables mankind to self-destruct. Advanced systems of mutual destruction were aimed not at pursuing ultimate victory but at deterring attack by others.

This is the reason why, shortly after the Japanese nuclear tragedy of 1945, the deployment of nuclear weapons came to be seen as incalculable in its consequences, restrained only by the assumed reliability of security systems.

For seventy-six years (1946–2022), while advanced weapons grew in power, complexity and accuracy, no country could bring itself to actually use them, even in conflicts with non-nuclear countries. Both the United States of America and the Soviet Union accepted defeat at the hands of non-nuclear countries without resorting to their own most lethal weapons: as in the case of the Korean War, Vietnam and Afghanistan (both the Soviets and the Americans in the latter case).

To this day, such nuclear dilemmas have not disappeared, but have instead changed as more States have developed more refined weapons than the “nuclear bomb” and the essentially bipolar distribution of destructive capabilities of the former Cold War has been replaced by very high-tech options – a topic addressed in my various articles.

Cyber weapons and artificial intelligence applications (such as autonomous weapon systems) greatly complicate the current dangerous war prospects. Unlike nuclear weapons, cyber weapons and artificial intelligence are ubiquitous, relatively inexpensive to develop and easy to use.

Cyber weapons combine the capacity for massive impact with the ability to obscure the attribution of attacks – a crucial advantage now that the attacker is no longer a precise, identifiable actor but a puzzle to be solved.

As we have often pointed out, artificial intelligence can also overcome the need for human operators, and enable weapons to launch themselves based on their own calculations and their ability to choose targets with almost absolute precision and accuracy.

Because the threshold for their use is so low and their destructive ability so great, the use of such weapons – or even their mere threat – can turn a crisis into a war or turn a limited war into a nuclear war through unintentional or uncontrollable escalation. To put it in simple terms, there will no longer be the need to drop the “bomb” first, as it would be downgraded to a weapon of retaliation against possible and not certain enemies. On the contrary, with the help of artificial intelligence, third parties could make sure that the first cyber-attack is attributed to those who have never attacked.

The potential impact of this technology is cataclysmic, which makes the conditions for its use so restrictive that they become unmanageable.

No diplomacy has yet been devised for explicitly threatening its use without the risk of a pre-emptive response. So much so that arms control summits seem to have been overtaken by these uncontrollable novelties, ranging from unmarked drone attacks to cyberattacks launched from the depths of the Net.

Technological developments are currently accompanied by a political transformation. Today we are witnessing the resurgence of rivalry between the great powers, amplified by the spread and advancement of surprising technologies. When in the early 1970s the People’s Republic of China embarked on its re-entry into the international diplomatic system at the initiative of Zhou Enlai and, at the end of that decade, on its full re-entry into the international arena thanks to Deng Xiaoping, its human and economic potential was vast, but its technology and actual power were relatively limited.

Meanwhile, China’s growing economic and strategic capabilities have forced the United States of America to confront – for the first time in its history – a geopolitical competitor whose resources are potentially comparable to its own.

Each side sees itself as a unicum, but in a different way. The United States of America acts on the assumption that its values are universally applicable and will eventually be adopted everywhere. The People’s Republic of China, instead, expects that the uniqueness of its ultra-millennial civilisation and the impressive economic leap forward will inspire other countries to emulate it to break free from imperialist domination and show respect for Chinese priorities.

Both the US “manifest destiny” missionary impulse and the Chinese sense of grandeur and cultural eminence – of China as such, including Taiwan – imply a kind of mutual fear of subordination. Owing to the nature of their economies and high technology, each country impinges on what the other has so far considered its core interests.

In the 21st century China seems to have embarked on playing an international role to which it considers itself entitled by its achievements over the millennia. The United States of America, on the other hand, is taking action to project power, purpose, and diplomacy around the world to maintain a global equilibrium established in its post-war experience, responding to tangible and imagined challenges to this world order.

For the leadership on both sides, these security requirements seem self-evident. They are supported by their respective citizens. Yet security is only part of the wider picture. The fundamental issue for the planet’s existence is whether the two giants can learn to combine inevitable strategic rivalry with a concept and practice of coexistence.

Russia – unlike the United States of America and China – lacks comparable market power, demographic clout and a diversified industrial base.

Spanning eleven time zones and enjoying few natural defensive demarcations, Russia has acted according to its own geographical and historical imperatives. Russia’s foreign policy reflects a mystical patriotism in the Third Rome imperial tradition, with a lingering perception of insecurity that stems essentially from the country’s long-standing vulnerability to invasion across the plains of Eastern Europe.

For centuries, its leaders – from Peter the Great to Stalin, who, incidentally, was not even Russian but felt himself to be so in the internationalist spirit that led to the creation of the USSR on 30 December 1922 – have sought to insulate Russia’s vast territory with a safety belt imposed around its sprawling borders. Today, Kissinger tells us, the same priority is manifesting itself once again in the attack on Ukraine – and we would add that few people understand this and many others pretend not to.

The mutual impact of these societies has been shaped by their strategic assessments, which stem from their history. The Ukrainian conflict is a case in point. After the dissolution of the Warsaw Pact, and the turning of its Member States (Bulgaria, Czechoslovakia, the German Democratic Republic, Poland, Romania, Hungary) into “Western” countries, the whole territory – from the security line established in central Europe up to Russia’s national border – opened up to a new strategic design. Stability had depended on the fact that the Warsaw Pact itself – especially after the Conference on Security and Cooperation in Europe held in Helsinki in 1975 – allayed Europe’s traditional fears of Russian domination (indeed, Soviet domination, at the time) and assuaged Russia’s traditional concerns about Western offensives, from the Swedes to Napoleon to Hitler. Hence, the strategic geography of Ukraine embodies these concerns, now re-emerging in Russia. If Ukraine were to join NATO, the security line between Russia and the West would lie just over 500 kilometres from Moscow, effectively eliminating the traditional buffer that saved Russia when Sweden, France and Germany tried to occupy it in previous centuries.

If the security border were to be established on the Western side of Ukraine, Russian forces would be within easy reach of Budapest and Warsaw. The February 2022 invasion of Ukraine is a flagrant violation of the international law mentioned above, and is thus largely a consequence of a failed or otherwise inadequately undertaken strategic dialogue. The experience of two nuclear entities confronting each other militarily – although not resorting to their destructive weapons – underlines the urgency of the fundamental problem, as Ukraine is only a tool of the West. Dario Fo once said that China was an invention of Albania to scare the Soviet Union. We can say that Ukraine is currently an invention of the West to scare Russia – and this is not a joke. An invention for which Ukrainians and Russians are paying with their blood.

Hence the triangular relationship between the United States of America, the People’s Republic of China and the Russian Federation will eventually resume, even if Russia is weakened by the demonstration of its military limitations in Ukraine, the widespread rejection of its conduct, and the scope and impact of the sanctions against it. But it will retain nuclear and cyber capabilities for doomsday scenarios.

In the US-Chinese relationship, instead, the conundrum is whether two different concepts of national greatness can learn to peacefully coexist side by side and how. In the case of Russia, the challenge is whether the country can reconcile its vision of itself with the self-determination and security of the countries in what it has long called its “near abroad” (mainly Central Asia and Eastern Europe), and do so as part of an international system rather than through domination.

It now seems possible that an order based on universal rules, however worthy in its conception, will be replaced in practice, for an indefinite period of time, by an at least partially decoupled world. Such a division encourages a search at its margins for spheres of influence. In such a case, how will countries that do not agree on global rules of conduct be able to operate within an agreed equilibrium design? Will the quest for domination overwhelm the analysis of coexistence?

In a world of increasingly formidable technology that can either elevate or dismantle human civilisation, there is no definitive solution to the competition between great powers, let alone a military one. An unbridled technological race, justified by the foreign policy ideology in which each side is convinced of the other’s malicious intent, risks creating a catastrophic cycle of mutual suspicion like the one that triggered World War I, but with incomparably greater consequences.

All sides are therefore now obliged to re-examine their first principles of international behaviour and relate them to the possibilities of coexistence. For the leaders of high-tech companies, there is a moral and strategic imperative to pursue – both within their own countries and with potential adversary countries – an ongoing discussion on the implications of technology and how its military applications could be limited.

The topic is too important to be neglected until crises arise. The arms control dialogues that helped tone down tensions and foster restraint during the nuclear age, as well as high-level research on the consequences of emerging technologies, could prompt reflection and promote habits of mutual strategic self-restraint.

An irony of the current world is that one of its glories – the revolutionary explosion of technology – has emerged so quickly, and with such optimism, that it has outpaced awareness of its dangers, and inadequate systematic efforts have been made to understand its capabilities.

Technologists develop amazing devices, but have had few opportunities to explore and evaluate their comparative implications within a historical framework. As I pointed out in a previous article, political leaders too often lack an adequate understanding of the strategic and philosophical implications of the machines and algorithms available to them. At the same time, the technological revolution is eroding human consciousness and our perception of the nature of reality. The last great transformation – the Enlightenment – replaced the age of faith with repeatable experiments and logical deductions. It is now being supplanted by dependence on algorithms, which work in the opposite direction, offering results in search of an explanation. Exploring these new frontiers will require considerable effort on the part of national leaders to reduce, and ideally bridge, the gaps between the worlds of technology, politics, history and philosophy.

The leaders of the current great powers need not immediately develop a detailed vision of how to solve the dilemmas described here. Kissinger warns, however, that they must be clear about what is to be avoided and what cannot be tolerated. The wise must anticipate challenges before they manifest themselves as crises. Lacking a moral and strategic vision, the current era is unbridled. To a great extent, our future still defies understanding – not so much of what will happen as of what has already happened.

Science & Technology

The non-limits of Artificial Intelligence and moral and survival issues

A man-made artificial brain would be truly autonomous if it were capable of emotional expressiveness and self-consciousness. Efforts to develop a strong Artificial Intelligence have also produced considerable progress in the field of neural engineering, as well as in our understanding of the human brain. But while some focus on the still distant dream of a thinking computer, others believe that the journey is more important than the destination. The priority is to use scientists’ opportunities and discoveries to develop new methods for the early detection of cancer and, it is hoped, a cure for Alzheimer’s disease: in short, to save lives.

If mankind is to survive and advance to higher levels, a new kind of thinking is essential: Albert Einstein said as much over seventy years ago, and the idea could not be more relevant today. Controlled intelligent machines could soon enable us to overcome our toughest challenges – not only curing diseases, but eradicating poverty and hunger, healing the planet and building a better future for all of us – so that such a future becomes a reality for our children. We have always wanted to change the world, but for the time being we should be content to understand it first.

For as many as 130,000 years, our capacity for reasoning has remained essentially unchanged. All the combined intelligence of neuroscientists, mathematical engineers and hackers would pale in comparison to even the most basic sentient machine. Once activated, such a machine would soon surpass the limits of biology, and in a short time its analytical power would exceed the collective intelligence of all human beings in the history of the world.

Just imagine such an entity with a full range of human emotions, including self-consciousness. Some scientists call it the singularity; others call it supernaturality. The path to building such a super-intelligence requires us to unlock the most fundamental secrets of the universe. What is the nature of consciousness? Can a machine with artificial intelligence have a soul? And if so, where will it reside? Some might ask whether we want to create a god through artificial intelligence: the question is fundamental, since wanting to create a god, or to replace one – as in the case of cloning – is what man has always done.

Many scientists, however, do not grasp how the problem is caught in the tension between the potential of the technology and its dangers. They tend only towards the goal of doing something never achieved before, of surpassing their colleagues, of being better: what used to be called “championism”, aimed solely at individual selfishness and detached from the true needs of the group, of the community, of mankind.

In recent years, the United States of America, Germany, the United Kingdom, the European Union, the G20, the OECD, the Institute of Electrical and Electronics Engineers, Google, Microsoft, the Partnership on AI (a non-profit coalition committed to the responsible use of Artificial Intelligence), and other institutions, governments, and companies have proposed ethical standards, principles, and framework constraints in various dimensions, as well as the establishment of a corresponding ethics or advisory committee on Artificial Intelligence. The development of Artificial Intelligence is inseparable from the consideration and supervision of ethics and moral considerations.

It is not yet known what capabilities the development of Artificial Intelligence will achieve in the future, or in what form it will coexist with humans. After all, today’s Artificial Intelligence is still at an early stage of development, but the general direction is clear, i.e. “reliable Artificial Intelligence”, “technology for the good of people”, etc. – in short, to steer Artificial Intelligence towards building a better life for human beings.

It should be said, however, that at this stage Artificial Intelligence is also being put to military uses, such as the Project Maven contract between the US Department of Defense (DoD) and Google. The idea was to use Artificial Intelligence to interpret video images so as to enable drones to attack specific targets more accurately. In the end, under pressure from its employees and the public, Google decided not to renew the Maven contract. The medium- and long-term task, instead, must be to steer the development of human-guided machines so that Artificial Intelligence technology serves humans rather than military purposes of mutual destruction.

The body structures of humans and machines are both unions of atoms and molecules, but the quantity and combination are very different. Biological information is transmitted mainly through chemical and electrical synapses, i.e. the interchange of electrical and chemical signals, something that could in future also be achieved in machines by technical means. Nevertheless, like all the matter and material structures around us, machines can – on this view – be guided and constructed by special invisible and intangible frequencies. Our bodies and external matter itself are only the goals to be guided and made manifest. As mentioned above, only human consciousness has so far proved impossible to recreate, since its source is generated, or guided and controlled, by some hidden form, which could even be quantum entanglement. In this regard, gravitational waves from a black hole are thought by some to alter people’s consciousness. Hawking radiation – the radiation predicted to be released just outside the event horizon of a black hole as a result of relativistic quantum effects – has so far been studied only indirectly, in laboratory analogues. If consciousness is related to quantum entanglement, then those same electrons could be related to those in the nuclei of our brain cells, and gravitational waves could project consciousness into another space-time. This is a further reason why, for sidereal travel in the vicinity of black holes, it is not recommended to send human crews, but rather machines that cannot suffer the loss of a consciousness it is better they should not actually have.

Consciousness has completely different definitions in philosophy, psychology and biology. It is generally believed to be people’s ability to recognise the environment and themselves. At the current level of technology, we can only surmise what controls consciousness. Some studies have shown that the claustrum is the switch of brain consciousness, but this is currently only at the stage of experimental speculation. The claustrum is a thin layer of grey matter, a bilateral collection of neurons and supporting glial cells that connects to cortical regions (e.g. the pre-frontal cortex), or to subcortical regions (e.g. the thalamus) of the brain. It is located between the insula medially and the putamen laterally, separated by the extreme and external capsules, respectively.

Consciousness has also been assumed to be an effect of the magnetic field of the human mind. In quantum mechanics, scientists describe pure magnetic fields (and pure electric fields) as effects mediated by virtual photons – photons whose reality cannot be directly observed.

The conclusions of a study conducted at the University of California, Berkeley, suggest that human DNA acts as a channel for the reception of energy, which enables human beings to function normally. Energy reception here refers mainly to the acquisition and transfer of photons, which fill the water molecules around the DNA with energy and strengthen the helical structure. The human body is composed of organs, and organs are made up of hundreds of millions of cells.

Each cell is thought to have a certain magnetic field and human organs composed of cells also have an additional magnetic field. The magnetic field of the mind interferes with the magnetic field of each cell, thus affecting and conditioning the development of bodily functions and the behaviour of the human being.

Today, it is more plausible to say that consciousness is the network of neuronal connections formed as synapses grow in childhood. It gradually begins to form and acquires the capacity for immediate memory, which is activated by the bodily functions themselves. From birth, each of us is destined to evolve, and hence to see the “real world” only as the functional characteristics of our body allow us to perceive it, which makes us accept the reality in front of us as a sum of habits (established experience) and unforeseen events to be resolved (intelligence). From childhood to adulthood, from birth to death, human thoughts, choices, basic senses and personality are all limited by the inherited structures and ways of thinking that exist in the brain. All of this is directed by what we call consciousness. All decisions are the result of “self-awareness”, a further synonym for consciousness.

Everything around us is a function of a huge cosmic Brownian motion, which appears regular but is actually irregular. Brownian motion is a natural phenomenon whose mathematical representation describes the time course of a very broad class of random phenomena – phenomena that have a rationally determined outcome but that we mistakenly call “coincidence”. When analysed, such a coincidence is actually just a progressive series of everyday interactions that lead to a certain climax. Let me give a tragic example.

A lady leaves her house in Paris and stops to feed her cat: it takes her 20 seconds. She gets into her car, crosses the city and stops at a crossroads. The car behind her skids and swerves, its headlights blind the driver coming in the other direction, and… bang… Princess Diana crashes in a tunnel, and Elton John sells millions of pounds’ worth of records and other related profitable products. The simplest things make a huge difference, and coincidences do not exist except in the limited view of a mental perception accustomed to “rational” habit.
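
To make the Brownian-motion analogy above concrete, here is a minimal one-dimensional random-walk simulation in Python. It is only an illustrative sketch of the standard mathematical construction (independent Gaussian increments); the step count and seed are arbitrary choices, not anything prescribed by the article.

```python
import random

def brownian_path(steps, dt=1.0, seed=42):
    """Cumulative sum of independent Gaussian increments with variance dt."""
    rng = random.Random(seed)
    position, path = 0.0, [0.0]
    for _ in range(steps):
        position += rng.gauss(0.0, dt ** 0.5)  # each step is individually "irregular"
        path.append(position)
    return path

path = brownian_path(1000)
# Individually random steps still produce a statistically regular whole:
# the typical spread after n steps grows like sqrt(n).
print(f"final position after 1000 steps: {path[-1]:.2f}")
```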

Bats use ultrasonic waves to map the world; snakes use infrared to find their prey; and humpback whales can communicate across hundreds of kilometres. The world in their eyes is completely different from that of humans. What we see, hear and smell is only what we think we perceive, since our senses register only a fraction of what is happening around us. This means we cannot prove that the world as seen by some animals is not the real one.

If humans acquire the ability to control the formation and development of consciousness and to inject such a structure of consciousness into a humanoid machine driven by the same neuronal functions, a situation could arise in which neither the machine nor the human can tell whether the other is a machine or a human: this is a question of ontology.

In terms of elemental composition, biological and physiological characteristics, methods of transmitting information, ideology and other traits, there is no absolute, clear-cut difference between them – so how can there be an ethics for humans as seen by a machine that has consciousness?

It is just that – no matter how hard humans try – they may not be able to discover or control the generation of consciousness in a machine, which may involve hidden existences such as dark matter (a hypothetical component of matter that, unlike known matter, would not emit electromagnetic radiation and would currently be detectable only indirectly through its gravitational effects) and dark energy (a form of energy that cannot be directly detected and is spread homogeneously throughout space) – existences that cannot be identified.

Apart from the sense of freedom that human beings regard as uniquely theirs, which of our components does not have characteristics corresponding to the periodic table of elements? Our consciousness may likewise be the result of the seemingly natural but irregular movements of various hormones, cells and synapses in the body, guided by hidden substances. In turn, the wisdom and skill of Artificial Intelligence may one day surpass the limits of human beings; but even so, it is unlikely that the fittest among human beings will survive on the basis of a Darwinian approach. In ancient times, for example, the savage phase was prone to cannibalism owing to problems of survival and, above all, of intelligence related to brain development.

In modern society, having solved the problem of food and clothing, humans have started to pay attention to the earth, the environment, ecology and respect for animals. Animals instinctively understand that, in order to satisfy their needs, they have to live in harmony with their own kind and the environment. If human beings really are the basis of the wisdom of the entire planet, will highly intelligent machines also take care of us as small animals and pets, on a par with our dog or the aforementioned cat in Paris?

It is therefore our duty to be ethically concerned about issues arising from Artificial Intelligence: it is the justified fear of being overwhelmed by those we now think we control.

Science & Technology

«Chip war» against China threatens to undermine America

The Biden Administration has been expanding sanctions against China’s electronics industry. Silicon Valley companies, in turn, are increasingly being viewed as a major instrument of big politics. However, the “geopoliticization” of the IT industry by Washington threatens to further undermine the international position of the United States in this significant sector of the economy.

China’s progress in IT technologies has been a point of concern in Washington for years. Unlike before, when the talk was mostly about the “threat to the economic positions” of the USA and the West as a whole, the signals now centre on “security issues”. A number of new restrictions introduced in early October were designed, as Western observers say, to slow down the development of the Chinese IT industry to an extent that would guarantee the United States supremacy in applying cutting-edge IT technologies for military purposes. Among other measures, Biden has substantially limited the participation of US residents in developing technologies for the Chinese IT sector.

As Bloomberg reported a few days ago, “the United States intends to restrict China’s access to AI and quantum computing technologies”. The White House has been elaborating administrative measures with a view to establishing tough limitations on, and control of, Western investments in a number of critically important technology-related sectors of China. Quantum computers and AI are among the top priorities. If implemented, the new restrictions will reinforce those adopted earlier.

It looks as though the Biden Administration has been trying to revive a practice introduced by Trump. In 2018, the Trump Administration imposed a wide range of sanctions against China’s IT giant Huawei, which was accused, without any proof, of assisting in “espionage schemes and secret surveillance projects conducted by the Chinese authorities”. It slapped a complete ban on the supplies of American parts that were critical for Huawei products to remain competitive on the global market. At present, Western sources are expressing satisfaction, though not over the cessation of “secret espionage” but over the fact that Huawei’s export revenues have decreased considerably, along with its range of products.

Meanwhile, according to The Economist, Trump’s «success» had a downside. The Republican Administration overtly ignored the interests of allies and partners. As a result, Western investors began to invest in businesses and component supply chains that were exempt from the control of American supervisory bodies. Japanese companies offered a full range of electronic components as products free from the technological restrictions imposed by the United States. Some leading US companies, which supplied billions of dollars’ worth of products to the Chinese market annually, started to open branches and representative offices in foreign jurisdictions, thereby bypassing Washington’s restrictions.

By early 2022, having acknowledged the limited reach of the existing sanctions, the Biden Administration introduced a new variant of export control, which envisaged tough restrictions on the export to China of components whose characteristics exceed a certain technological level. At the beginning of October, the ban was expanded to include chips produced at process nodes below 14 nanometers or, in some cases, below 16 nanometers. Such harsh restrictions, along with the unilateral manner in which Washington imposed them, continue to trigger discontent among many nominal allies of the United States.
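
The threshold logic described above can be sketched in a few lines of Python. This is a toy illustration, not an actual compliance tool: only the 14 nm and 16 nm figures come from the article, and everything else (function name, parameters, the notion of a “strict case”) is an assumption made for the example.

```python
def export_restricted(process_node_nm: float,
                      standard_threshold_nm: float = 14.0,
                      strict_threshold_nm: float = 16.0,
                      strict_case: bool = False) -> bool:
    """Return True if a chip at the given process node would fall under the ban
    as described in the article (smaller node = more advanced = restricted)."""
    threshold = strict_threshold_nm if strict_case else standard_threshold_nm
    return process_node_nm < threshold

print(export_restricted(7.0))                      # True: well below the 14 nm line
print(export_restricted(28.0))                     # False: a mature node outside the restriction
print(export_restricted(15.0, strict_case=True))   # True under the stricter 16 nm case
```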

Biden has also been taking steps to encourage the return of microelectronics production facilities to the United States. In spring, the White House presented a bill on chips and science, the CHIPS and Science Act, which provides for the allocation of at least 52 billion dollars in subsidies for the construction of new “factories” to produce state-of-the-art processors on the territory of the United States. Also in spring, Biden spoke at a ceremony on the site of a future plant to be built by Intel, a top US processor manufacturer. Plans to build a new “factory” in the United States have also been announced by Taiwan’s TSMC, a major – and the most technologically advanced – microprocessor producer in the world.

However, America is facing a lot of “opposition”. According to Foreign Affairs, the CHIPS and Science Act, which came into force at the end of August, is not enough to restore the United States’ leading position in microelectronics. An influx of financial resources will not settle all the problems. What is needed is a breakthrough in managerial and technological culture and a clear understanding on the part of Washington politicians of all the subtleties and problems confronting the contemporary microelectronics industry.

By now, most Silicon Valley companies have lost the spirit of hardware («iron») innovation. A wide variety of new “high-tech” companies founded in the USA in the 2000s do not make products that can be touched with one’s hands. The lion’s share of their profit comes from advertising in apps or search systems. The hype over trendy software novelties that spread across America enabled competitors from Asia to surge ahead in design, particularly in the production of sophisticated microchips. In addition, globalization in its current shape, aimed at outsourcing the production of end products to wherever it is most cost-effective, has played a cruel joke on America. «De-industrialization» engulfed not only a large number of American industries – it also spread to the thinking patterns of their managers and engineers.

At the same time, even American experts admit that the harder they try to “contain” China, the harder it becomes for Washington to persuade its allies in Europe and Asia to follow suit. Active assistance from other countries in restricting the export of indispensable parts, machines and technologies to China is vital: without it, the United States risks inflicting irreparable damage, first of all, on its own electronics sector. Investors will surely opt for areas where they can avoid draconian US restrictions and continue to develop mutually profitable business ties with China.

America is “stuck” having to choose between a less stringent approach to restrictions on the exchange of technologies, which could ultimately yield a greater effect, and attempts to “suppress” advanced Chinese microelectronics in a short period of time, at the risk of inflicting substantial damage on its own IT potential.

Firstly, many American producers of semiconductors depend heavily on supplies to the Chinese market, one of the world’s biggest. As The Financial Times reports, supplies to China make up one third of the order portfolio of Applied Materials, a California-based company that produces machinery for the processing of silicon wafers; 27 percent at Intel; and 31 percent at Lam Research, one of the leading suppliers of processor manufacturing equipment.

Secondly, the slowdown of the American and global economies may lead to a decrease in sales in the microelectronics sector, which is bound to have a negative impact on the prospects for new investments. The IT industry is thus facing a period of slowdown, if not recession. According to The Economist, some 30 major American microchip producers have, since July, signalled an 11-billion-dollar reduction in their cumulative revenue forecasts for the third quarter. The combined capitalization of US-based chip producers has dropped by more than 1.5 trillion dollars this year.

Political pressure has been building up as well, as Washington requires the microelectronics industry to reduce its dependence on China as quickly as possible. Thus, the deteriorating market situation is being worsened further by yet more administrative and political restrictions.

Meanwhile, the leaders of US companies fear that Beijing may take countermeasures, introducing yet more limitations on the access of American producers to its vast domestic market. As reported by The Financial Times, Europe is concerned that a further expansion of sanction restrictions by the United States will inflict ever more damage on companies and consumers in the Old World. Chinese producers may find themselves without much-needed parts and components, and a decrease in supplies will also affect European aerospace enterprises, car manufacturers, producers of medical equipment and the cloud computing sector. Producers of electronic parts from Taiwan, including the key player TSMC, and their counterparts from South Korea are likely to run into difficulties supplying their businesses in China, which account for a large share of their total output. Japanese companies have been holding heated debates on the medium- and long-term consequences of the restrictions on the use of American components when dealing with Chinese counteragents.

Finally, the severance of scientific ties with China will undermine the innovative potential of American designers. Chinese researchers already demonstrate a much higher citation index in a whole range of research and technology areas than their American colleagues. According to Beijing Review, the Trump presidency launched the so-called “China Initiative” – a set of administrative measures aimed at tracking down potential spies among scientists and engineers of Chinese descent. As anti-Chinese sentiment gains momentum, plunging America into an atmosphere of hostility and suspicion towards Chinese scientists and experts, thousands of researchers and engineers, including from the microelectronics industry, have left or are planning to leave the USA and move to China, according to the Asian American Scholar Forum (AASF).

Washington’s “efforts” could sadly result in a further decrease in the share of American producers on the global market and an overall drop in the global influence of the US technological sector. All this is happening amid a decrease in demand caused by an oncoming recession. In the past, the United States repeatedly and successfully forced the destructive dilemma of “security or development” on its opponents. Now, America itself risks being trapped by its own hopelessly outdated logic of the past.

From our partner International Affairs
