
Science & Technology

The Ethical and Legal Issues of Artificial Intelligence


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous: the tasks they perform are growing more complex, their potential impact on the world is greater, and humans are less and less able to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which can learn from their own experience and perform actions beyond the scope of those intended by their creators. All of this gives rise to a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley hurtling down a railway line. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?


There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise, for example, if a self-driving vehicle is travelling along a road in a situation where an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. A special website has been created by the Massachusetts Institute of Technology that deals with this very issue: users can test various scenarios on themselves and decide which courses of action would be the most worthwhile.

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? Companies and regulators have already begun to address this problem. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens according to how law-abiding they are, how useful they are to society, and so on. Those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases would make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, thus complicating the task of determining responsibility. These characteristics pose two related problems: unpredictability, and the ability to act independently of anyone who could be held responsible [3].

There are numerous options for regulation, including regulation based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that govern a special kind of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules on property are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility therefore comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the person or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals to apply the law on animals by analogy have been made [4], although such analogies are of limited use. First, applying legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect not to cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since these rules are more stringent [5]. The question arises here, however, of how to draw such a distinction given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies because of the unexpected liability risks they pose for creators and inventors.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be flexible enough to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that show no explicit signs of agency are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, in both developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) has free will or intent, or whether it can act deliberately or knowingly, the law can recognize it as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy with legal entities is problematic, however, because the concept of the legal entity exists in order to administer justice speedily and effectively. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual who performed the illegal action on behalf of the legal entity can be identified [10]. The actions of artificial intelligence-based systems will not necessarily be traceable to the actions of an individual.

Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure circumstances or through the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations of a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The strategy targets four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to concentrate France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where these technologies are important for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are closed and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already a legal precedent for this. A defendant in the United States received a lengthy prison sentence on the basis of information obtained from an algorithm that predicted the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore could not be examined. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, etc.
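To make the auditability point concrete, here is a minimal sketch of one basic test that transparency enables: comparing a score’s false positive rates across groups. All data, groups and labels here are invented for illustration and do not correspond to any real scoring system.

```python
# Hypothetical audit of an opaque risk score. Each record:
# (group, predicted_high_risk, actually_reoffended) -- all invented.
from collections import defaultdict

records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),  ("B", False, False),
]

# False positive rate per group: flagged as high risk among those
# who did not in fact reoffend.
fp = defaultdict(int)
negatives = defaultdict(int)
for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            fp[group] += 1

for group in sorted(negatives):
    rate = fp[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
# A large gap between groups is evidence the score treats similar
# people differently -- exactly what transparency requirements are
# meant to surface, and what a trade-secret defence prevents.
```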

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Therefore, our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically a goat) was exiled to Siberia for its actions [12].

First published at our partner RIAC

  1. See, for example: Edmonds, D. Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, 2013.
  2. Asaro, P. “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P. and Holland, O. (eds.) The Mechanical Mind in History. Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
  7. See, for example: Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo, U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.


Science & Technology

Future Goals in the AI Race: Explainable AI and Transfer Learning


Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to “weak AI,” which has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and that AI should operate according to logical rules and sign and symbol systems, interpreted in terms of the conscious human cognitive process. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily drawing closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic and probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. On the other hand, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.
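The shift from Boolean algebra to fuzzy logic mentioned above is easy to state concretely. A minimal sketch, using the common Zadeh operators (one standard choice among several):

```python
# Boolean logic admits only 0 or 1; fuzzy logic grades truth in [0, 1].
# A common (Zadeh) choice: AND = min, OR = max, NOT x = 1 - x.
def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

# "The road is wet" to degree 0.8, "it is raining" to degree 0.3:
wet, raining = 0.8, 0.3
print(f_and(wet, raining))   # 0.3 -> only weakly true that both hold
print(f_or(wet, raining))    # 0.8
print(f_not(raining))        # 0.7
```

With truth values restricted to {0, 1}, these operators reduce exactly to classical Boolean AND/OR/NOT, which is the sense in which fuzzy logic generalizes rather than replaces the older inference systems.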

Are There “Holes” in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. The combination of these factors made it possible to train and configure neural network algorithms, producing a quantitative leap as well as a cost-effective solution to a broad range of applied problems relating to recognition, classification and prediction. The biggest successes here have come from systems based on “deep learning” networks that build on the idea of the “perceptron”, suggested 60 years ago by Frank Rosenblatt. However, achievements in the use of neural networks have also uncovered a range of problems that cannot be solved using existing neural network methods.
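Rosenblatt’s perceptron is small enough to state in full, which makes the contrast with today’s opaque deep networks easier to see. A minimal sketch of the classic learning rule on a toy problem (logical AND, which a single unit can learn):

```python
# Rosenblatt's perceptron learning rule on the logical AND problem:
# w <- w + lr * (target - prediction) * x, and likewise for the bias.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]          # AND is linearly separable

w = [0.0, 0.0]
b = 0.0
lr = 0.1

for epoch in range(20):
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        w[0] += lr * (t - y) * x1
        w[1] += lr * (t - y) * x2
        b += lr * (t - y)

for x1, x2 in inputs:
    print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
# Deep learning stacks many such units into layers and replaces this
# simple rule with gradient descent on a differentiable loss.
```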

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.
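The black-box problem has spawned a family of after-the-fact probes. One of the simplest is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. A minimal sketch, with a stand-in “black box” in place of a real trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in black box: in practice this would be a trained network
# whose internals we cannot read.
def black_box(X):
    return (2.0 * X[:, 0] - 0.1 * X[:, 1] > 0).astype(int)

X = rng.normal(size=(1000, 2))
y = black_box(X)                 # labels the box itself produces

def accuracy(model, X, y):
    return float((model(X) == y).mean())

base = accuracy(black_box, X, y)            # 1.0 by construction
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # destroy feature j
    drop = base - accuracy(black_box, Xp, y)
    print(f"feature {j}: accuracy drop {drop:.3f}")
# Feature 0 dominates the decision, so shuffling it hurts far more.
# This attributes influence, but still does not explain *why* a
# particular decision was made -- the core limitation noted above.
```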

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
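The canonical textbook illustration of such attacks is the fast gradient sign method (FGSM). A minimal sketch on a hand-rolled logistic “classifier” with toy weights and inputs, so the gradient can be written out by hand; real attacks target deep networks in exactly the same way:

```python
import numpy as np

# Toy "classifier": logistic regression on a flattened input x.
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.2

def prob_cat(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, 0.1, 0.9, 0.4])      # pretend pixel values
print(f"P(cat) before attack: {prob_cat(x):.3f}")

# FGSM: nudge each input in the direction that most increases the
# loss. For logistic loss with label y=1, the input gradient is
# (p - 1) * w, so its sign here is simply -sign(w).
eps = 0.2
x_adv = x + eps * np.sign((prob_cat(x) - 1.0) * w)
print(f"P(cat) after attack:  {prob_cat(x_adv):.3f}")
# A perturbation of at most 0.2 per input dimension flips the
# decision -- the fragility the paragraph describes, in miniature.
```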

Fourth, a mismatch between the information capacity and parameters of a neural network and the picture of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. A system first trained to identify situations in one set of contexts and then fine-tuned to recognize them in a new set may lose the ability to recognize them in the old one. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
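Catastrophic forgetting can be demonstrated in a few lines. A toy sketch under stated assumptions (synthetic “tasks” and a tiny logistic model, purely illustrative): train on task A, naively fine-tune on a conflicting task B, then measure what happened to performance on A:

```python
import numpy as np

rng = np.random.default_rng(1)

def train(w, X, y, lr=0.5, epochs=200):
    # Plain logistic-regression gradient descent.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def acc(w, X, y):
    return float(((X @ w > 0).astype(int) == y).mean())

# Task A: label depends on feature 0; task B: on feature 1, reversed.
XA = rng.normal(size=(500, 2)); yA = (XA[:, 0] > 0).astype(int)
XB = rng.normal(size=(500, 2)); yB = (XB[:, 1] < 0).astype(int)

w = np.zeros(2)
w = train(w, XA, yA)
print(f"accuracy on A after training on A: {acc(w, XA, yA):.2f}")

w = train(w, XB, yB)        # naive fine-tuning, no rehearsal
print(f"accuracy on A after training on B: {acc(w, XA, yA):.2f}")
# Without rehearsal or regularization, the parameters that served
# task A are simply overwritten while fitting task B.
```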

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.

Explainable AI allows human beings (the users of an AI system) to understand the reasons why the system makes its decisions and to approve them if they are correct, or to rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
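One practical route to such a human-verifiable form is a global surrogate: fit a small, readable model to imitate the black box, then inspect the readable model. A sketch using scikit-learn (assumed installed), again with a stand-in black box:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in black box; in practice, a trained neural network.
def black_box(X):
    return ((X[:, 0] > 0.5) & (X[:, 1] < 0.3)).astype(int)

X = rng.uniform(size=(2000, 3))
y_box = black_box(X)                  # query the box, not the truth

# Surrogate: a shallow tree trained to mimic the box's answers.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y_box)
print("fidelity:", (surrogate.predict(X) == y_box).mean())
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
# The printed rules are checkable by a human, at the price that the
# surrogate is only an approximation of the original model.
```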

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is the prerequisite for making AI applications that will not learn by trial and error or through the use of a “training set,” but can be initialized with a base of expert-derived knowledge and rules – when the cost of an error is too high or when the training sample is too small.
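The most common practical form of transfer learning is feature reuse: keep the representation learned on a large source task and retrain only a small head on the target task. A minimal numpy sketch, in which fixed random features stand in for a pretrained backbone (an assumption made purely to keep the example self-contained):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these hidden-layer weights were learned on a large source
# task; here, scaled random features stand in for them.
W_frozen = 0.3 * rng.normal(size=(10, 32))

def features(X):
    return np.tanh(X @ W_frozen)          # frozen "backbone"

def train_head(X, y, lr=0.5, epochs=300):
    # Only the small output layer is trained on the target task.
    H = features(X)
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(H @ w)))
        w -= lr * H.T @ (p - y) / len(y)
    return w

# Tiny target task: 40 labelled examples -- far too few to train a
# whole network, but enough to fit a head on reused features.
X = rng.normal(size=(40, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
w_head = train_head(X, y)
p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
print("train accuracy:", ((p > 0.5).astype(int) == y).mean())
```

The design choice mirrors the paragraph’s point: what transfers is the abstracted representation, while the cheap, data-hungry part of learning is reduced to fitting a few head weights.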

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, which is a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, and source and target data are represented not by values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, believes that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
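Dedicated probabilistic programming frameworks exist, but the core idea fits in a few lines of plain Python: state a model (a prior plus a likelihood), then let a generic inference routine turn data into a distribution over the unknowns. A sketch with a hand-rolled Metropolis sampler standing in for what such frameworks automate:

```python
import random
import math

random.seed(0)

data = [1, 1, 0, 1, 1, 1, 0, 1]          # observed coin flips

def log_posterior(theta):
    # Uniform prior on (0, 1); Bernoulli likelihood for the flips.
    if not 0.0 < theta < 1.0:
        return float("-inf")
    heads = sum(data)
    tails = len(data) - heads
    return heads * math.log(theta) + tails * math.log(1.0 - theta)

# Metropolis sampling: the "output" of the program is a distribution
# over theta, not a single point value.
samples, theta = [], 0.5
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

burned = samples[5000:]                   # discard burn-in
print("posterior mean bias:", sum(burned) / len(burned))
# A probabilistic programming framework lets you state only the model
# and automates this inference step.
```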

Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
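A deliberately tiny sketch of the System 1 / System 2 division of labour described above: a statistical scorer proposes labels, and an explicit, human-auditable rule layer can veto proposals that violate background knowledge. All scores and rules here are invented for illustration:

```python
# "System 1": fast statistical proposals (canned here; in practice
# the output of a trained network).
def neural_propose(image_id):
    # The generalization failure from earlier: an elephant-like
    # splash pattern in the surf gets the higher raw score.
    return {"elephant": 0.55, "whale": 0.45}

# "System 2": slow, explicit, human-auditable background knowledge.
RULES = {
    "elephant": lambda ctx: ctx["terrain"] in ("savanna", "forest"),
    "whale": lambda ctx: ctx["terrain"] == "open water",
}

def classify(image_id, context):
    scores = neural_propose(image_id)
    # Walk proposals in confidence order; the rule layer may veto.
    for label, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if RULES[label](context):
            return label, score
    return "unknown", 0.0

print(classify("img_042", {"terrain": "open water"}))  # ('whale', 0.45)
```

The point of the sketch is the architecture, not the toy rules: the symbolic layer both corrects the statistical layer’s mistake and produces a reason that a human can check.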

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.

From our partner RIAC


Science & Technology

How Can We as Strategists Compete with Sentient Artificial Intelligence?


The universe is made up of humans, stars, galaxies, black holes and other objects, all linked and connected with each other. Everything in the universe has its own mechanisms and complexities. Humans are very complex creatures, and man-made objects can be more complex and difficult to understand still. With the passage of time, human beings have evolved and become more technologically advanced. Human inventions have reached a level of advancement that sets up a competition between machines and humans themselves. Humans are the most intelligent mortals on earth, but they are now being challenged by the very intelligence (artificial intelligence) that was invented as a helping hand to increase human efficiency. It is important to ask: was human intelligence not enough to survive in a fast-growing technological world? Or has man-made intelligence reached such a peak that humans now find themselves in competition with machines, their own intelligence challenged by the artificial kind? And if there is a competition, how can strategists compete with artificial intelligence? To answer these questions, we first need to know what artificial intelligence actually is.

The term “artificial intelligence” was introduced by John McCarthy in 1955; he characterized it in 1956 at the Dartmouth Conference, the first conference on artificial intelligence, with the claim that every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it: an attempt would be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. There are seven main features of artificial intelligence, as follows:

“Simulating higher functions of the brain

Programming a computer to use general language

Arrangement of hypothetical neurons in a manner so that they can form concepts

Way to determine and measure problem complexity

Self-improvement

Abstraction: it is defined as the quality of dealing with ideas, not with events

Creativity and randomness”

Another definition is given by Elaine Rich, who stated that artificial intelligence is about making computers do things that, at present, humans do better, and that every computer is, in a sense, an artificial intelligence framework. Jack Copeland stated that the critical elements of artificial intelligence are: generalization learning, which enables the learner to perform in situations not previously experienced; reasoning, which means drawing inferences appropriately; problem solving, which means that, given data, the system can arrive at conclusions; and perception, which means analysing a scanned environment and exploring the features of and relationships between objects, with self-driving cars as an example.

Artificial intelligence is now very common in developed nations, while developing nations use it according to their resources. The question is how artificial intelligence is actually being utilized. Its use is elaborated below with the help of examples from related fields for better understanding.

The world is becoming more advanced and technologies are improving as well. In this situation, states have become conscious of their security, and they are incorporating AI approaches into their defence systems; some are already using artificially intelligent technologies. On 11 May 2017, Dan Coats, the Director of US National Intelligence, delivered testimony to the US Congress on his annual Worldwide Threat Assessment. In the publicly released document, he said that AI is advancing computational capabilities that benefit the economy, yet those advances also enable new military capabilities for US adversaries. In the meantime, the US Department of Defense (DOD) is working on such systems. Project Maven, for example, otherwise known as the Algorithmic Warfare Cross-Functional Team (AWCFT), is intended to accelerate the integration of big data, machine learning and AI into US military capabilities. While the initial focus of AWCFT is on computer vision algorithms for object detection and classification, it will bring together all existing algorithm-based technology initiatives related to US defence intelligence. Command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) are reaching new heights of efficiency that enable data collection and processing at unprecedented scale and speed. When the pattern recognition algorithms being developed in China, Russia, the UK, the US and elsewhere are coupled with precision weapons systems, they will further increase the tactical advantage of unmanned aerial vehicles (UAVs) and other remotely operated platforms. China’s defence sector has made breakthroughs in UAV “swarming” technology, including a demonstration of 1,000 EHang UAVs flying in formation at the Guangzhou air show in February 2017. Potential scenarios could include competing UAV swarms trying to disrupt each other’s C4ISR networks while simultaneously engaging dynamic targets.

Humans are the most intelligent creatures, and they created artificial intelligence technology. The technology we introduced can appear more intelligent than us and works faster than humans. So the big question is whether humans can compete with artificial intelligence in the near future. Nowadays it seems that AI is replacing humans in every field of life, so what will the situation be in a decade or two? An alarming competition has started between humans and AI. Elon Musk of Tesla has called AI a demon, and the physicist Stephen Hawking likewise stated that artificial intelligence could prove a bad omen for humanity. The signs of all this are clear, and we can already see humans being replaced. We are, in some respects, losing the competition. But it is also clear that a creator can be a destroyer as well. So, as strategists, we must have counter-strategies and backup plans to win this competition. The edge humans have over AI is the ability to think; we build this into AI-integrated technologies, so we must set the limits for it. Otherwise, this hazard could become a great threat in the future, and humanity could possibly face extinction.


Science & Technology

What is more disruptive with the AI: Its dark potentials or our (anti-Intellectual) Ignorance?

Anis H. Bajrektarevic


Throughout most of human evolution, both progress and its horizontal transmission were extremely slow, occasional and tedious processes. Well into the classic period of Alexander the Macedonian and his glorious Alexandrian library, the speed of our knowledge transfers – however moderate, analogue and conservative – still always surpassed the snail-like cycles of our breakthroughs.

When our sporadic breakthroughs finally became faster than the velocity of their infrequent transmission, that marked a point of departure. Simply, our civilizations started to differentiate significantly from each other in their respective techno-agrarian, politico-military, ethno-religious, ideological and economic setups. On the eve of the grand discoveries, that very event transformed wars and famine from low-impact and local into bigger and cross-continental affairs.

Cycles of technological breakthroughs, patents and discoveries that outpaced their own transfer occurred primarily on the Old Continent. That occurrence, with all its reorganizational effects, radically reconfigured societies. It finally marked the birth of mighty European empires, their (liberal) schools and, overall, the lasting triumph of Western civilization.

Act

For the past few centuries, we lived fear but dreamt hope – all for the sake of modern times. From WWI to www. Is this modernity of internet age, with all the suddenly reviled breakthroughs and their instant transmission, now harboring us in a bay of fairness, harmony and overall reconciliation? Was and will our history ever be on holiday? Thus, has our world ever been more than an idea? Shall we stop short at the Kantian word – a moral definition of imagined future, or continue to the Hobbesian realities and grasp for an objective, geopolitical definition of our common tomorrow?

The Agrarian age inevitably brought up the question of economic redistribution. Industrial age culminated on the question of political participation. The AI (Quantum physics, Nanorobotics and Bioinformatics) brings a new, yet underreported challenge: Human (physical and mental) powers might – far and wide, and rather soon – become obsolete. If/when so, a question of human irrelevance is next to ask.

Why is the AI like no technology ever before? Why re-visiting and re-thinking spirituality matters …

If you believe that the above is yet another philosophical melodrama, an anemically played alarmism, mind this:

We will soon have to redefine what we consider as a life itself.

Less than a month ago (January 2020), successful trials were completed. The border between organic and inorganic, intrinsic and artificial, is down forever. The AI now has it all in one: quantum physics (along with quantum computing), nanorobotics, bioinformatics and organic tissue tailoring. The synthesis of all that is usually referred to as xenobots (sorts of living robots) – biodegradable symbiotic nanorobots that rely exclusively on evolutionary (self-navigating) algorithms.

React

Although life is to be lived forward (with no backward looking), human retrospection is the biggest reservoir of insights. Of what makes us human.

Hence, what does our history of technology in relation to human development tell us so far?

Elaborating on Fukuyama’s well-known argument about “defensive modernization”, it is evident that throughout the entire human history, the technological drive was aimed at satisfying the security (and control) objective. It was rarely (if at all) driven by a desire to (gain knowledge outside of convention, in order to) ease human existence, or to enhance human emancipation and the liberation of societies at large. Thus, unless operationalized by the system, both intellectualism (human autonomy, mastery and purpose) and technological breakthroughs were traditionally felt and perceived as a threat. As a problem, not a solution.

Ok. But what has brought us (under) the AI today?

It was our acceptance. Of course, manufactured.

All cyber-social networks and related search engines are far away from what they are portrayed to be: a decentralized but unified intelligence, attracted by gravity of quality rather than navigated by force of a specific locality. (These networks were not introduced to promote and emancipate other cultures but to maintain and further strengthen supremacy of the dominant one.)

In no way do they correspond with the neuroplasticity of the physics of our consciousness. They only offer an answer to our anxieties – in which the fear of free time is the largest, since free time coupled with silence is our gate to creativity and self-reflection. In fact, the cyber-tools of these data-sponges primarily serve the purposes of predictability, efficiency, calculability and control, and only then everything else – such as being user-friendly and attractive as a mass service.

To observe the new corrosive dynamics of social phenomenology between manipulative fetishization (probability) and self-trivialization (possibility), the cyber-social platforms – these dustbins of human empathy in the muddy suburbs of consciousness – are particularly interesting.

This is how technologies that eliminate human presence have been introduced to us – and accepted by us.

Packed

How did we reflect – in our past – on new social dynamics created by the deployment of new technologies?

The Aegean theater of Antique Greece was the place of astonishing revelations and intellectual excellence – a remarkable density and proximity not surpassed up to our age. All we know about science, philosophy, sports, arts, culture and entertainment, stars and earth was postulated, explored and examined then and there. Simply, it was a time and place of the triumph of human consciousness, pure reasoning and sparkling thought. However, neither Euclid, Anaximander, Heraclitus, Hippocrates (both of Chios and of Cos), Socrates, Archimedes, Ptolemy, Democritus, Plato, Pythagoras, Diogenes, Aristotle, Empedocles, Conon, Eratosthenes nor any of dozens of other brilliant ancient Greek minds ever referred by a word, by a single sentence, to something which was their everyday life, something they saw literally on every corner along their entire lives. It was an immoral, unjust, notoriously brutal and oppressive slavery system that powered the Antique state. (Slaves were not even attributed as humans, but rather as “phonic tools” – tools able to speak.) This myopia, this absence of critical reference to the obvious and omnipresent, is a historic message – highly disturbing, self-telling and quite a warning.

So, finally

Why is the AI like no technology ever before?

Ask google, you see that I am busy messaging right now!
