
Science & Technology

Russia’s Huawei 5G Conundrum

The action being taken by various governments to limit the involvement of China’s Huawei in the provision of equipment for 5G has brought into sharp focus an issue that has been around for some time but is now becoming more acute for the national security of individual countries: how to ensure that purchased Information and Communication Technology (ICT) hardware and software does not contain features, either at the time of purchase or added later, that could be maliciously exploited on a large scale, whether for espionage or for sabotage of crucial national infrastructure.

Australia has totally banned the use of Huawei equipment in its future 5G telecommunications network, while the US has banned its use by official organizations. The US, UK and a number of other developed countries may eventually follow the Australian lead.

Recent focus has been very much on 5G because of the role that it will play in supporting the use of Artificial Intelligence (AI), the Internet of Things (IoT), cloud computing and so on, and because of the outsized role that Chinese companies (e.g. Huawei and ZTE) play in supplying much of the needed infrastructure around the world.

The international developments seem almost certain to put Russia in a difficult position. Is it anti-Huawei, pro-Huawei, or somewhere in the middle? And if it is in the middle, how does it ensure its national security interests?

A Russian National Technology Initiative (NTI) document in 2016 saw the world as being increasingly divided into closed “economic-trade” blocs formed on the basis of a combination of economic and political issues. It argued that these blocs, or “alliances, aim to develop and retain production value added chains” that are protected from outside competition by ensuring that their rules and standards become the norm. The NTI document went on to say that countries and companies outside these blocs/alliances and their value-added chains cannot break into them because the technological standards have already been set to their disadvantage.

Thus, according to the document, the NTI was given the goal of making Russia “one of the ‘big three’ major technological states by 2035, and have its own high-tech specialization in the global chain of creating additional value”. To achieve this, Russia will need its own bloc/alliance, or to participate in others in such a way that it becomes a leader in “developing and confirming international technical standards”.

President Putin, in his address to the St. Petersburg economic forum on 17 June 2016, said: “Today we see attempts to secure or even monopolize the benefits of new generation technologies. This, I think, is the motive behind the creation of restricted areas with regulatory barriers to reduce the cross-flow of breakthrough technologies to other regions of the world with fairly tight control over cooperation chains for maximum gain from technological advances.”

The then US Secretary of State played up the security aspects of such economic-trade blocs: “I have worked from day one to emphasize that foreign policy is economic policy and economic policy is foreign policy. Without a doubt, these trade agreements are at the center of defending our strategic interests, deepening our diplomatic relationships, strengthening our national security, and reinforcing our leadership across the globe.” “Even as we seek to complete TTIP and strengthen our bonds across one ocean, we know that our future prosperity and security will also rest on America’s role as a Pacific power. Central to that effort is the adoption of TPP (the Trans-Pacific Partnership).”

However, given the prospect of Brexit and the rise of Trump as an economic nationalist, such blocs seemed very unlikely when I first wrote about the NTI in 2016. Since then, Trump’s strident America First approach to the economy, his abandonment of the TPP, and his lack of interest in a US role in international security issues would seem to have confirmed my earlier view.

Nevertheless, “Western” concern about advances in Chinese technology, the way it is being acquired (allegations of IP theft and heavy-handed treatment of companies seeking to invest in China), and the way it is being used (Xinjiang) seems to be leading to at least partial technology blocs — with the possibility of broadening to aspects of international trade and investment.

Whereas the NTI idea of economic / trade blocs was largely based on the political and economic consequences of growing global value-added chains in high-tech and Russia’s need to be part of this trend, we may now be in a situation where such economic / trade blocs will be formed by a perceived urgent need to tear existing high-tech value-added chains apart in the name of national security and create new ones. National Security is now very much in the driver’s seat!

Putin’s point about “attempts to secure or even monopolize the benefits of new generation technologies” remains valid, as does the issue — in a different form — of what bloc if any can or should Russia join.

Concerns about the security aspects of Huawei telecommunications equipment in the UK led to the establishment of the Huawei Cyber Security Evaluation Centre (HCSEC). While Huawei pays the costs of the centre, it has no control over its operation. An HCSEC Oversight Board was established in 2014. Its fourth report, in 2018, concluded that:

“5.2 The key conclusions from the Board’s fourth year of work are:

i. It is evident that HCSEC continues to provide unique, world-class cyber security expertise and technical assurance of sufficient scope and quality as to be appropriate for the current stage in the assurance framework around Huawei in the UK;

ii. However, Huawei’s processes continue to fall short of industry good practice and make it difficult to provide long term assurance. The lack of progress in remediating these is disappointing. NCSC and Huawei are working with the network operators to develop a long-term solution, regarding the lack of lifecycle management around third party components, a new strategic risk to the UK telecommunications networks. Significant work will be required to remediate this issue and provide interim risk management.

iii. The HCSEC Oversight Board is assured that the Ernst & Young Audit Report provides important, external reassurance that the arrangements for HCSEC’s operational independence from Huawei Headquarters is operating robustly and effectively, and in a manner consistent with the 2010 arrangements between the Government and the company. The issue identified was rated as low risk and two further advisory issues were identified.

5.3 Overall therefore, the Oversight Board has concluded that in the year 2017-2018, HCSEC fulfilled its obligations in respect of the provision of security and engineering assurance artefacts to the NCSC and the UK operators as part of the strategy to manage risks to UK national security from Huawei’s involvement in the UK’s critical networks. However, the execution of the strategy exposed a number of risks which will need significant additional work and management. The Oversight Board will need to pay attention to these issues.”

The qualified nature of the HCSEC reports has led some commentators to offer strong support for the Australian ban on Huawei participation in Australian 5G. This is particularly the case with the ASPI International Cyber Policy Centre. The Centre’s Tom Uren says that the contents of the four HCSEC Oversight Board annual reports (2015, 2016, 2017 and 2018) “show that it is very difficult indeed” to “assess products to make sure they won’t be used to spy on us”.

However, the underlying issue is broader than Huawei and 5G. A 2018 book by Olav Lysne concludes that:

“Industrialized nation states are currently facing an almost impossible dilemma. On one hand, the critical functions of their societies, such as the water supply, the power supply, transportation, healthcare, and phone and messaging services, are built on top of a huge distributed digital infrastructure. On the other hand, equipment for the same infrastructure is made of components constructed in countries or by companies that are inherently not trusted. In this book, we have demonstrated that verifying the functionality of these components is not feasible given the current state of the art. The security implications of this are enormous. The critical functions of society mentioned above are so instrumental to our well-being that threats to their integrity also threaten the integrity of entire nations. The procurement of electronic equipment for national infrastructures therefore represents serious exposure to risk and decisions on whom to buy equipment from should be treated accordingly. The problem also has an industrial dimension, in that companies fearing industrial espionage or sabotage should be cautious in choosing from whom to buy electronic components and equipment. Honest providers of equipment and components see this problem from another angle. Large international companies have been shut out of entire markets because of allegations that their equipment cannot be trusted. For them, the problem is stated differently: How can they prove that the equipment they sell does not have hidden malicious functionality? We have seen throughout the chapters of this book that we are currently far from being able to solve the problem from that angle as well. This observation implies that our problem is not only a question of security but also a question of impediments to free trade. Although difficult, the question of how to build verifiable trust in electronic equipment remains important and its importance shows every sign of growing.”

The basic technical reason for Australia banning Huawei has been put forward by the head of its Signals Directorate: “5G is not just fast data, it is also high-density connection of devices – human to human, human to machine and machine to machine – and finally it is much lower signal latency or speed of response. Historically, we have protected the sensitive information and functions at the core of our telecommunications networks by confining our high-risk vendors to the edge of our networks. But the distinction between core and edge collapses in 5G networks. That means that a potential threat anywhere in the network will be a threat to the whole network. In consultation with operators and vendors, we worked hard this year to see if there were ways to protect our 5G networks if high-risk vendor equipment was present anywhere in these networks. At the end of this process, my advice was to exclude high-risk vendors from the entirety of evolving 5G networks.”

The technical issues of 5G are very complex, and in no country is there universal agreement about how the networks should be introduced and operated. International technical standards are still being developed. Initially, many basic 5G features will in most cases be delivered by upgraded 4G infrastructure, but getting the most out of 5G, in terms of speed and capacity, will require significant new investment in telecommunications infrastructure.

A controversial US proposal to build secure 5G as a “single, inherently protected, information transportation super highway” was produced by members of the US security establishment in early 2018 and found its way into the public arena. The document says that presently “data traverses cyberspace through a patchwork transport layer constructed through an evolutionary process as technology matured”. “Measures to secure and protect data and information result in an ‘overhead’ that affects network performance – they reduce throughput, increase latency, and result in an inherently inefficient and unreliable construct. Additionally, the framework under which access and services are allocated is suboptimal, yielding incomplete and redundant networks. Without a concerted effort to reframe and reimagine the information space, America will continue on the same trajectory – chasing cyber adversaries in an information environment where security is scarce.”

It goes on to say that “the advent of ‘secure’ network technology and the move to 5G presents an opportunity to create a completely new framework.” “Whoever leads in technology and market share for 5G development will have a tremendous advantage towards ushering in the massive Internet of Things, machine learning, AI, and thus the commanding heights of the information domain.” “The transformative nature of 5G is its ability to enable the massive Internet of Things.” “Using efforts like China Manufacturing 2025 (CM2025) and the 13th Five Year Plan, China has assembled the basic components required for winning the AI arms race.”

While the proposal for such extensive government involvement in US 5G infrastructure seems to have been rejected, it does indicate the level of attention being focused on the issue.

The Russian Ministry of Communications is advocating that private Russian telecommunications companies share much of the 5G infrastructure, which may to some degree allow a more secure network to be built. However, this does not solve the problem of where to source the equipment.

What should Russia do if the concerns about Huawei and Chinese technology more generally start to lead to the formation of an anti-Chinese technology based economic bloc?

There is little reason to believe Russia will be any better than Western countries in evaluating the security related aspects of Chinese technology, and there would be a strong case for Russia to follow the lead of Australia, the UK, USA etc. However, there would be several arguments against such a course of action.

Firstly, Russia will not want to jeopardize its present good political relationship with China. Apart from energy sales, the economic relationship between Russia and China is not strong; however, geography means that Russia has a huge stake in the political relationship.

Secondly, if it is possible for Huawei and other Chinese companies to do the harmful things that are claimed, then presumably non-Chinese suppliers could do the same to Russia at the request (or demand) of their own countries’ security agencies. While Western commentators make much of China’s June 2017 National Intelligence Law, which obliges “all organizations and citizens” to “support, cooperate and collaborate in national intelligence work”, Western high-tech companies would almost certainly do the same when it comes to Russia, given Russia’s very poor image in those countries and the perceived Russian threat to them.

Thirdly, at a purely technical level there is nothing to suggest that Russia could build 5G infrastructure without importing most of the equipment. While Russia has a solid reputation in the software field, Russian manufacturing capacity and quality is not high. Russia’s efforts to promote the high-tech sector from the top have not been particularly successful. Even China is very dependent on crucial imported 5G components.

Fourthly, my September 2016 report on the NTI suggested that Russia needed to put more emphasis on using available digital technology rather than trying to develop new leading-edge products. In early 2017, the Russian government announced its “Strategy for the Development of the Information Society in the Russian Federation for 2017-2030”. While much can be done using existing 4G infrastructure, a good 5G network will be necessary well before 2030 to maximize the benefits of the strategy as well as to take best advantage of any NTI successes.

As things now stand, Russia is likely to use Huawei (and other Chinese) hardware while attempting to ensure that Russian software is used wherever possible. However, as already noted, this will be no easy task.

It is difficult to avoid the conclusion that, when it comes to 5G and national security, Russia is between a rock and a hard place. It has neither the 5G infrastructure manufacturing capacity of the US and China, nor any real friends capable of helping it.

Visiting Professor, School of Asian Studies, Higher School of Economics National Research University, Moscow, where I teach the Master’s Degree module “Russia’s Asian Foreign Policy” (covering Russian relations with all Asian countries). Also Professor of International Business, Baikal School of BRICS, Irkutsk National Research Technical University, where I teach mainly Chinese students, with a particular emphasis on the technology sector.

Science & Technology

The World After COVID-19: Does Transparent Mean Healthy?

Maria Gurova

The insanity of despair and primaeval fear for one’s health (and today, no matter how ironic and paradoxical it sounds, this may be the state of mind that brings many of us together) will most likely give rise to a new global formation that will then become a global reality. It is still hard to say what it will be like exactly, but it is clear that the world will become more transparent. And I do not mean in the usual sense of anti-corruption measures, but rather in the original sense of the word – the world will become more “see-through.” Our temperature will be monitored. Smartphones with built-in sensors will collect precise data not only about our clicks and likes, but about our physical and possibly emotional state. The world and the people that live in it will undergo a number of changes once the current coronavirus pandemic is over, and many of those changes will be accompanied by a leap in technological development.

For many experts and scientists, the events unfolding today are reminiscent of what happened in 2003, when the SARS virus presented the first large-scale threat to human health of the new millennium. Unlike today’s unbidden crowned guest, SARS was not so virulent, yet it caused major concern in a number of countries, particularly in East and Southeast Asia. Hence the hard lessons learned in Singapore and, in part, Taiwan, where governments have for almost two decades now been successfully using systems of mass surveillance of citizens’ everyday lives, systems that have received the approval of the people. In Singapore, this surveillance is part of the national cybersecurity strategy and allows the physical condition of large numbers of people to be monitored, thereby preventing diseases from spreading and escalating into epidemics. This, combined with the ability to enforce extremely strict quarantine measures and to carry out mass testing instead of the selective testing currently practised in Europe and Russia, has allowed Singapore and Taiwan to contain the spread of the disease and prevent it from turning into an epidemic. Of course, their compact territories have certainly played a part here. Other countries, for instance Israel and Russia, have already followed this example and approved monitoring systems that use mobile data and geolocation to trace the movements of people with confirmed infections. We have to assume that one of the first steps after the COVID-19 pandemic will be to embed such surveillance systems even deeper into public life. Most likely, this step will be met with approval rather than protests and street rallies.

I would not wish to speak for everyone, but it seems to me that the choice between health and privacy is a no-brainer. The pandemic will end, and what the world emerging from it will look like is an interesting question worthy of discussion. As the Deputy Minister of Health of Iran, who himself contracted COVID-19, reminded us, the coronavirus came to us from a relatively safe country and, contrary to recent rumours, it does not only affect those of Asian heritage: quite the opposite, it is very democratic in its choice of victims, which is to say it affects everyone.

Hence the question: by self-isolating, we are buying doctors and scientists time to find a cure for the virus and test vaccines, but what are we going to do in the event of a new pandemic? Here, humanity faces two choices. The first is to give free rein to nationalists, who are already jubilant over the failures of globalization and the inability of liberal democratic countries to shut their borders to viruses and undesirable immigrants. The second is to move to a radically new formation in which we become even more mutually dependent and more open to our societies and governments, because this will be a mandatory condition for moving about and doing business, and perhaps even for starting a family. Personal secrets will become a thing of the past, a fairy tale we tell our grandchildren. In fact, the issue is far more serious, with multiple ramifications and ensuing consequences.

Following the COVID-19 pandemic, consensus and mutual understanding between states will be more relevant than ever, especially since the problems of disarmament, nuclear warheads, defence budgets propped up by taxpayer money, international sanctions and the like, which appeared and developed in the eras of Nikita Khrushchev and Ronald Reagan, may finally recede into the background. Instead, world leaders, especially given that most of them are at an age that makes them particularly vulnerable to the coronavirus, should start thinking about new plans for investing in healthcare, in the socioeconomic aspects of life and in technological development, because these will be intrinsically linked with the other aspects of state improvement mentioned above. Will this represent a new social contract between the government, the public and the citizen? Probably. Will it represent a new pact between governments? One would hope so. Perhaps the coronavirus pandemic will break down the old world and give rise to the new one that so many expected to appear in the 1990s. What awaited us back then, however, was proxy wars and confrontation through sanctions that split societies from within and raised barriers between states. Maybe this new world will be one where surveillance cameras and sensors first prompt a feeling of relief and then become an integral part of the picture. Perhaps it will be a world in which life without external surveillance and control appears unsafe and unnatural.

From our partner RIAC

Science & Technology

Future Goals in the AI Race: Explainable AI and Transfer Learning

Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to “weak AI,” which has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and that intelligence operates according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily drawing closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic or probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. On the other hand, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There “Holes” in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. It was the combination of these factors that made it possible to train and configure neural network algorithms to make a quantitative leap, as well as to provide a cost-effective solution to a broad range of applied problems relating to recognition, classification and prediction. The biggest successes here have come from systems based on “deep learning” networks that build on the idea of the “perceptron” suggested 60 years ago by Frank Rosenblatt. However, achievements in the use of neural networks have also uncovered a range of problems that cannot be solved using existing neural network methods.
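
To make the original building block concrete, here is a minimal, purely illustrative sketch of a Rosenblatt-style perceptron trained on a toy, linearly separable dataset (the data is invented; NumPy only):

```python
import numpy as np

# Toy, linearly separable data: two 2-D clusters labelled -1 and +1 (hypothetical example).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w = np.zeros(2)   # weights
b = 0.0           # bias

# Classic perceptron rule: update the weights only on misclassified points.
for epoch in range(20):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:      # misclassified (or on the boundary)
            w += yi * xi                # nudge the decision boundary toward the point
            b += yi

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}")
```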

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.
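
One common, if partial, workaround is post-hoc explanation. The sketch below estimates permutation feature importance for an arbitrary black-box classifier by measuring how much accuracy drops when each input feature is scrambled; the "black box" and the data are hypothetical stand-ins:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Works with any black-box `predict(X) -> labels`; a larger score means the feature matters more."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])          # destroy the information carried by feature j
            drops.append(baseline - np.mean(predict(X_perm) == y))
        scores[j] = np.mean(drops)             # average accuracy drop caused by scrambling feature j
    return scores

# Example with a trivial "black box" that only looks at feature 0 (hypothetical).
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
black_box = lambda A: (A[:, 0] > 0).astype(int)
print(permutation_importance(black_box, X, y))   # feature 0 should dominate
```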

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
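
A minimal sketch of the best-known such attack, the fast gradient sign method (FGSM), assuming an arbitrary (here untrained) PyTorch classifier as a stand-in for a deployed model:

```python
import torch
import torch.nn as nn

# Hypothetical setup: a differentiable classifier and one input with a known true label.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.rand(1, 784)              # stand-in for a flattened image with pixel values in [0, 1]
label = torch.tensor([3])           # its true class (hypothetical)

# FGSM: take one step in the input direction that most increases the loss.
x_adv = x.clone().requires_grad_(True)
loss = nn.CrossEntropyLoss()(model(x_adv), label)
loss.backward()

epsilon = 0.05                      # perturbation budget; nearly imperceptible for real images
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```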

Fourth, the inadequacy of the information capacity and parameters of the neural network to the image of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. This is seen when a system that had first been trained to identify situations in a set of contexts and then fine-tuned to recognize them in a new set of contexts may lose the ability to recognize them in the old set. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
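
The effect is easy to reproduce on toy data. In the deliberately exaggerated sketch below, a small network is trained on "context A" and then fine-tuned only on a conflicting "context B"; accuracy on the first context typically collapses (the tasks, data and network are all hypothetical):

```python
import torch
import torch.nn as nn

def make_task(context, rule_sign, n=1000):
    """Toy 'contexts': the same 2-D inputs shifted along dimension 1, with opposite labelling rules."""
    x = torch.randn(n, 2)
    x[:, 1] += context                       # context A sits around -3, context B around +3
    y = (rule_sign * x[:, 0] > 0).long()     # the labelling rule flips between contexts
    return x, y

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

xa, ya = make_task(context=-3.0, rule_sign=+1)    # "urban pedestrians"
xb, yb = make_task(context=+3.0, rule_sign=-1)    # "rural cows and dogs"

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
train(model, xa, ya)
print("task A after training on A:", accuracy(model, xa, ya))
train(model, xb, yb)                               # fine-tune on the new context only
print("task A after training on B:", accuracy(model, xa, ya))   # typically collapses
print("task B after training on B:", accuracy(model, xb, yb))
```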

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.

Explainable AI allows human beings (the users of the AI system) to understand the reasons why the system makes its decisions and to approve them if they are correct, or to rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
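
One simple, and admittedly partial, way to extract knowledge in a human-verifiable form is to distil a black-box model into an interpretable surrogate. The sketch below fits a shallow decision tree to mimic a hypothetical opaque classifier and prints the resulting rules (scikit-learn assumed; all data and the "black box" are invented):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))

# Stand-in for an opaque model whose internals we cannot inspect (hypothetical rule).
black_box_predict = lambda A: ((A[:, 0] > 0) & (A[:, 2] < 1)).astype(int)

# Train a shallow, readable surrogate on the black box's own outputs.
surrogate = DecisionTreeClassifier(max_depth=2)
surrogate.fit(X, black_box_predict(X))

print("fidelity to black box:", surrogate.score(X, black_box_predict(X)))
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))   # human-readable decision rules
```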

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is the prerequisite for making AI applications that will not learn by trial and error or through the use of a “training set,” but can be initialized with a base of expert-derived knowledge and rules – when the cost of an error is too high or when the training sample is too small.
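
A minimal sketch of the standard recipe on toy data: learn a representation on a data-rich source task, freeze it, and fit only a new output head on a small target sample (all tasks, sizes and rules here are invented):

```python
import torch
import torch.nn as nn

def train(model, x, y, params, epochs=200):
    opt = torch.optim.Adam(params, lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Source task: plenty of labelled toy data.
x_src = torch.randn(2000, 10)
y_src = (x_src[:, :5].sum(dim=1) > 0).long()

# Target task: a related but different rule, and only a small labelled sample.
x_tgt = torch.randn(50, 10)
y_tgt = (x_tgt[:, :5].sum(dim=1) > 1.0).long()

features = nn.Sequential(nn.Linear(10, 32), nn.ReLU())   # shared representation
head_src = nn.Linear(32, 2)
train(nn.Sequential(features, head_src), x_src, y_src,
      params=list(features.parameters()) + list(head_src.parameters()))

# Transfer: freeze the learned representation, fit only a new head on the small target set.
for p in features.parameters():
    p.requires_grad_(False)
head_tgt = nn.Linear(32, 2)
target_model = nn.Sequential(features, head_tgt)
train(target_model, x_tgt, y_tgt, params=head_tgt.parameters())

print("target accuracy:",
      (target_model(x_tgt).argmax(dim=1) == y_tgt).float().mean().item())
```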

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, while source and target data are represented not by single values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
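
The underlying idea can be shown without any framework: represent an unknown quantity as a distribution and update it with Bayes' rule. The sketch below infers a coin's bias on a grid of candidate values; probabilistic programming languages such as Pyro or PyMC automate this kind of inference for arbitrary programs (the observed counts are hypothetical):

```python
import numpy as np

# Prior belief about the coin's bias: uniform over a grid of candidate values.
bias = np.linspace(0.01, 0.99, 99)
prior = np.full_like(bias, 1.0 / len(bias))

# Observed data (hypothetical): 7 heads out of 10 tosses.
heads, tosses = 7, 10

# Bayes' rule on the grid: posterior is proportional to likelihood times prior.
likelihood = bias**heads * (1 - bias)**(tosses - heads)
posterior = likelihood * prior
posterior /= posterior.sum()

print("posterior mean bias:", np.sum(bias * posterior))
print("P(bias > 0.5):", posterior[bias > 0.5].sum())
```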

Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
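
A toy sketch of this division of labour: a neural scorer (an untrained stand-in here) proposes label probabilities in "System 1" style, while explicit, human-readable rules veto hypotheses that violate known constraints in "System 2" style. The labels, rules and context are all hypothetical:

```python
import torch
import torch.nn as nn

LABELS = ["pedestrian", "cow", "dog", "tree"]

# "System 1": a neural scorer mapping features to label probabilities (untrained stand-in).
scorer = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                       nn.Linear(16, len(LABELS)), nn.Softmax(dim=1))

# "System 2": explicit, human-readable constraints over scene context (hypothetical rules).
def allowed(label, context):
    if label == "cow" and context == "motorway":
        return False          # rule out implausible hypotheses regardless of the network's score
    return True

def classify(features, context):
    probs = scorer(features)[0]
    candidates = [(p.item(), lab) for p, lab in zip(probs, LABELS) if allowed(lab, context)]
    return max(candidates)[1]         # best-scoring label that survives the symbolic filter

print(classify(torch.randn(1, 8), context="motorway"))
```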

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.

From our partner RIAC

Science & Technology

How Can We, as Strategists, Compete with Sentient Artificial Intelligence?

The universe is made up of humans, stars, galaxies, the Milky Way, black holes and other objects, all linked and connected with each other. Everything in the universe has its own level of mechanism and complexity. Humans are very complex creatures, and man-made objects can be even more complex and difficult to understand. With the passage of time, human beings have evolved and become more technologically advanced. Human inventions have reached a level of sophistication that sets up a competition between machines and humans themselves. Humans are the most intelligent mortals on Earth, but they are now being challenged by artificial intelligence, which was invented as a helping hand to increase human efficiency. It is important to ask whether human intelligence was not enough to survive in a fast-growing technological world, or whether man-made intelligence has advanced so far that humans now find themselves in competition with machines, with human intelligence challenged by artificial intelligence. If there is such a competition, how can strategists compete with artificial intelligence? To answer these questions, we first need to know what artificial intelligence actually is.

The term artificial intelligence was proposed by John McCarthy in 1955, and he characterized it in 1956 at the Dartmouth Conference, the first conference on artificial intelligence: every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it; an attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. There are seven main features of artificial intelligence, as follows:

“Simulating the higher functions of the brain

Programming a computer to use general language

Arranging hypothetical neurons so that they can form concepts

A way to determine and measure problem complexity

Self-improvement

Abstraction: the quality of dealing with ideas rather than events

Creativity and randomness”

Another definition is given by Elaine Rich, who states that artificial intelligence is about making computers do things that are currently done better by humans; on this view, every such computer is an artificial intelligence system. Jack Copeland states that the important elements of artificial intelligence are: generalization learning, which enables the learner to perform in situations beyond those previously experienced; reasoning, that is, drawing inferences appropriately; problem solving, meaning that, given data, the system can work out results; and perception, meaning analysing a scanned environment and exploring the features of, and relationships between, objects, with self-driving cars as an example.

Artificial intelligence is very common in developed nations, while developing nations are adopting it according to their resources. The question now is how artificial intelligence is actually being utilized in these fields. Its use is elaborated below, with examples from related fields, for better understanding.

The world is becoming more advanced and technologies are improving, and in this situation states have become more conscious of their security. Many states are incorporating AI approaches into their defence systems, and some are already using artificially intelligent technologies. On 11 May 2017, Dan Coats, the Director of US National Intelligence, delivered testimony to the US Congress on his annual Worldwide Threat Assessment. In the publicly released document, he said that AI is advancing computational capabilities that benefit the economy, but that those advances also enable new military capabilities for America’s adversaries. Meanwhile, the US Department of Defense (DOD) is working on such systems. Project Maven, for example, also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), is intended to accelerate the integration of big data, machine learning and AI into US military capabilities. While the initial focus of AWCFT is on computer-vision algorithms for object detection and classification, it will bring together all existing algorithm-based technology initiatives associated with US defence intelligence. Command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) systems are reaching new heights of efficiency, enabling data collection and processing at unprecedented scale and speed. When the pattern-recognition algorithms being developed in China, Russia, the UK, the US and elsewhere are coupled with precision weapons systems, they will further increase the tactical advantage of unmanned aerial vehicles (UAVs) and other remotely operated platforms. China’s defence sector has made breakthroughs in UAV ‘swarming’ technology, including a demonstration of 1,000 EHang UAVs flying in formation at the Guangzhou air show in February 2017. Potential scenarios could include competing UAV swarms trying to disrupt each other’s C4ISR networks while simultaneously engaging dynamic targets.

Humans, the most intelligent creatures on Earth, created artificial intelligence technology, and in some tasks that technology now works faster than we do. So the big question is whether humans can compete with artificial intelligence in the near future. AI already seems to be replacing humans in many fields of life, and we should ask what the situation will be in a decade or two. An alarming competition has started between humans and AI. Elon Musk has called AI a “demon”, and the physicist Stephen Hawking also warned that in future artificial intelligence could prove a bad omen for humanity. The signs are clear, and we can already see humans being displaced; in some respects we are losing the competition. But it is also clear that a creator can be a destroyer. As strategists, we must have counter-strategies and fallback plans to manage this competition. The edge humans have over AI is the ability to think, and it is we who build this capacity into AI-integrated technologies, so we must set the limits. Otherwise this hazard could become a grave threat in the future, and humanity could conceivably face extinction.
