The format of the next-generation cellular networks, which is commonly known as 5G, is considered by a vast majority of experts as one of the “key technologies in the decades to come.” It is seen as a major element in ensuring the leading positions of a country at a new stage of the race for the most favorable national development status. What are the political features of this phase of technology rivalry?
The International Telecommunication Union (ITU) plans to approve the final version of the 5G standard by early next year. However, experts around the globe have been developing the “general principles and properties” of fifth-generation cellular communication technologies for more than five years. The current fourth-generation technology guarantees a maximum data transfer rate of about 150 Mbit/s; 5G promises speeds of more than 1 Gbit/s. By the middle of this year, “China, South Korea, Japan, and most EU countries had chosen the frequency spectrum that will be used in fifth-generation networks.” A heated debate has been under way in the United States, which has already held several frequency auctions for 5G. According to Deputy Prime Minister Maxim Akimov, who oversees the development of digital technologies in the Russian Federation, the launch of 5G is “a matter of survival if we do not want to lose technological leadership”.
Experts have no doubt that fifth-generation mobile networks will lead to “drastic changes” in many areas of life. While present-day cellular standards are designed to exchange voice traffic and data between terminal communication devices, 5G technology is intended to create an environment in which billions of different devices interact with each other in real time, while the speed and reliability of connections increase many times over. The 5G environment should become a kind of communicative digital “ocean” into which an overwhelming majority of people and economic entities will plunge in the coming decades. This will fundamentally change industry, global supply chains, defense technology, agriculture, transport, medicine, approaches to managing national infrastructure and, in general, the quality of life of billions of people, by becoming part and parcel of everyday life.
One of the most promising and, at the same time, most contentious areas, which will undoubtedly receive a strong impetus from the introduction of 5G, is the so-called “Internet of Things”. Microchips will be installed in almost all industrial and consumer products, transmitting all kinds of information and capable of receiving control commands from the outside. Critics fear that the world is moving in the direction of “surveillance capitalism”. Government and business will have to address many issues, The Economist points out, including digital ownership, big data, surveillance, competition monitoring, and security. The standards for receiving, transmitting, processing and storing data promise to become a battlefield for both private companies and government organizations. Those who control their development and implementation will enjoy significant, if not crucial, advantages.
“When an invention becomes part of infrastructure, it also becomes part of political relations and undergoes both engineering and political changes”. Therefore, the main political battles are currently unfolding over the choice of frequency bands for building 5G networks, and over the suppliers. Pessimists, including in Russia, fear that, in the worst-case scenario, the priorities of maintaining sovereignty and security may lead to further fragmentation not only of the Internet, but also of the cellular communications space. Differences in the frequency bands used for the development of 5G networks may cause conflicts between telecom operators and refusals to conclude agreements on international roaming. Optimists counter that, in the absence of universally approved 5G standards, manufacturers of client equipment, from terminals to smartphones, are forced to provide for operation across the widest possible frequency range.
At present, observers’ attention is focused on two conflicting approaches to the implementation of 5G technologies: the American one, on the one hand, and the Chinese one, on the other. US President Donald Trump regularly makes high-profile statements about the need to ensure America’s top position in promoting 5G technologies. Trump has come out strongly against the purchase of Chinese-made 5G technologies by the United States, and Washington is urging other countries to refrain from doing so as well. Otherwise, it has warned, it may scale back cooperation with its close allies in the field of telecommunications, including the exchange of intelligence data. Given the situation, the US government agencies that deal with technology development have, on the one hand, already managed to distribute the bulk of frequencies for the introduction of 5G earlier than other countries. On the other hand, Leonid Kovachich of the Carnegie Moscow Center says, “the United States is working to create networks in the ultra-high frequency range.” Such frequencies, above 6 GHz, provide “the most tangible capacity growth against the existing ones.” However, due to their specific technological characteristics, they require an extremely high density of transceiver infrastructure. Thus, America has picked the most expensive option, which, according to critics, including specialists from the US Federal Communications Commission (FCC), is “extremely prodigal”. In addition, this approach “will further accentuate the technological and, as a result, social gap between large cities and sparsely populated regions in the United States”, and will also significantly weaken the ability of American cellular equipment manufacturers and mobile operators to compete successfully with China in other countries of the world.
China’s authorities have been trying to follow major global trends, including in the choice of frequencies for 5G. Meanwhile, as experts at Oxford Information Labs claim, China is operating in several areas at once with the aim of at least significantly reducing the competitive edge of Western companies in implementing 5G projects. Chinese companies are encouraged by the fact that they already control “more than a third of all 5G patents in the world”, and by the fairly limited number of potential developers and manufacturers of 5G technology. The West suspects official Beijing of striving to create a de facto corporate-technological “vertical”, with non-governmental organizations pursuing a policy determined by the country’s political and military leadership. Evidence of such a policy includes the pragmatic readiness of the Chinese authorities to provide substantial government loans and subsidies for an early rollout of domestically developed technologies, including abroad, by bringing political, diplomatic, financial and economic influence to bear on the international community.
Amid the growing confrontation between the United States and China, European countries voice different views. The German government, Die Zeit wrote in November, still expresses its readiness to “use the key technology of the coming decades – data transmission through the new 5G standard – using the technology of the Chinese company Huawei.” However, according to French President Emmanuel Macron, such an approach “is naive.” Macron is convinced that the infrastructure of the future, which is represented by 5G, should consolidate the sovereignty of Europe, not weaken it. What is meant is data security, without which, Europeans say, it makes no sense to talk about security policy as such. In his opinion, “it is necessary to abandon” Chinese-manufactured technologies “in favor of European analogues”.
Russia is balancing between the fears of alarmists over the “cumbersome nature” of regulators and controllers that “jeopardize” the country’s technological development under the pretext of concern “for national security”, on the one hand, and, on the other, the belief of optimists who recall that the 4G and LTE cellular standards were introduced in a similarly protracted and “bureaucratic” way. A number of representatives of the Russian telecommunications industry have expressed concern that a “transition to unconventional frequencies or even an isolated Russian 5G system will delay the creation of modern networks for 5-7 years”. Their opponents are confident that domestic operators will in any case need a few more years to launch 5G networks covering vast areas: “There are no economic scenarios for 5G yet; no one in the world knows how to earn money on such networks, and the existing networks all but prove this in practice. It makes no sense to strive to launch 5G networks at any cost. Just as there is no need to clear the frequencies now – nobody will use them.”
Whatever the case, the undisguised attempts by the United States in recent years to halt the development of Chinese high-tech companies, or even to ruin them, “have demonstrated to most countries that independence in IT technology is crucial.” A possible scenario envisages the prohibition or restriction of products or services provided by American companies in other countries, or at least in such a vital sector as public procurement. For many states it is becoming critical to create and develop a domestic IT sector, IT services and software. All this may culminate in a “process of disintegration” of the global IT market and the division of countries into blocs and coalitions focused on “their own” software manufacturers and their own technological standards.
The problem is to ensure sovereignty without having to face technological or informational isolation. “Information isolationism in the era of digital communication is usually a characteristic of rogue states.” While easing the burden of confronting external pressure, this approach deprives the country of the opportunity to form its own international agenda, believes Dmitry Evstafiev of the Higher School of Economics. For his part, Igor Ashmanov, CEO of Ashmanov & Partners, remarks: “Absence of information sovereignty is an absolutely toxic thing. Speaking about digital sovereignty from a technological point of view, it is necessary to underscore the importance of creating domestic technologies and companies. Borrowing something without giving it any thought is not the right thing to do.” In turn, Ilya Massukh, Director of the Autonomous Non-Profit Organization “Center for Import Substitution in ICT”, says that technological development may result in the loss of identity. Consequently, nations that are striving to maintain sovereignty cannot afford to be completely dependent on foreign “suppliers of technological products”.
Russia, like most countries of the world, has yet to find the “golden mean” that would enable it to use the advantages of the new technological reality with maximum efficiency in terms of national development. However, it should do so without turning the process of adopting new technologies into a driving force of self-isolation, which could easily reduce to zero a significant share of the benefits offered by next-generation communication technologies.
From our partner International Affairs
Future Goals in the AI Race: Explainable AI and Transfer Learning
Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”
Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.
In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to “weak AI,” which has become a mundane topic in recent years.
As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.
Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.
Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and that AI operates according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily drawing closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic or probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. On the other hand, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.
Are There “Holes” in Neural Networks?
In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline of computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. The breakthrough was brought about by a combination of these factors that made it possible to train and configure neural network algorithms to make a quantitative leap, as well as to provide a cost-effective solution to a broad range of applied problems relating to recognition, classification and prediction. The biggest successes here have been brought about by systems based on “deep learning” networks that build on the idea of the “perceptron” suggested 60 years ago by Frank Rosenblatt. However, achievements in the use of neural networks also uncovered a range of problems that cannot be solved using existing neural network methods.
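The perceptron idea that underlies these deep networks can be shown in miniature. Below is a toy sketch (pure Python; the data and learning rate are invented for illustration, not taken from any system discussed here) of a single Rosenblatt-style perceptron learning the logical AND function:

```python
# Minimal Rosenblatt-style perceptron learning logical AND.
# Toy data and hyperparameters are purely illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire if the weighted sum exceeds zero
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Rosenblatt update rule: nudge weights toward the target
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # learns AND: [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and replaces the step rule with differentiable activations, but the core loop of "predict, compare, adjust" is the same.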
First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.
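By contrast, even a very simple transparent model shows what explainability buys. The sketch below uses a hypothetical linear loan-scoring model (all feature names, weights and the applicant are invented for illustration) in which per-feature contributions sum exactly to the final score, so every refusal comes with a faithful, ranked list of reasons:

```python
# Sketch of one simple explainability technique: per-feature contributions
# of a linear scoring model. All names and numbers are hypothetical.

def explain_decision(weights, bias, applicant):
    # Contribution of each feature = weight * value; together with the
    # bias they sum exactly to the score, so the explanation is faithful.
    contributions = {name: weights[name] * value
                     for name, value in applicant.items()}
    score = bias + sum(contributions.values())
    decision = "approve" if score >= 0 else "refuse"
    # Sort so the strongest reasons for/against appear first
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, reasons

weights = {"income": 0.5, "debt": -0.8, "late_payments": -1.5}
applicant = {"income": 4.0, "debt": 3.0, "late_payments": 1.0}
decision, score, reasons = explain_decision(weights, 1.0, applicant)
print(decision)        # refuse
print(reasons[0][0])   # debt  (the single strongest factor in the refusal)
```

A deep network offers no such decomposition out of the box, which is exactly the gap that "explainable AI" research tries to close.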
Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.
Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
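The mechanics of such attacks can be illustrated on a toy linear classifier, in the spirit of the well-known "fast gradient sign" method (the weights, labels and input below are invented for illustration; real attacks target deep networks, but the principle is the same):

```python
# Adversarial perturbation of a toy linear classifier: shift every input
# coordinate slightly against the gradient of the score to flip the label.
# Weights and input values are hypothetical.

def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return ("cat" if score > 0 else "dog"), score

w = [0.6, -0.4, 0.3]        # hypothetical trained weights
b = -0.1
x = [1.0, 0.5, 0.2]         # an input the model classifies correctly

label, score = classify(w, b, x)

# For a linear model the gradient of the score w.r.t. the input is w,
# so subtracting epsilon * sign(w_i) maximally lowers the score.
eps = 0.35
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

adv_label, adv_score = classify(w, b, x_adv)
print(label, adv_label)     # cat dog - a small shift flips the classification
```

In a deep network the gradient is obtained by backpropagation rather than read off the weights, but the attacker exploits the same brittleness described above.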
Fourth, a mismatch between the information capacity and parameters of a neural network and the picture of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. This is seen when a system that had first been trained to identify situations in one set of contexts and was then fine-tuned to recognize them in a new set of contexts loses the ability to recognize them in the old set. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
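Catastrophic forgetting is easy to reproduce even in a one-neuron model. In the synthetic sketch below (all data is invented; task B is deliberately chosen to conflict with task A), a linear unit masters task A, is then fine-tuned only on task B, and loses task A entirely:

```python
# Toy demonstration of catastrophic forgetting: a single linear unit is
# trained on task A, then fine-tuned only on task B, and forgets task A.

def sgd(samples, w, b, epochs=30, lr=0.2):
    for _ in range(epochs):
        for x, t in samples:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = t - out
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr*err
    return w, b

def accuracy(samples, w, b):
    hits = sum((1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) == t
               for x, t in samples)
    return hits / len(samples)

# Task A: positive when the FIRST feature is high
task_a = [((1, 0), 1), ((1, 1), 1), ((0, 0), 0), ((0, 1), 0)]
# Task B: the opposite labelling, which overwrites what A taught
task_b = [((1, 0), 0), ((1, 1), 0), ((0, 0), 1), ((0, 1), 1)]

w, b = sgd(task_a, [0.0, 0.0], 0.0)
acc_before = accuracy(task_a, w, b)   # task A mastered

w, b = sgd(task_b, w, b)              # fine-tune on task B only
acc_after = accuracy(task_a, w, b)    # task A forgotten
print(acc_before, acc_after)          # 1.0 0.0
```

Real networks have millions of parameters and the effect is partial rather than total, but the mechanism is the one shown here: the new gradient updates overwrite the weights that encoded the old task.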
The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.
Explainable AI allows a human being (the user of the AI system) to understand the reasons why the system makes decisions and to approve them if they are correct, or to rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is the prerequisite for making AI applications that will not learn by trial and error or through the use of a “training set,” but can be initialized with a base of expert-derived knowledge and rules – when the cost of an error is too high or when the training sample is too small.
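A toy illustration of why transferred knowledge matters in practice: below, the same simple learner either starts from scratch or is initialized with expert-derived weights (here, hand-written weights encoding "fire only when both inputs are on"; all numbers are invented), and the head start shows up directly in the number of training passes needed:

```python
# Sketch of the transfer-learning idea: a model initialized with expert
# knowledge needs far fewer training passes on the target task than one
# trained from scratch. Entirely synthetic toy setup.

def epochs_to_converge(samples, w, b, lr=0.1, max_epochs=100):
    for epoch in range(1, max_epochs + 1):
        mistakes = 0
        for x, t in samples:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            if out != t:
                mistakes += 1
                w = [w[0] + lr*(t - out)*x[0], w[1] + lr*(t - out)*x[1]]
                b += lr*(t - out)
        if mistakes == 0:
            return epoch, w, b
    return max_epochs, w, b

and_task = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Learning from a blank start:
e_scratch, _, _ = epochs_to_converge(and_task, [0.0, 0.0], 0.0)

# Transfer: start from weights that encode the expert rule
# "fire only when both inputs are on":
e_transfer, _, _ = epochs_to_converge(and_task, [0.5, 0.5], -0.7)

print(e_scratch, e_transfer)   # 6 1 - transferred knowledge converges at once
```

The practical point made above follows: when the training sample is too small or errors too costly for trial-and-error learning, initializing the system with expert-derived knowledge can stand in for data.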
How to Get the Best of Both Worlds?
There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.
One of the most promising approaches is probabilistic programming, which is a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, and source and target data are represented not by values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
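The core idea, that a variable holds a distribution over values rather than a single value, and that observations update it by Bayes' rule, can be sketched in a few lines (a toy coin-bias example; the hypotheses and data are invented for illustration):

```python
# A minimal flavour of probabilistic programming: a variable is a
# distribution, and each observation updates it by Bayes' rule.
# Toy example: inferring the unknown bias of a coin.

def posterior(prior, observations):
    # prior: dict mapping a candidate bias -> its probability
    post = dict(prior)
    for flip in observations:             # flip is 1 (heads) or 0 (tails)
        for bias in post:
            likelihood = bias if flip == 1 else 1 - bias
            post[bias] *= likelihood      # Bayes: prior * likelihood
        total = sum(post.values())
        post = {b: p / total for b, p in post.items()}   # renormalize
    return post

# Uniform prior over three hypotheses about the coin's bias
prior = {0.3: 1/3, 0.5: 1/3, 0.8: 1/3}
data = [1, 1, 0, 1, 1, 1]                 # mostly heads

post = posterior(prior, data)
best = max(post, key=post.get)
print(best)                               # 0.8 - the biased hypothesis wins
```

Full probabilistic programming languages generalize this enumeration to arbitrary generative programs, with inference engines doing the updating automatically; the example only conveys the representational shift.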
Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.
One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.
From our partner RIAC
How Can We as Strategists Compete with Sentient Artificial Intelligence?
The universe is made up of humans, stars, galaxies, black holes and other objects, all linked and connected with each other. Everything in the universe has its own level of mechanism and complexity. Humans are very complex creatures, and man-made objects can be even more complex and difficult to understand. With the passage of time, human beings have evolved and become more technologically advanced. Human inventions have reached a level of sophistication that sets up a competition between machines and humans themselves. Humans are the most intelligent mortals on earth, yet they are now being challenged by artificial intelligence, which was invented as a helping hand to increase human efficiency. Here it is important to ask: was human intelligence not enough to survive in the fast-growing technological world? Or has man-made intelligence reached such a peak that humans now compete with machines and human intelligence is challenged by artificial intelligence? If there is a competition, then how could strategists compete with artificial intelligence? To answer these questions, we first need to know what artificial intelligence actually is.
The term artificial intelligence was proposed by John McCarthy in 1955; he described it in 1956 at the Dartmouth Conference, the first conference on artificial intelligence, roughly as follows: every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it; an attempt would be made to find out how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. There are seven main features of artificial intelligence, as follows:
“Simulating higher functions of brain
Programming a computer to use general language
Arrangement of hypothetical neurons in a manner so that they can form concept
Way to determine and measure problem complexity
Abstraction: it is defined as the quality of dealing with ideas, not with events
Creativity and randomness”
Another definition is given by Elaine Rich, who stated that artificial intelligence is about making computers do things that, at present, people do better; in this view, every such program is an artificial intelligence system. Jack Copeland stated that the important elements of artificial intelligence are: generalization learning, which enables the learner to perform in situations not previously encountered; reasoning, that is, drawing inferences appropriate to the situation; problem solving, meaning that, given data, the system can reach conclusions; and, finally, perception, which means analyzing a scanned environment and examining the features of, and relationships between, the objects in it, with self-driving cars as an example.
Artificial intelligence is now very common in developed nations, while developing nations are using it according to their resources. The question is: how is artificial intelligence being utilized in the fields mentioned above? The use of AI is best elaborated with the help of phenomena and examples from the related fields.
The world is becoming more advanced and technologies are improving with it. In this situation, states grow conscious about their security. States are accordingly incorporating AI approaches into their defense systems, and some are already using artificially integrated technologies. On 11 May 2017, Dan Coats, the Director of US National Intelligence, delivered testimony to the US Congress on his annual Worldwide Threat Assessment. In the publicly released document, he said that AI is advancing computational capabilities that benefit the economy, but that those advances also enable new military capabilities for America’s adversaries. In the meantime, the US Department of Defense (DOD) is working on such systems. Project Maven, for example, also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), is intended to accelerate the integration of big data, machine learning and AI into US military capabilities. While the initial focus of the AWCFT is on computer-vision algorithms for object detection and classification, it will bring together all existing algorithm-based technology initiatives associated with US defense intelligence. Command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) are reaching new heights of efficiency that enable data collection and processing at unprecedented scale and speed. When the pattern-recognition algorithms being developed in China, Russia, the UK, the US and elsewhere are paired with precision weapons systems, they will further increase the tactical advantage of unmanned aerial vehicles (UAVs) and other remotely operated platforms. China’s defense sector has made breakthroughs in UAV “swarming” technology, including a demonstration of 1,000 EHang UAVs flying in formation at the Guangzhou air show in February 2017.
Potential scenarios could include competing UAV swarms attempting to disrupt each other’s C4ISR networks while simultaneously engaging dynamic targets.
Humans, the most intelligent creatures on earth, have created artificial intelligence technology that in some tasks already works faster than we do. So there is a big question mark over whether humans can compete with artificial intelligence in the near future. Nowadays it seems that AI is replacing humans in every field of life, so what will the situation be in a decade or two? An alarming competition has started between humans and AI. Elon Musk of Tesla has called AI a “demon”, and the well-known physicist Stephen Hawking also stated that in the future artificial intelligence could prove to be a bad omen for humanity. The signs of this are clear, and we can already see humans being replaced; in some ways, we are losing the competition. But it is also clear that a creator can be a destroyer too. So, as strategists, we must have counter-strategies and backup plans to manage this competition. The edge humans have over AI is the ability to think, and it is we who build this capacity into AI-integrated technologies, so we must set its limits. Otherwise this hazard could become a grave threat in the future, and humanity could possibly become extinct.
What is more disruptive about AI: its dark potential or our (anti-intellectual) ignorance?
Throughout most of human evolution, both progress and its horizontal transmission were extremely slow, occasional and tedious processes. Well into the classical period of Alexander the Macedonian and his glorious Alexandrian library, the speed of our knowledge transfers – however moderate, analogue and conservative – still always surpassed the snaillike cycles of our breakthroughs.
When our sporadic breakthroughs finally became faster than the velocity of their infrequent transmission, that marked a point of departure. Our civilizations simply began to differentiate significantly from one another in their respective techno-agrarian, politico-military, ethno-religious, ideological and economic setups. On the eve of the grand discoveries, that very event transformed wars and famine from low-impact, local affairs into bigger, cross-continental ones.
Cycles of technological breakthroughs, patents and discoveries that outpaced their own transfer first occurred on the Old Continent. That occurrence, with all its reorganizational effects, radically reconfigured societies. It finally marked the birth of the mighty European empires, their (liberal) schools and, overall, the lasting triumph of western civilization.
For the past few centuries, we lived fear but dreamt hope – all for the sake of modern times. From WWI to www. Is this modernity of the internet age, with all its suddenly revealed breakthroughs and their instant transmission, now harboring us in a bay of fairness, harmony and overall reconciliation? Was our history ever on holiday – and will it ever be? Has our world ever been more than an idea? Shall we stop short at the Kantian word – a moral definition of an imagined future – or continue on to the Hobbesian realities and grasp for an objective, geopolitical definition of our common tomorrow?
The Agrarian age inevitably raised the question of economic redistribution. The Industrial age culminated in the question of political participation. The AI age (quantum physics, nanorobotics and bioinformatics) brings a new, yet underreported challenge: human powers, physical and mental, might – far and wide, and rather soon – become obsolete. If and when that happens, the question of human irrelevance is the next to ask.
Why is AI like no technology ever before? Why re-visiting and re-thinking spirituality matters …
If you believe that the above is yet another philosophical melodrama, an anemically played alarmism, mind this:
We will soon have to redefine what we consider as a life itself.
Less than a month ago (January 2020), successful trials were completed. The border between organic and inorganic, intrinsic and artificial, has been brought down forever. AI now has it all: quantum physics (along with quantum computing), nanorobotics, bioinformatics and organic tissue tailoring. The synthesis of all this is usually referred to as xenobots (a sort of living robot) – biodegradable symbiotic nanorobots that rely exclusively on evolutionary (self-navigable) algorithms.
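The 'evolutionary (self-navigable) algorithms' invoked here belong to a long-established family. As a hedged illustration only – this is a generic toy genetic algorithm, not the actual xenobot design pipeline, and the `evolve` function and its parameters are assumptions of this sketch – the core loop looks like:

```python
import random

def evolve(fitness, genome_length=10, population=30, generations=50,
           mutation_rate=0.1, seed=0):
    """Toy evolutionary algorithm: keep the fitter half of a population of
    bit-string 'designs' and refill it with mutated copies. The same loop,
    with a physics simulator as the fitness function, underlies evolved
    robot-body design."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_length)]
           for _ in range(population)]
    for _ in range(generations):
        # Rank designs by fitness; the best survive unchanged (elitism).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:population // 2]
        # Each survivor spawns one mutated child.
        children = [[1 - g if rng.random() < mutation_rate else g
                     for g in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Example: evolve a genome of all ones (a stand-in for a simulated body plan).
best = evolve(fitness=sum)
```

No designer specifies the winning genome in advance; it emerges from selection pressure alone – which is precisely what makes such systems 'self-navigable'.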
Although life is to be lived forward (with no looking backward), human retrospection is the biggest reservoir of insights – of what makes us human.
Hence, what does our history of technology in relation to human development tell us so far?
Elaborating on Fukuyama's well-known argument about 'defensive modernization', it is evident that throughout the entire human history the technological drive was aimed at satisfying the security (and control) objective. It was rarely, if ever, driven by a desire to gain knowledge outside of convention, to ease human existence, or to enhance human emancipation and the liberation of societies at large. Thus, unless operationalized by the system, both intellectualism (human autonomy, mastery and purpose) and technological breakthroughs were traditionally felt and perceived as a threat – as a problem, not a solution.
Ok. But what has brought us (under) the AI today?
It was our acceptance. Manufactured, of course.
All cyber-social networks and related search engines are far from what they are portrayed to be: a decentralized but unified intelligence, attracted by the gravity of quality rather than navigated by the force of a specific locality. (These networks were introduced not to promote and emancipate other cultures, but to maintain and further strengthen the supremacy of the dominant one.)
In no way do they correspond with the neuroplasticity of the physics of our consciousness. They only offer an answer to our anxieties – among which the fear of free time is the largest, since free time coupled with silence is our gate to creativity and self-reflection. In fact, the cyber-tools of these data-sponges primarily serve the purposes of predictability, efficiency, calculability and control, and only then do they serve everything else – such as being user-friendly and attractive as a mass service.
To observe the new corrosive dynamics of social phenomenology between manipulative fetishization (probability) and self-trivialization (possibility), the cyber-social platforms – these dustbins of human empathy in the muddy suburbs of consciousness – are particularly interesting.
This is how the human-presence-eliminating technologies were introduced to us, and accepted by us.
How did we reflect – in our past – on new social dynamics created by the deployment of new technologies?
The Aegean theater of Antique Greece was a place of astonishing revelations and intellectual excellence – a remarkable density and proximity not surpassed up to our age. All we know about science, philosophy, sports, arts, culture and entertainment, the stars and the earth was postulated, explored and examined then and there. It was, simply, a time and place of the triumph of human consciousness, pure reasoning and sparkling thought. And yet neither Euclid, Anaximander, Heraclitus, Hippocrates (both of Chios and of Cos), Socrates, Archimedes, Ptolemy, Democritus, Plato, Pythagoras, Diogenes, Aristotle, Empedocles, Conon, Eratosthenes nor any of dozens of other brilliant ancient Greek minds ever referred, by a single word or sentence, to something that was their everyday life, something they saw literally on every corner throughout their lives: the immoral, unjust, notoriously brutal and oppressive slavery system that powered the Antique state. (Slaves were not even regarded as humans, but rather as 'phonic tools' – tools able to speak.) This myopia, this absence of critical reference to the obvious and omnipresent, is a historic message – highly disturbing, self-telling and quite a warning.
Why is the AI like no technology ever before?
Ask Google – you can see that I am busy messaging right now!