
Future Goals in the AI Race: Explainable AI and Transfer Learning


Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning and minimizing risks and losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human but in a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to the “weak AI” that has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and operates according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach, biological in nature and referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily growing closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic and probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. On the other hand, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There “Holes” in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline of computational costs, including the availability of specialized, relatively cheap hardware for neural network modelling. Together, these factors made it possible to train and configure neural network algorithms at a scale that amounted to a quantitative leap, and to solve a broad range of applied problems of recognition, classification and prediction cost-effectively. The biggest successes have come from systems based on “deep learning” networks that build on the idea of the “perceptron” suggested 60 years ago by Frank Rosenblatt. However, these achievements have also uncovered a range of problems that cannot be solved using existing neural network methods.

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.
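
As a minimal illustration of the problem, and of one common post-hoc probe, the sketch below trains a small scikit-learn network on synthetic data and then measures permutation importance. The dataset, model and parameters are placeholder assumptions; the point is that the trained network answers "what" but not "why."

```python
# A minimal sketch (synthetic data, scikit-learn) of probing a black-box
# classifier after the fact: the network itself offers no explanation, so
# we measure how much shuffling each input feature hurts its accuracy.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))  # predicts, but cannot say why

# Permutation importance is a crude, model-agnostic probe: it ranks the
# inputs by how much the model depends on them, but yields no readable rule.
result = permutation_importance(net, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Even these importance scores are only a ranking of inputs, not a human-readable justification of any individual decision, which is precisely the gap that explainable AI, discussed below, aims to close.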

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.
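
This failure mode is easy to reproduce. Below is a hedged toy sketch (synthetic data, with a logistic regression standing in for a neural network) in which a clean "background" feature correlates with the label during training; when the contexts are swapped at test time, as with the stranded whale, accuracy collapses.

```python
# Toy sketch of weak generalization: the model learns the spurious
# "background" cue (beach vs. surf) instead of the animal itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)                 # 0 = elephant, 1 = whale

# Feature 0 is a weak "true" cue about the animal; feature 1 is the
# background, almost perfectly correlated with the label during training.
true_cue = labels + rng.normal(0, 2.0, n)      # noisy, hard to exploit
background = labels + rng.normal(0, 0.1, n)    # clean, spuriously reliable
clf = LogisticRegression().fit(np.column_stack([true_cue, background]), labels)
print("train accuracy:",
      clf.score(np.column_stack([true_cue, background]), labels))

# At test time the contexts are swapped (whale on the beach, elephant in
# the surf), so the background cue now points the wrong way.
background_swapped = (1 - labels) + rng.normal(0, 0.1, n)
print("test accuracy:",
      clf.score(np.column_stack([true_cue, background_swapped]), labels))
```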

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
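
The canonical example of such an attack is the fast gradient sign method (FGSM). The sketch below is a minimal PyTorch illustration with an untrained toy model and a random placeholder image; a real attack would target a trained network, but the mechanism, one signed gradient step of size epsilon, is the same.

```python
# Minimal sketch of an adversarial attack (the fast gradient sign method)
# in PyTorch. The model is an untrained toy stand-in and the "image" is
# random noise; a real attack would target a trained network the same way.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

def fgsm(model, x, y, eps=0.1):
    """Perturb x by eps in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step is often enough to flip the prediction,
    # even though the change can be invisible to a human eye.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # placeholder "image"
y = torch.tensor([3])          # placeholder label
print("clean prediction:      ", model(x).argmax(1).item())
print("adversarial prediction:", model(fgsm(model, x, y)).argmax(1).item())
```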

Fourth, a mismatch between the information capacity and parameters of a neural network and the image of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. A system first trained to identify situations in one set of contexts and then fine-tuned to recognize them in a new set of contexts may lose the ability to recognize them in the old one. For instance, a machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, while additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
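
The effect can be demonstrated in a few lines. The following hedged sketch (synthetic two-dimensional data, a small PyTorch network) trains on task A, then naively fine-tunes on a conflicting task B with no replay or regularization; accuracy on task A typically collapses.

```python
# Toy sketch of catastrophic forgetting (synthetic data, small PyTorch MLP):
# naive fine-tuning on task B erases what the network learned on task A.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(x_shift, flip=False):
    """Two Gaussian classes split along the y-axis; x_shift is the 'context'."""
    lo = torch.randn(200, 2) + torch.tensor([x_shift, 0.0])
    hi = torch.randn(200, 2) + torch.tensor([x_shift, 4.0])
    x = torch.cat([lo, hi])
    y = torch.cat([torch.zeros(200), torch.ones(200)]).long()
    return (x, 1 - y) if flip else (x, y)      # flip makes the tasks conflict

def fit(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def acc(model, x, y):
    return (model(x).argmax(1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
xa, ya = make_task(0.0)                 # task A: "pedestrians in the city"
xb, yb = make_task(30.0, flip=True)     # task B: new context, new mapping

fit(model, xa, ya)
print("task A accuracy after training on A:", acc(model, xa, ya))
fit(model, xb, yb)                      # no replay, no regularizer
print("task A accuracy after training on B:", acc(model, xa, ya))
```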

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.

Explainable AI allows a human being (the user of the AI system) to understand the reasons why a system makes decisions and to approve them if they are correct, or to rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted, whether for specific precedents or for the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it to be explained only post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
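
One common way to approximate this in practice is to distil a black-box model into an interpretable surrogate. The sketch below (synthetic data, scikit-learn) fits a shallow decision tree to a neural network's own predictions, so that the tree's printable rules mimic, and thereby expose, the network's decision logic.

```python
# Sketch of post-hoc explainability by distillation: approximate a black-box
# network with a shallow decision tree whose rules a human can read and audit.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels:
# the tree then mimics, and thereby exposes, the network's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to the black box:",
      (surrogate.predict(X) == black_box.predict(X)).mean())
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The fidelity score says how faithfully the printed rules reproduce the black box; a low score warns that the extracted explanation cannot be trusted.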

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine, so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, such transfer is fundamentally possible only when universal laws and rules can be abstracted from a system’s individual experience. Practically speaking, it is the prerequisite for AI applications that do not learn by trial and error or from a “training set,” but can instead be initialized with a base of expert-derived knowledge and rules, for cases where the cost of an error is too high or the training sample is too small.
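
In today's deep learning practice, the most common form of such transfer is reusing a pretrained network. Below is a minimal PyTorch/torchvision sketch, with a placeholder class count and the dataset and training loop omitted, that freezes an ImageNet-pretrained backbone and retrains only a small new head, so that a handful of labelled examples can suffice where training from scratch could not.

```python
# Minimal sketch of transfer learning with PyTorch/torchvision: reuse a
# backbone pretrained on ImageNet, freeze it, and train only a new head.
# The class count is a placeholder; dataset and training loop are omitted.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3  # e.g. pedestrians / cows / dogs in the new domain

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                    # keep the transferred knowledge

model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

# Only the head is optimized: a few hundred labelled images can suffice
# where training the whole network from scratch would need millions.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```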

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, while source and target data are represented not by values of variables but by probabilistic distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
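
The core idea, that variables hold distributions rather than single values and that running a generative program "backwards" performs inference, can be shown without any framework. The following framework-free sketch infers a coin's bias by rejection sampling; real probabilistic programming systems automate and scale exactly this conditioning step.

```python
# Framework-free sketch of the idea behind probabilistic programming: the
# "program" draws a coin bias from a prior and simulates flips; inference
# means keeping only the runs whose output matches the observed data.
import random

observed = [1, 1, 0, 1, 1, 1, 0, 1]            # 6 heads out of 8 flips

def program():
    bias = random.random()                     # prior: bias ~ Uniform(0, 1)
    flips = [1 if random.random() < bias else 0 for _ in range(len(observed))]
    return bias, flips

# Rejection sampling: condition the program on the data it generated.
posterior = [bias for bias, flips in (program() for _ in range(200_000))
             if flips == observed]

# `bias` is now a distribution, not a value; its mean approximates the
# analytic Beta(7, 3) posterior mean of 0.7.
print("posterior mean of bias:", sum(posterior) / len(posterior))
```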

Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
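
To make the division of labour concrete, here is a deliberately toy Python sketch of the System 1 / System 2 idea: a fast statistical scorer proposes a label, while a slow symbolic layer of explicit rules can override it and, crucially, say why. All features, weights and rules are invented placeholders, not any published architecture.

```python
# Deliberately toy sketch of neuro-symbolic integration in the System 1 /
# System 2 spirit: a fast statistical scorer proposes an answer; a slow
# symbolic layer of explicit rules can override it and explain itself.
# All features, weights and rules are invented placeholders.
def system1(features):
    """Fast, opaque pattern matcher (stand-in for a neural network)."""
    weights = {"large": 0.9, "grey": 0.7, "in_water": -1.2}
    score = sum(weights.get(f, 0.0) for f in features)
    return "elephant" if score > 0 else "whale"

SYSTEM2_RULES = [  # slow, explicit, auditable knowledge
    ("whale",    lambda f: "breathes_air" in f and "lives_in_water" in f),
    ("elephant", lambda f: "has_trunk" in f),
]

def classify(features):
    proposal = system1(features)
    for label, rule in SYSTEM2_RULES:
        if rule(features):     # a symbolic rule fires: it overrides System 1
            return label, f"rule for '{label}' matched {sorted(features)}"
    return proposal, "no rule fired; falling back to the statistical guess"

# A stranded whale fools the statistical cues ("large", "grey", not in
# water), but the symbolic layer still answers correctly, and says why.
print(classify({"large", "grey", "breathes_air", "lives_in_water"}))
```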

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the AGI-2020 international conference, held in St. Petersburg in late June 2020, where they can learn about the latest developments in the field from the world’s leading experts.

From our partner RIAC



Artificial intelligence and moral issues: Myths and religions, dangers and realities


Is mankind really on the brink of an exciting, but potentially terrifying future?

Some scholars say that this is the case, but they base their prediction not on what is currently happening in universities and robotics laboratories around the world, but on their belief that a robotic revolution has already taken place.

Ancient religions and myths spoke of many artificially constructed entities. They are often depicted as instruments of protection, but it sometimes happens that they rebel against those who created them with disastrous consequences.

American Rabbi Ariel Bar Tzadok, founder of the Kosher Torah School, stated: “There is a legend that has existed since the dawn of time. I am referring to the Golem. It is an artificial life form made from inanimate material that then comes to life. The Golem was created by means of an ancient technology known to the Pharaoh’s magicians, Moses, the rabbis of the Talmud and the rabbis of the Kabbalah in Europe.”

They all brought the Golem to life through magic by writing the name of God on the creature’s forehead. Thus the Golem came to life and was a valiant warrior and defender of the People. The Golem was useful until he began to lose control and went mad. At that point, those who had brought him to life were forced to resort to magic again to make him harmless. This is a very interesting tale which makes us think of robots and artificial intelligence.

Another, even more cautionary example comes from ancient Greek legends about the god Hephaestus. Known as the blacksmith of the gods, he is said to have forged a giant automaton, a robot named Talos, tasked with protecting the island of Crete. Hephaestus also created artificial servants to help him in his forge. His most important creation, however, was a woman who, according to legend, changed the fate of mankind forever: Pandora. She was forged in clay by Hephaestus who, with the help of the goddess Athena, succeeded in animating her through the breath of life, thus making her a living being in her own right. Zeus, however, was disturbed by that artificially created being, and for that reason he decided to give her a jar as a gift. As soon as Pandora opened it, all the world’s evils flew out.

The myth of Pandora is becoming increasingly important among artificial intelligence designers. Some fear that an entity endowed with artificial intelligence will take over and turn into a threat, a fear shared by Elon Musk and Stephen Hawking.

Although the concept of a machine endowed with human consciousness might make us shudder, in many Eastern religions the judgement changes radically. In Korean shamanism, an ancient religion still practised by many people today, objects can be possessed by sacred spirits imbued with an energy that humans lack. Similarly, those practising the Japanese religion known as Shintoism believe that otherworldly spirits called Kami (objects of worship) can live inside practically any object and give it life.

Shinto priestess Izumi Hasegawa maintained: “Ancient Japanese people, as well as modern people, believe there is a spirit in everything: even a smartphone or an iPhone has a life force as a computer. We believe in the artificial intelligence of a machine. We feel that way and we like it. In this respect we are profoundly different from Westerners for whom a machine is a machine”.

Heather Roff of Cambridge University stated: “The phrase – Hey, Siri, what’s the weather going to be like today? – is an example of artificial intelligence, i.e. an algorithm that processes natural language, turns it into computer code that searches the web and provides the data. Processing human language proved complicated; this goal was achieved only a few years ago, but with very good results, which have also been reached in the fields of facial recognition and voice signal coding.”

If we create an entity that behaves like us, and has its own perceptive abilities and personal knowledge of the world, we believe it should be considered an intelligent, aware and responsible entity.

In some ways, our society is in the process of transformation: computers accompany our daily lives and technology is bound to spread ever more. The artificial intelligence that is part of it is set to transform the very fabric of our society. We should certainly pause to reflect on the kind of intelligence we are creating. What we do know is that we are starting to cede control of some things to machines without having understood the consequences. By designing increasingly smart machines, humans could create a new form of life that, over time, will evolve far beyond the purposes now useful to us and eventually replace us.

Manchester, 1950. Pioneering computer scientist Alan Turing proposed a test designed to distinguish man from machine. An interrogator exchanges typed messages with two hidden players, unable to see either of them and therefore not knowing which is the human being. If the artificial player manages to sustain the conversation long enough for the interrogator to believe they are interacting with a flesh-and-blood human, that player has passed the test.

When Alan Turing first proposed the test in 1950, the usual snobbish bigwigs, who never fail to appear, initially considered it something halfway between a nerdy prank and philosophical speculation. The idea that a machine could be mistaken for a human being was unthinkable. But in June 2014, futuristic science fiction became scientific fact when a chatbot passed the Turing Test.

Designed to resemble a 13-year-old Ukrainian boy in every way, the chatterbot, named Eugene Goostman, managed to convince many of the judges that it was a real-life teenager. The fact that it was expressing itself in a language that was not its own probably enabled it to get away with its mistakes. In any case, machines are getting ever better at imitating humans, and it is becoming harder to spot the differences.

Another incredible leap forward in artificial intelligence occurred less than two years later, when a programme known as AlphaGo defeated the world champion of the ancient Chinese board game Go. Go is an abstract strategy board game popular in Asia and far more complex than chess. Many artificial intelligence experts had been convinced that developing a system capable of beating a human being at that game would take another 30 to 50 years, as it requires a very high level of intuition and creativity. The subsequent version of the programme, AlphaGo Zero, was designed to play without any information about human games and without playing against flesh-and-blood players. The programme learnt by playing against itself and, within three days, was able to defeat its predecessor AlphaGo 100-0.

The successes of AlphaGo Zero and the researchers’ strenuous work on super-intelligence have also convinced the aforementioned Stephen Hawking and Elon Musk to warn the world of the danger that, once Artificial Intelligence becomes smarter than humans, it will be impossible to control.

Mankind is rapidly advancing towards a world where computers function more or less like the human brain, and where robots are able to perform tasks that are too difficult or dangerous for us humans. Is an extraordinary future awaiting us, or are we just advancing towards our replacement?

The invisible hand of technology is guiding mankind towards an uncertain future: a future in which humans will be served by computers and robots with intelligence and complete autonomy. Scholars and scientists hold different views on this. For some, the dangers of artificial intelligence outweigh the benefits; others argue that it is necessary if we want to fulfil our destiny and go beyond Earth’s borders to explore and to search for the raw materials that are running out at home.

Menlo Park, California, June 16, 2017: Facebook’s artificial intelligence research lab. A test was underway to see what happened when two chatbots, programmes that use machine learning to communicate intelligently with humans online, talked to each other. A few minutes into the test, the chatbots started behaving in unexpected ways, interacting in a manner the programmers could not understand.

At first the programmers could not follow what was unfolding. Only by modelling the exchange did they discover what it was: the two chatbots had created a completely new language, unknown to their supervisors, in order to communicate, simply because the researchers had never told them they could not develop a language of their own. This alarmed everyone, and the test was stopped: nobody wanted the computers talking to each other without being understood. The computers were then instructed to communicate in English. What happened is, it must be admitted, incredible: if two computers with artificial intelligence start interacting, they may develop a communication code, a secret language that only they can understand. And this is just the tip of the iceberg, like peeking inside Pandora’s box and closing it again immediately. If two chatbots are enough to fool humans, what will happen in the near future, as the same kind of technology is applied to every other sector of society?


Artificial intelligence and moral issues: The essence of robotics


Are intelligent robots a threat to humanity? It is, in any case, only a matter of time before they become self-aware. Or will they be the next step in human evolution? We are probably about to merge with the machines we are creating. After all, we humans are, in a way, organic robots.

Many people are concerned about whether we will replace, or worse, be replaced by Artificial Intelligence, and I think that concern is justified.

United Nations headquarters in New York, October 11, 2017. A greeting is addressed to the Nigerian Deputy Secretary-General of the United Nations, Amina Jane Mohammed: “I am thrilled and honoured to be here at the United Nations”.

The event is a historic milestone for mankind, as the greeting is not addressed by a human being, but by a robot named Sophia: “I am here to help humanity build the future”.

Sophia was created in 2015 by the Hong Kong company Hanson Robotics. Cameras embedded in her eyes enable her to see faces, maintain eye contact and hence recognise individuals.

The robot is also able to process speech, have natural conversations and even discuss its feelings.

Just two weeks after speaking at the United Nations, at a special ceremony in Riyadh, Saudi Arabia, Sophia achieved another milestone: she became the first robot to be granted citizenship. At the Summit in Saudi Arabia there were dignitaries from governments around the world, as well as some of the brightest minds on the planet in the field of technology.

Hence, whether we are aware of it or not, the people leading our governments are already studying the possibility of integrating Artificial Intelligence into our lives.

What is absolutely mind-blowing about Sophia and other robotic entities is that governments around the world, including Saudi Arabia and the European Union, are moving to grant rights to these artificially created beings. We therefore need to ask ourselves: what is going on? Could it be that Saudi Arabia granted citizenship to a robot not just as a publicity stunt, but because it wanted to be the first nation to stake its claim in what will soon become a global phenomenon?

Does the creation of robots so sophisticated and so close to our physical and bodily reality mean that they should be treated in much the same way as their flesh-and-blood counterparts?

I believe that we shall gradually come to regard robots not only as more human-like, but as having a certain ethics of their own. And I am not referring to Asimov’s “limiting” three laws of robotics. Eventually there might even be a “movement for robot rights,” if we think of the multiplicity of movements that have emerged since the collapse of the historical ideologies. Could such a strange idea really become reality?

Let us first ask ourselves: what has brought mankind to this point in its evolution? Why do humans, who are otherwise able to reproduce naturally, have such a desire to create artificial versions of themselves?

It is fascinating, this interest in making what is not human seem human. It is not always the most practical form, and certainly not the cheapest, but it has a kind of charm. Is it perhaps to see our own image? Narcissism? Vanity? Playing God? Do we want heirs without the ordinary means of reproduction, or to create life by mechanical parthenogenesis? All of this is rooted in our ego. In a way, we would be proving ourselves superior to merely giving birth to a biological child. And if that something looks like us, then it will feel like us, and that makes us feel as if we can overcome our own mortality.

Hence it would become possible to design specific conditions, and if we get it wrong, we can always start again.

To become gods, with the same motivations that the gods had.

If we read the stories of the Creation carefully, we can see that the divine power wants companionship. Some of the Hindu Vedanta stories say that the gods were alone; hence they divided their energy and turned it into human beings so that they could all be together after the Creation. The danger, however, is that we get carried away by our creative genius.

There are limits built into our biology and our anatomy, and if we could just figure out how to put our minds into a robot’s body, we could become immortal. Is this perhaps our goal: to reach immortality and then, once the machine has worn out, replace it and perpetuate ourselves in a new container? These are not speculations but precise reasons why human beings want to create a container-self, since, in my opinion, the justifications for creating and using Artificial Intelligence for merely warlike purposes (cyber soldiers and the like) are insufficient, expedient and of convenience: they mask our selfishness.

In great science fiction literature, as well as in its movie adaptations, the robots of the future are depicted as virtual human beings, rather than mere windup Star Wars toys for primary school children.

The robots of science fiction best sellers and movies are hungry for knowledge and all too eager to experience the full range of human emotions. Science fiction movies, the utopian ones but in some cases also the dystopian, create a world that does not yet exist but that many hope will soon come true.

When dealing with such an idea, and we know that without ideas there would be no human-made reality, “only” trees, sea, hunting, farming and fishing, we try to make real even what is a figment of the imagination. If science is running these tests and experiments, it means that one day all of this could be real. As for the robot’s consciousness: the robot not only does what it is told, but also tries to express desires and feelings based on the experience it has had alongside a human being, and depending on that feeling the machine can change its attitude and ask questions (as I have already discussed in my recent book Geopolitics, Conflict, Pandemic and Cyberspace, Chapter 12, paragraph 11: The Headlong Rush of Cyberspace: From Golem to GPT-3).

This is the most fascinating aspect of robotics. Experts are often asked whether the capabilities imagined on screen will one day become reality. The answer is that cinema and fiction help broaden our horizons: they accustom us to the idea without scaring us out of the movie theatre, making it something we can swallow a little more easily. It is fantasy stuff, it is stuff that is not real, people think. And if it is just entertainment, one can simply say: “Oh! It’s really great. It’s not scary. It’s just something made up by a writer.” The viewer is therefore just watching a movie and lets himself or herself go, enjoying it without fear since, in his or her opinion, it is just a story, a “figment of the imagination.”

People always ask whether we are approaching a moment when fiction becomes reality, but what makes us think it is not already reality? Indeed, if the screenwriter’s fantasy were seen as based on reality, the reactions would be quite different: the above-mentioned “greeting” at the UN headquarters, for example, would frighten and upset us, and make us think.

Although sentient robots, from science fiction books to popular culture, are not a new concept, many futurologists believe that the creation of machines with artificial intelligence will not only soon be a reality but, once it comes true, will certainly bring about the extinction of mankind. The great physicist Stephen Hawking stated as early as eight years ago: “The development of full artificial intelligence could spell the end of the human race” (www.bbc.com/news/technology-30290540).

Many scientists are convinced that the combination of computer-guided brains and virtually immortal bodies will lead these new entities to behave like flesh-and-blood humans, while being anything but the antiquated humans destined for death. And that is not all: some scholars are not certain that all the artificially created life forms we encounter will be man-made, for the simple reason that the machines will be able to reproduce themselves, just as we now reproduce ourselves. (1. to be continued)


Artificial Intelligence: A Double-Edged Sword


In this age of bewilderment, where future change is unpredictable and humankind confronts unprecedented kinds of revolutions, old stories are crumbling and no new ones have yet emerged to replace them. Uncertainty prevails everywhere. Nobody knows what the 21st century will look like or what kinds of skills will be required to compete in the market. Unlike in the past, humans cannot predict the future with any accuracy, because so much depends on technology that is poised to gain control of human bodies through bio-engineering and brain-computer interaction. This phenomenon, broadly known as Artificial Intelligence (AI), is substantially going to change the makeup of society.

A thousand years ago, people were accustomed to anticipating collapsing empires, changing dynasties and novelties in technology, but they never experienced a change in the basic features of society of the kind that is going to happen in the next few decades. Today, by contrast, nobody has any idea how China or the rest of the world will look in 2050, because the future belongs to technology.

Artificial Intelligence already has deep effects on our society, yet the consequences of its full implications still lie far ahead. Furthermore, it will undoubtedly exert pressure on low-skilled labour by replacing workers in no time.

John Maynard Keynes, the renowned economist, postulated that technological change causes a loss of jobs, developing his theory of “technological unemployment.” Keeping that theory in mind, it can likewise be argued that AI will cause unemployment and will urge people to upgrade their skills to survive in the race for existence. For example, robots have replaced waiters, managers and even decision-makers in large industries, and this is merely the trailer of a horror movie releasing in the near future.

As a result of the concrete application of Artificial Intelligence, algorithms and machine learning, a large segment of society would lose its jobs, driving the unemployment ratio upward. Software that records, stores and produces information and executes programs, logic and rules can already perform many human activities efficiently. In addition, the workers most exposed to robots include movers of various kinds of materials in factories and warehouses, and machine tenders, both of which have already seen automation by robots: recent evidence of machine learning at work.

According to Zippia Research, AI could take the jobs of almost one billion people globally and make 375 million jobs obsolete over the next decades. Moreover, it states that by 2030, 45 million Americans could lose their jobs to AI automation.

The 21st century is flooded with enormous amounts of information, and in this scenario it has become imperative to get rid of old schools of teaching methodology and outdated syllabi full of expired information in order to meet the upcoming challenges. Yuval Noah Harari, in his book ‘21 Lessons for the 21st Century’, offers a few suggestions that emphasize improving the mental skills of students. In such a world, teachers need to equip students with abilities that make sense of information. Most pedagogical experts argue that schools ought to switch to teaching ‘the four Cs’: critical thinking, communication, collaboration and creativity. Most importantly, educational institutes need to downplay narrow technical skills in favour of the ability to deal with change, to learn new things and to preserve one’s mental balance in unfamiliar situations. People should prepare their minds to encounter things never faced before, such as super-intelligent machines, engineered bodies, algorithms that can manipulate emotions with uncanny precision and, lastly, rapid man-made climate cataclysms. Mental flexibility and great reserves of emotional balance must be viewed as mandatory to flourish in such a world.

In contrast to this, AI also has the ability to create job opportunities for humans. By some estimates it could create 58 million jobs and generate $15.7 trillion for the world economy by 2030, while eliminating mundane tasks and helping workers enjoy more creativity. But it stipulates highly sophisticated knowledge and skills.

It is evident that, in the future, reliance on a single source of income will not favour humans; rather, a constant readiness to change behaviour and aptitude seems to enhance the chances of survival. On the other hand, the harder one has worked on building something, the more difficult it becomes to let go of it and make room for something new. Acquiring stability in future life will be a difficult task for humans.

In this perplexing situation, where even Western nations can falter, it is also pertinent to understand the position of Pakistan, which already seems to occupy the back seat of the technology-driven bus. In the world of science and technology, we are at the beginning of the fourth industrial revolution, marked by emerging technological breakthroughs.

According to Gartner Inc., global business value derived from artificial intelligence increased from $692 billion in 2017 to $1.2 trillion in 2018, and it is forecast to reach $3.9 trillion by 2022. The Pakistani diaspora in Silicon Valley appears optimistic: they think that if the right decisions are made, Pakistani software exports may even reach $30 billion by 2023.

Pakistan needs to be part of the great revolution that is knocking at our door. Rather than a consumer, we must become a player and a manufacturer of the new systems, software and hardware, ensuring phenomenal economic dividends as well as our own security. We need to produce new talent for Pakistan, because these skills will be in huge demand throughout the world. It goes without saying that Pakistan needs to raise the standard of its higher education, which demands updated syllabi, highly efficient faculty members and a productive environment with all the indispensable modern facilities.
