Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”
Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.
In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to “weak AI,” which has become a mundane topic in recent years.
As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.
Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.
Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and that intelligence operates according to logical rules and sign and symbol systems, interpreted in terms of conscious human cognitive processes. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. Over their 60 years of evolution, the two approaches have steadily grown closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic and probabilistic programming, reproducing network architectures akin to the neural networks that emerged within the neuromorphic approach. On the other hand, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.
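The contrast between crisp Boolean rules and their “softened” descendants can be shown in a few lines of code. The sketch below is purely illustrative – the membership functions and thresholds are invented, and no specific research system is being reproduced – but it shows how a fuzzy rule returns a graded degree of truth where a Boolean rule returns only yes or no.

```python
# Toy comparison of Boolean and fuzzy evaluation of the same rule,
# "hot AND humid -> uncomfortable".  All thresholds and membership
# functions are illustrative inventions.

def boolean_rule(temp_c, humidity):
    hot = temp_c > 30          # crisp threshold
    humid = humidity > 0.7
    return hot and humid       # True/False only

def fuzzy_hot(temp_c):
    # Degree of "hotness" rises linearly between 20 and 35 degrees C.
    return min(1.0, max(0.0, (temp_c - 20) / 15))

def fuzzy_humid(humidity):
    # Degree of "humidity" rises linearly between 0.4 and 0.9.
    return min(1.0, max(0.0, (humidity - 0.4) / 0.5))

def fuzzy_rule(temp_c, humidity):
    # Fuzzy AND is conventionally the minimum of the memberships.
    return min(fuzzy_hot(temp_c), fuzzy_humid(humidity))

# 29 degrees at 65% humidity: Boolean logic says "not uncomfortable",
# while fuzzy logic reports a graded degree of discomfort.
print(boolean_rule(29, 0.65))          # False
print(round(fuzzy_rule(29, 0.65), 2))  # 0.5
```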
Are There “Holes” in Neural Networks?
In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of labelled datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the availability of specialized, relatively cheap hardware for neural network modelling. It was the combination of these factors that made it possible to train and configure neural network algorithms for a quantitative leap, and to provide cost-effective solutions to a broad range of applied problems in recognition, classification and prediction. The biggest successes here have come from systems based on “deep learning” networks that build on the idea of the “perceptron” proposed 60 years ago by Frank Rosenblatt. However, these achievements have also uncovered a range of problems that cannot be solved using existing neural network methods.
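Rosenblatt’s perceptron is simple enough to fit in a dozen lines, which makes the scale of what grew out of it all the more striking. The sketch below trains a single perceptron on the logical AND function; the learning rate and epoch count are arbitrary illustrative choices, not values from any particular system.

```python
# A minimal Rosenblatt perceptron learning the AND function.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold activation: fire if the weighted sum exceeds zero.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Rosenblatt's rule: nudge the weights toward the correct answer.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; a deep network is, loosely, many such units stacked and trained jointly.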
First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.
Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.
Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
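The one-pixel effect described above can be miniaturized into a toy example. The sketch below is not a real attack on a real network: it uses an invented nearest-centroid “classifier” with made-up coordinates, merely to show how a decision near a boundary can flip when a single input value is nudged.

```python
# Toy illustration of adversarial fragility: a nearest-centroid
# "classifier" whose decision flips when one input value is nudged.
# The class centroids and points are invented for illustration.
def classify(point, centroids):
    # Return the label of the nearest centroid (squared distance).
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

centroids = {"cat": (0.0, 0.0), "dog": (1.0, 0.0)}

x = (0.45, 0.0)       # clearly on the "cat" side of the boundary
x_adv = (0.55, 0.0)   # a single coordinate nudged by 0.1

print(classify(x, centroids))      # cat
print(classify(x_adv, centroids))  # dog
```

Real adversarial attacks (e.g. gradient-based perturbations) exploit the same geometry in spaces of millions of dimensions, where almost every input sits close to some decision boundary.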
Fourth, a mismatch between a neural network’s information capacity and parameters on the one hand, and the picture of the world it is shown during training and operation on the other, can lead to the practical problem of catastrophic forgetting. This is seen when a system that had first been trained to identify situations in one set of contexts, and is then fine-tuned to recognize them in a new set of contexts, loses the ability to recognize them in the old one. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
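Catastrophic forgetting can be demonstrated on a deliberately tiny “model” consisting of a single parameter trained by gradient descent. The tasks, targets and step sizes below are invented for illustration; real cases involve millions of parameters, but the mechanism – new training overwriting old knowledge – is the same.

```python
# A deliberately tiny demonstration of catastrophic forgetting.
# The "model" is one parameter w trained by gradient descent to hit
# a target value; tasks, targets and step sizes are invented.
def train(w, target, steps=100, lr=0.1):
    for _ in range(steps):
        # Squared-error loss (w - target)^2, gradient 2 * (w - target).
        w -= lr * 2 * (w - target)
    return w

w = train(0.0, target=1.0)        # learn task A
error_a_before = abs(w - 1.0)     # near zero: task A mastered

w = train(w, target=-1.0)         # fine-tune on task B only
error_a_after = abs(w - 1.0)      # task A has been forgotten
error_b = abs(w - (-1.0))

print(round(error_a_before, 4), round(error_a_after, 4), round(error_b, 4))
# 0.0 2.0 0.0
```

Standard mitigations interleave old and new samples during fine-tuning (rehearsal) or penalize drift in the weights that matter most for the old task.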
The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.
Explainable AI allows human beings (the users of the AI system) to understand the reasons why a system makes decisions and approve them if they are correct, or rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is the prerequisite for making AI applications that will not learn by trial and error or through the use of a “training set,” but can be initialized with a base of expert-derived knowledge and rules – when the cost of an error is too high or when the training sample is too small.
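The practical benefit of transferring knowledge can be sketched with the same kind of one-parameter toy model: initializing from knowledge learned on a related task reaches the goal in far fewer training steps than starting from scratch. All numbers here are invented for illustration.

```python
# Sketch of the transfer-learning idea on a one-parameter "model".
# Counting gradient steps to convergence shows the saving from a
# knowledge-based initialization; all values are illustrative.
def steps_to_converge(w, target, lr=0.1, tol=1e-3):
    steps = 0
    while abs(w - target) > tol:
        w -= lr * 2 * (w - target)   # gradient step on (w - target)^2
        steps += 1
    return steps

cold_start = steps_to_converge(0.0, target=2.0)    # no prior knowledge
transferred = steps_to_converge(1.99, target=2.0)  # init from a related task

print(cold_start, transferred)  # the transferred start needs far fewer steps
```

In practice the transferred “initialization” is a trained network’s early layers, or an expert-derived rule base – exactly the option the text describes for cases where training samples are scarce or errors are costly.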
How to Get the Best of Both Worlds?
There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.
One of the most promising approaches is probabilistic programming, which is a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, while source and target data are represented not by values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
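The core idea of probabilistic programming – variables are distributions, and inference runs the program “backwards” from observed data – can be sketched without any particular framework. The toy model below infers a coin’s bias from an observed count of heads by rejection sampling; the candidate biases and the data are invented for illustration.

```python
import random

# Minimal sketch of the probabilistic-programming idea: sample from
# the generative model, keep only the runs that match the observed
# data, and read the posterior off the survivors.
random.seed(0)

def flip_ten(bias):
    # Generative model: ten coin flips with the given bias.
    return sum(random.random() < bias for _ in range(10))

observed_heads = 8
candidate_biases = [0.2, 0.5, 0.8]

counts = {b: 0 for b in candidate_biases}
for _ in range(30000):
    b = random.choice(candidate_biases)   # sample from a uniform prior
    if flip_ten(b) == observed_heads:     # reject runs that miss the data
        counts[b] += 1

posterior_mode = max(counts, key=counts.get)
print(posterior_mode)  # the most probable bias given 8 heads out of 10
```

Real probabilistic programming languages replace this brute-force rejection loop with efficient inference engines (MCMC, variational methods), but the contract is the same: write the model forwards, ask questions backwards.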
Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.
One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.
From our partner RIAC
At Last A Malaria Vaccine and How It All Began
This week marked a signal achievement. A group from Oxford University announced the first acceptable vaccine ever against malaria. One might be forgiven for wondering why it has taken so long when the covid-19 vaccines have taken just over a year … even whether it is a kind of economic apartheid given that malaria victims reside in the poorest countries of the world.
It turns out that the difficulties of making a malaria vaccine have been due to the complexity of the pathogen itself. The malarial parasite has thousands of genes; by way of comparison, the coronavirus has about a dozen. This complexity means that a very strong immune response is needed to fight malaria off.
A trial of the vaccine in Burkina Faso has yielded an efficacy of 77 percent for subjects given a high dose and 71 percent for the low-dose recipients. The World Health Organization (WHO) had specified a goal of 75 percent for effective deployment in the population. A previous vaccine demonstrated only 55 percent effectiveness. The seriousness of the disease can be ascertained from the statistics. In 2019, 229 million new malaria infections were recorded and 409,000 people died. Moreover, many who recover can be severely debilitated by recurring bouts of the disease.
Vaccination has an interesting history. The story begins with Edward Jenner. A country doctor with a keen and questioning mind, he had observed smallpox as a deadly and ravaging disease. He also noticed that milkmaids never seemed to get it. However, they had all had cowpox, a mild variant which at some time or another they would have caught from the cows they milked.
It was 1796, and Jenner, desperate for a smallpox cure, followed up his theory, of which he was now quite certain, with an experiment. On May 14, 1796, Jenner inoculated James Phipps, the eight-year-old son of Jenner’s gardener. He used pus scraped from cowpox blisters on the hands of Sarah Nelmes, a milkmaid who had caught cowpox from a cow named Blossom. Blossom’s hide now hangs in the library of St. George’s Hospital, Jenner’s alma mater.
Phipps was inoculated on both arms with the cowpox material. The result was a mild fever but nothing serious. Next, Jenner inoculated Phipps with variolous material – matter taken from actual smallpox lesions, often dried from powdered scabs. No disease followed, even on repetition. He followed this experiment with 23 additional subjects (making a round two dozen) with the same result. They were all immune to smallpox. Then he wrote about it.
Not new to science, Edward Jenner had earlier published a careful study of the cuckoo and its habit of laying its eggs in others’ nests. He observed how the newly hatched cuckoo pushed hatchlings and other eggs out of the nest. The study was published resulting in his election as a Fellow of the Royal Society. He was therefore well-suited to spread the word about immunization against smallpox through vaccination with cowpox.
Truth be told, inoculation was not new. People who had traveled to Constantinople reported on its use by Ottoman physicians. And around Jenner’s time, there was a certain Johnny Notions, a self-taught healer, who used it in the Shetland Isles then being devastated by a smallpox epidemic. Others had even used cowpox earlier. But Jenner was able to rationally formalize and explain the procedure and to continue his efforts even though The Royal Society did not accept his initial paper. Persistence pays and finally even Napoleon, with whom Britain was at war, awarded him a medal and had his own troops vaccinated.
The Dark Ghosts of Technology
Over the last many decades, even as we missed the boat on understanding equality, diversity and tolerance, how obediently and intentionally we worshipped technology, no matter how dark or destructive a shape it morphed into. Enslaved to “dark technology,” our faith remained untarnished, fortified by the belief that it would lead us to become a smarter and more successful nation.
How wrong can we get, how long in the spell, will we ever find ourselves again?
The dumb and dumber state of affairs: extreme and out-of-control technology has taken human performance in “real value creation” hostage; crypto-corruption has overtaken economies; shiny chandeliers now cast only giant shadows; tribalism nurtures populism; and socio-economic gibberish in social media narratives now passes for new intellectualism.
Only the mind is where critical thinking resides, not in some app.
The most obvious missing link is the abandonment of our own deeper thinking. We ignore critical thinking and comfortably accept our own programming, labelled “artificial intelligence” – forgetting that in AI there is nothing artificial, just our own “ignorance” repackaged and branded. AI is not some runaway train; there is always a human driver in the engine room – go check. Mechanized programming, sensationalized by Hollywood as “celestially gifted artificial intelligence,” is now corrupting the global populace into assuming we are somehow in the safe hands of some bionic era of robotized smartness – all designed and suited to sell undefined, glittering crypto-economies under complex jargon and illusions of great progress. The shiny towers of glittering cities are already drowning in their own tent cities.
A century ago, knowing how to use a pencil sharpener, a stapler or a filing cabinet got us a job; today, command of 100-plus miscellaneous business or technology tools counts for little as a value-added gain. Nevertheless, the Covidians – the survivors of Covid-19’s cruelties – are now lining up at the gates like regimented disciples. There never was such a universal gateway to a common frontier, or such a massive assembly of the largest mindshare in human history.
Some of the harsh lessons acquired while gasping through the pandemic were about separating techno-logy from brain-ology. Humankind needs humankind solutions, where progress is measured by the common good. Humans will never be bulldozers, yet they will move mountains. Without the mind, we become just broken bodies, in desperate search of viagra sunrises, cannabis-high afternoons and opioid sunsets, dreaming of helicopter money.
What is needed more is the mental infrastructuring to cope with the platform economies of the global age, not necessarily the cemented infrastructuring to manage railway crossings. The new world already left the station a while ago. Chase the brain, not the train. How will all this new thinking affect the global populace and the 100 new national elections scheduled over the next 500 days? The world of Covidians is in one boat; the commonality of their problems brings them closer on key issues.
Newspapers across the world are dying; finally, world maps are becoming mandatory reading of the day.
Smart leadership must develop smart economies that meet the real needs of the human mind, and not just create jobs later rejected as obsolete in the face of robotization. Across the world, damaged economies are visible. The lack of pragmatic support for small and medium businesses, micro-mega exports, mini-micro manufacturing, and the upskilling and reskilling of the national citizenry are all clear measurements pointing to national failures. Unlimited rainfall of money will not save us; respectable national occupationalism will. Study “population-rich nations” and the new entrapments of “knowledge-rich nations” on Google, and join Expothon Worldwide’s global debate series on such topics.
Emergency meetings are required: before relief funding expires, get ready with the fastest methodologies to create national occupationalism at any cost, or prepare for fast waves of populism amid almost-broken systems. Bold nations need smart play: national debates and discussions of common-sense ideas to create local grassroots prosperity, and national mobilization of the citizenry’s hidden talents to stand up to the global standard of competitive productivity in national goods and services.
The rest is easy
China and AI Needs in the Security Field
On the afternoon of December 11, 2020, the Political Bureau of the Central Committee of the Communist Party of China (CPC) held the 26th Collective Study Session devoted to national security. On that occasion, the General Secretary of the CPC Central Committee, Xi Jinping, stressed that the national security work was very important in the Party’s management of State affairs, as well as in ensuring that the country was prosperous and people lived in peace.
In view of strengthening national security, China needs to adhere to the general concept of national security; to seize and make good use of an important and propitious period at the strategic level for the country’s development; and to integrate national security into all aspects of the CPC and State’s activity and consider it in planning economic and social development. In other words, it needs to build a security model that promotes international security and world peace and offers strong guarantees for the construction of a modern socialist country.
In this regard, a new cycle of AI-driven technological revolution and industrial transformation is on the rise in the Middle Empire. Driven by new theories and technologies such as the Internet, mobile phone services, big data, supercomputing, sensor networks and brain science, AI offers new capabilities and functionalities such as cross-sectoral integration, human-machine collaboration, open intelligence and autonomous control. It has a major and far-reaching impact on economic development, social progress, global governance and other areas.
In recent years, China has deepened its understanding of AI’s significance and development prospects in many important fields. Accelerating the development of a new AI generation is an important strategic starting point for rising up to the challenge of global technological competition.
What is the current state of AI development in China? What are the current development trends? How will the safe, orderly and healthy development of the industry be oriented and led in the future?
The current gap between AI development and the international advanced level is not very wide, but the quality of enterprises must be “matched” with their quantity. For this reason, efforts are being made to expand application scenarios, by enhancing data and algorithm security.
The concept of third-generation AI is already advancing and progressing, and there are hopes of solving the security problem through technical means rather than policies and regulations alone – i.e. rather than mere talk.
AI is a driving force for the new stages of technological revolution and industrial transformation. Accelerating the development of a new AI generation is a strategic issue for China to seize new opportunities in the organisation of industrial transformation.
It is commonly argued that AI has gone through two generations so far. AI1 is based on knowledge, also known as “symbolism”, while AI2 is based on data, big data, and their “deep learning”.
AI began to be developed in the 1950s with the famous test proposed by Alan Turing (1912-54), and in 1978 the first studies on AI started in China. Under AI1, however, progress was relatively modest. The real progress has mainly been made over the last 20 years – hence AI2.
AI is best known through the traditional information industry, typically Internet companies. These have acquired and accumulated a large number of users in the course of their development, and have then established corresponding patterns or profiles based on these acquisitions, i.e. the so-called “knowledge graph of user preferences.” Taking the delivery of some products as an example: tens or even hundreds of millions of data points – users’ and dealers’ positions, as well as information about the location of potential buyers – are incorporated into a database and then matched and optimised through AI algorithms. All this obviously enhances the efficiency of trade and the speed of delivery.
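The matching-and-optimisation step described above can be sketched as a greedy nearest-courier assignment. A real platform handles millions of records with far richer algorithms; all names and coordinates below are invented for illustration.

```python
# Toy sketch of the order-to-courier matching described above: each
# order is assigned to the nearest still-free courier.  All names
# and coordinates are invented.
def match_orders(orders, couriers):
    assignments = {}
    free = dict(couriers)                 # courier -> (x, y) position
    for order, pos in orders.items():
        if not free:
            break
        # Pick the closest remaining courier (squared distance).
        best = min(free, key=lambda c: (free[c][0] - pos[0]) ** 2 +
                                       (free[c][1] - pos[1]) ** 2)
        assignments[order] = best
        del free[best]
    return assignments

orders = {"order-1": (0, 0), "order-2": (5, 5)}
couriers = {"courier-A": (1, 0), "courier-B": (4, 4)}
print(match_orders(orders, couriers))
# {'order-1': 'courier-A', 'order-2': 'courier-B'}
```

A greedy pass is not optimal in general; production systems would use global assignment methods (e.g. the Hungarian algorithm) over many more constraints, such as traffic, load and delivery windows.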
By upgrading traditional industries in this way, great benefits have been achieved, and China is leading the way and is at the forefront in this respect: facial recognition, smart speakers, intelligent customer service, etc. In recent years, not only has an increasing number of companies started to apply AI, but AI itself has also become one of the professional directions that candidates in university entrance exams now compete to enter.
According to statistics, there are 40 AI companies in the world with a turnover of over one billion dollars, 20 of them in the United States and as many as 15 in China. In quantitative terms, China firmly ranks second. It should be noted, however, that although these companies have high ratings, their profitability is still limited, and most of them may even be loss-making.
The core AI sector should be independent of the information industry, but should increasingly open up to transport, medicine, urban fabric and industries led independently by AI technology. These sectors are already being developed in China.
China accounts for over a third of the world’s AI start-ups. And although the quantity is high, the quality still needs to be improved. First of all, the application scenarios are limited. Besides facial recognition, security, etc., other fields are not easy to use and are exposed to risks such as 1) data insecurity and 2) algorithm insecurity. These two aspects are currently the main factors limiting the development of the AI industry, which is in danger of being prey to hackers of known origin.
With regard to data insecurity, we know that the effect of AI applications depends to a large extent on data quality, which entails security problems such as the loss of privacy (i.e. State security). If the problem of privacy protection is not solved, the AI industry cannot develop in a healthy way, as it would be working for ‘unknown’ third parties.
When we log into a webpage and are told that the surfers’ privacy is the most important thing for them, this is a lie, as even teenage hackers know programs to violate it; at least China acknowledges the laughableness of such politically correct statements.
The second important issue is algorithm insecurity. An insecure algorithm is a model that works under specific conditions and fails when the conditions change. This is also called a lack of robustness, i.e. the algorithm’s vulnerability to changes in the test environment.
Taking autonomous driving as an example, it is impossible to consider all scenarios during AI training and to deal with new emergencies when unexpected events occur. At the same time, this vulnerability also makes AI systems permeable to attacks, deception and frauds.
The problem of security in AI does not lie in politicians’ empty speeches and words, but needs to be solved from a technical viewpoint. This distinction is at the basis of AI3.
AI3 has a development path that combines the first-generation knowledge-based AI and the second-generation data-driven AI. It uses four elements – knowledge, data, algorithms and computing power – to establish a new theory and interpretable, robust methods for a safe, credible and reliable technology.
At the moment, the AI2 characterised by deep learning is still in a phase of growth and hence the question arises whether the industry can accept the concept of AI3 development.
As seen above, AI has been developing for over 70 years, and yet it still seems to be just a “prologue.”
Currently, most people are not able to accept the concept of AI3 because everybody was hoping for further advances in AI2. Everybody felt that AI could continue to develop by relying on learning rather than on processing. The first steps towards AI3 in China were taken in early 2015 and in 2018.
AI3 has to solve security problems from a technical viewpoint. Specifically, the approach consists in combining knowledge and data. Some related research has been carried out in China over the past four or five years, and the results have also been applied at the industrial level. The RealSecure data security platform and the RealSafe algorithm security platform are direct evidence of these successes.
What needs to be emphasised is that these activities can only solve particular security problems in specific circumstances. In other words, the problem of AI security has not yet found a fundamental solution, and it is likely to become a long-lasting topic without a definitive solution since – just to use a metaphor – once the lock is found, there is always an expert burglar. In the future, the field of AI security will be in a state of ongoing confrontation between external offence and internal defence – hence algorithms must be updated constantly and continuously.
The progression of AI3 will be a natural long-term process. Fortunately, however, there is an important AI characteristic – i.e. that every result put on the table always has great application value. This is also one of the important reasons why all countries attach great importance to AI development, as their national interest and real independence are at stake.
With changes taking place around the world and a global economy in deep recession due to Covid-19, the upcoming 14th Five-Year Plan (2021-25) of the People’s Republic of China will be the roadmap for achieving the country’s development goals in the midst of global turmoil.
As AI is included in the aforementioned plan, its development shall also tackle many “security bottlenecks”. Firstly, there is a wide gap in the innovation and application of AI in the field of network security, and many scenarios are still at the stage of academic exploration and research.
Secondly, AI itself lacks a systematic security assessment, and there are severe risks in all software and hardware aspects. Furthermore, the research and innovation environment for AI security is not yet mature, and the relevant Chinese domestic industry is not yet in the top position and is still gaining experience.
Since 2017, in response to the AI3 Development Plan issued by the State Council, 15 Ministries and Commissions including the Ministry of Science and Technology, the Development and Reform Commission, etc. have jointly established an innovation platform. This platform is made up of leading companies in the industry, focusing on open innovation in the AI segment.
At present, thanks to this platform, many achievements have been made in the field of security. As the first team in the world to conduct research on AI infrastructure from a system-implementation perspective, Chinese researchers have found over 100 vulnerabilities in the main machine learning frameworks and dependent components used in China.
The number of vulnerabilities make Chinese researchers rank first in the world. At the same time, a future innovation plan -developed and released to open tens of billions of security big data – is being studied to promote the solution to those problems that need continuous updates.
The government’s working report promotes academic cooperation and pushes industry and universities to conduct innovative research into three aspects: 1) AI algorithm security comparison; 2) AI infrastructure security detection; 3) AI applications in key cyberspace security scenarios.
By means of state-of-the-art theoretical and basic research, we also need to provide technical reserves for the construction of basic AI hardware, open source software platforms (i.e. programmes whose source code is freely available and may be modified and redistributed by users) and AI security detection platforms, so as to reduce the risks inherent in AI security technology and ensure the healthy development of AI itself.
With specific reference to security, on March 23 it was announced that the Chinese and Russian Foreign Ministers had signed a joint statement on various current global governance issues.
The statement stresses that the continued spread of the Covid-19 pandemic has accelerated the evolution of the international scene, further unbalanced the global governance system and disrupted economic development, while new global threats and challenges have emerged one after another and the world has entered a period of turbulent change. The statement appeals to the international community to put aside differences, build consensus, strengthen coordination, preserve world peace and geostrategic stability, and promote the building of a more equitable, democratic and rational multipolar international order.
To ensure all this, the independence enshrined in international law is obviously not enough, nor is the possession of a nuclear deterrent. What is needed instead is a country’s absolute control over its own information security, which in turn orients and directs its weapon systems – systems whose remote control is coveted by the usual suspects.