Wagner and Furst exhaustively explore the inner workings and implications of AI in their new book, “AI Supremacy: Winning in the Era of Machine Learning”. Each chapter focuses on the current and future state of AI within a specific industry, country or society in general. Special emphasis is placed on how AI will shape the domestic, diplomatic and military landscapes of the US, EU and China.
Here is an interview with Daniel Wagner.
Can you briefly explain the differences between artificial intelligence, machine learning, and deep learning?
Artificial intelligence (AI) is the overarching science and engineering of intelligent algorithms, whether or not they learn from data. However, the definition of intelligence is the subject of philosophical debate, and even the term "algorithm" can be interpreted in a wide context. This is one of the reasons there is some confusion about what is and is not AI: people use the word loosely and have their own definitions of what they believe AI to be. AI is best understood as a catch-all term that tends to imply the latest advances in intelligent algorithms, but the context in which the phrase is used determines its meaning, which can vary quite widely.
Machine learning (ML) is a subfield of AI that focuses on intelligent algorithms that can learn automatically (without being explicitly programmed) from data. There are three general categories of ML: supervised machine learning, unsupervised machine learning, and reinforcement learning.
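To make the supervised category concrete, here is a minimal sketch (not from the book) of what "learning from data without being explicitly programmed" means: a model fits the parameters of a line to labelled examples by gradient descent. The data and learning rate are illustrative assumptions.

```python
# Supervised learning in miniature: fit y = w*x + b to labelled
# (x, y) examples by gradient descent, in plain Python.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # examples drawn from y = 2x + 1

w, b = 0.0, 0.0   # the model starts with no knowledge of the rule
lr = 0.05         # learning rate (illustrative choice)

for _ in range(2000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # parameters recovered from the examples alone
```

No rule "multiply by 2 and add 1" was ever written into the program; it emerged from the data, which is the defining trait of ML.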
Deep learning (DL) is a subfield of ML that imitates the workings of the human brain, using artificial neural networks to process data and create patterns for use in decision-making. It is true that the way the human brain processes information was one of the main inspirations behind DL, but DL only mimics the functioning of neurons. This does not mean that consciousness is being replicated, because we do not yet understand the underlying mechanics driving consciousness. Since DL is a rapidly evolving field, there are other, more general definitions of it, such as a neural network with more than two layers. The idea of layers is that information is processed by the DL algorithm at one level and then passed on to the next level, so that higher levels of abstraction and conclusions can be drawn about the data.
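The layer idea can be sketched in a few lines of plain Python: each layer computes weighted sums of its inputs and hands the result to the next layer. The weights below are arbitrary illustrative values, not a trained model.

```python
# A minimal sketch of layered processing in a neural network:
# each layer transforms its input and passes the result onward.

def relu(v):
    # a common activation function: negative values become zero
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # one fully connected layer: weighted sum plus bias, per neuron
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

x = [1.0, 2.0]                                           # raw input features
h1 = layer(x, [[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1])     # first level of abstraction
h2 = layer(h1, [[1.0, -1.0], [0.3, 0.3]], [0.0, 0.0])    # higher level of abstraction
print(h2)
```

A real deep network works the same way, only with many more layers and with the weights learned from data rather than chosen by hand.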
Is China’s Social Credit Score system about to usher in an irreversible Orwellian nightmare there? How likely is it to spread to other dictatorships?
The social credit system that the Chinese government is in the process of unleashing is creating an Orwellian nightmare for some of China's citizens. We say "some" because many Chinese citizens do not necessarily realize that it is being rolled out. First, the government has been gradually implementing versions of what has become the social credit system over a period of years without calling it that. Second, most Chinese citizens have become numb to the intrusive nature of the Chinese state. They have been poked and prodded in various forms for so long that they have become accustomed to, and somewhat accepting of, it. That said, the social credit system has real consequences for those who fall afoul of it; they will soon learn about those consequences, if they have not already.
As we note in the book, the Chinese government has shared elements of its social credit system technology with a range of states across the world. There is every reason to believe that authoritarian governments will wish to adopt the technology and use it for their own purposes. Some have already done so.
How can we stop consumer drones from being used to aid in blackmail, burglary, assassination, and terrorist attacks?
As Daniel notes in his book Virtual Terror, governments are having a difficult time keeping track of the tens of millions of drones in operation in societies around the world. Registering them is largely voluntary, and there are too few regulations in place governing their use. Given this, there is little that can be done, at this juncture, to prevent them from being used for nefarious purposes. Moreover, drones' use on the battlefield is transforming the way individual battles will be fought and wars will be waged. We have a chapter in the book devoted to this subject.
Google, YouTube, Twitter and Facebook have been caught throttling/ending traffic to many progressive (TeleSur, TJ Kirk) and conservative (InfoWars, PragerU) websites and channels. Should search engines and social media platforms be regulated as public utilities, to lend 1st Amendment protections to the users of these American companies?
The current battles being waged (in the courts, in legislatures, and on the battlefield of social media itself) are already indicative of how the many unanswered questions associated with the rise of social media are being addressed out of necessity. It seems that no one, least of all the social media firms, wants to assume responsibility when things go wrong or uncomfortable questions must be answered. Courts and legislatures will ultimately have to find a middle-ground response to issues such as First Amendment protections, but this will likely remain a moving target for some time to come. There is no single black-or-white answer, and as each new law comes into effect its ramifications will become known, which means the laws will undoubtedly need to be modified in turn.
Do you think blockchain will eventually lead to a golden era of fiscal transparency?
This is hard to say. On one hand, the rise of cryptocurrencies brought with it the promise of money outside the control of governments and large corporations. On the other, cryptocurrencies have been subject to a number of high-profile heists, and there are still some fundamental issues with them, such as Bitcoin's throughput: the network can process only around seven transactions per second. This makes some cryptocurrencies less viable for real-world transactions and everyday commerce.
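The throughput ceiling follows from a back-of-envelope calculation. The figures below are approximate assumptions (roughly 1 MB blocks, a ten-minute block interval, and an average transaction of around 250 bytes), not exact protocol constants:

```python
# Back-of-envelope estimate of Bitcoin's transaction throughput.
# Assumed figures: ~1 MB blocks, ~10-minute block interval,
# ~250 bytes per average transaction.

block_size_bytes = 1_000_000
block_interval_s = 600
avg_tx_bytes = 250

tx_per_block = block_size_bytes / avg_tx_bytes    # roughly 4000 transactions fit in a block
tx_per_second = tx_per_block / block_interval_s   # spread over ten minutes

print(round(tx_per_second, 1))
```

Compare that single-digit figure with the tens of thousands of transactions per second a major card network handles at peak, and the everyday-commerce problem is clear.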
The financial services industry has jumped on the blockchain bandwagon, but they have taken the open concept of some cryptocurrencies and reinvented it as distributed ledger technology (DLT). To be part of DLTs created by financial institutions, a joining member must be a financial institution. For this reason, the notion of transparency is not relevant, since the DLT will be controlled by a limited number of members and only they will determine what information is public and what is not.
The other issue with the crypto space right now is that it is filled with fraud. At the end of the day, crypto is an asset class like gold or any other precious metal. It does not actually produce anything; the only real value it has is the willingness of another person to pay more for it in the future. It is possible that a few cryptocurrencies will survive long-term and become somewhat viable, but the evolution of blockchain will likely continue to move towards DLT that more people will trust. Governments are also likely to issue their own cryptocurrencies in the future, which will bring the technology into the mainstream.
Taiwan has recently started using online debate forums to help draft legislation, in a form of direct democracy. Kenya just announced that they will post presidential election results on a blockchain. How can AI and blockchain enhance democracy?
Online debate forums are obviously a good thing, because having the average person engage in political debate and being able to record and aggregate voting results will create an opportunity for more transparency. The challenge becomes how to verify the identities of the people submitting their feedback. Could an AI program be designed to submit feedback millions of times to give a false representation of the public’s concerns?
Estonia has long been revered as the world's most advanced digital society, but researchers have pointed out serious security flaws in its electronic voting system, which could be manipulated to influence election outcomes. AI can help by putting in place controls to verify that the person providing feedback on legislation is a citizen. Online forums could, for instance, require users to take a picture of their face next to their passport so that facial recognition algorithms can verify their identity.
Should an international statute be passed banning scientists from installing emotions, especially pain and fear, into AI?
Perhaps, for now at least, the question should be whether scientists should be prevented from building robots or other forms of AI that imitate human emotions. The short answer is that it depends. On one hand, AI imitating human emotions could be a good thing, such as when caring for the elderly or teaching a complex concept to a student. On the other, the risk is that when AI imitates human emotions very well, people may believe they have gained a true friend who understands them. It is somewhat paradoxical that the rise of social media has connected more of us, yet some people still admit that they lack meaningful relationships with others.
You don’t talk much about India in your book. How far behind are they in the AI race, compared to China, the US & EU?
Surprisingly, many of the world's countries have only adopted a formal AI strategy in the last year. India is one of them; it only formally adopted an AI strategy in 2018 and lags well behind China, the EU, the US, and a variety of other countries. India has tremendous potential to meaningfully enter the race for AI supremacy and become a viable contender, but it still lacks a military AI strategy. India already contributes to advanced AI-oriented technology through its thriving software, engineering, and consulting sectors. Once it ramps up a national strategy, it should quickly become a leader in the AI arena, to the extent that it devotes sufficient resources to that strategy and swiftly and effectively implements it. That is not a guaranteed outcome, based on the country's track record with some prior national initiatives. We must wait and see whether India lives up to its potential in this arena.
On page 58 you write, “Higher-paying jobs requiring creativity and problem-solving skills, often assisted by computers, have proliferated… Demand has increased for lower skilled restaurant workers, janitors, home health aides, and others providing services that cannot be automated.” How will we be able to stop this kind of income inequality?
In all likelihood, the rise of AI will, at least temporarily, increase the schism between highly paid white-collar jobs and lower-paid blue-collar jobs. At the same time, AI will, over decades, dramatically alter the jobs landscape. Entire industries will be transformed to become more efficient and cost-effective. In some cases this will result in a loss of jobs, while in others it will result in job creation. What history has shown is that, even in the face of transformational change, the job market has a way of self-correcting; overall levels of employment tend to stay more or less the same. We have no doubt that this will prove to be the case in the AI-driven era. While income inequality will remain a persistent threat, our expectation is that, two decades from now, it will be no worse than it is right now.
AI systems like COMPAS and PredPol have been exposed as racially biased. During YouTube's "Adpocalypse", many news and opinion videos were demonetized by algorithms indiscriminately targeting keywords like 'war' and 'racism'. How can scientists and executives prevent their biases from influencing their AI?
This will be an ongoing debate. Facebook removed a PragerU video in which a woman described the need for strong men in society and the problem with feminizing them. Ultimately, Facebook said it was a mistake and put the video back up. So the question becomes: who decides what constitutes "racist" or "hate speech" content? Legal issues seem to emerge if it can be argued that the content being communicated is calling on people to act in a violent way.
Could the political preferences of a social media company's executives overrule the common person's ability to make up their own mind? On the other hand, India has seen a string of mob killings sparked by disinformation campaigns on WhatsApp, mostly involving first-time smartphone users. Companies could argue that some people are not able to distinguish between real and fake videos, so content must be censored in such cases.
Ultimately, executives and scientists will need to have an open and ongoing debate about content censorship. Companies must devise a set of principles and adhere to them to the best of their ability. As AI becomes more prevalent in monitoring and censoring online content there will have to be more transparency about the process and the algorithms will need to be adjusted following a review by the company. In other words, companies cannot prevent algorithmic biases, but they can monitor them and be transparent with the public about steps to make them better over time.
Amper is an AI music composer. Heliograf has written about 1000 news blurbs for WaPo. E-sports and e-bands are starting to sell out stadiums. Are there any human careers that you see as being automation-proof?
In theory, nearly any cognitive or physical task can be automated. We do not believe that people should be too worried, at least for the time being, about the implications of doing so, because the cost of automating even basic tasks to the level of human performance is extremely high, and we are a long way from being technically capable of automating most tasks. However, AI should spark conversations about how we want to structure our society in the future and what it means to be human, because AI will improve over time and become more dominant in the economy.
In Chapter 1 you briefly mention digital amnesia (outsourcing the responsibility of memorizing stuff to one’s devices). How else do you anticipate consumer devices will change us psychologically in the next few decades?
We could see a spike in schizophrenia because of the immersive nature of virtual, augmented, and mixed reality, which will increasingly blur the lines between reality and fantasy. In the 1960s there was a surge of interest in mind-expanding drugs such as psychedelics. However, someone ingesting LSD knew there was a time limit on the effects of the drug. These technologies do not end. Slowly, the real world could become less appealing and less real for heavy users of extended-reality technology. This could affect relationships between humans and increase the incidence and severity of mental illness. Also, as discussed in the book, we are already seeing people who cannot deal with risk in the real world. There have been several cases of animal maulings, cliff falls, and car crashes among individuals in search of the perfect "selfie". This tendency to want to perfect our digital personas should be a topic of debate in schools and at the dinner table.
Ready Player One is the most recent sci-fi film positing the gradual elimination of corporeal existence through Virtual Reality. What do you think of the transcension hypothesis on Fermi’s paradox?
The idea that our consciousness can exist independently from our bodies has recurred throughout human history. As far as we can tell, consciousness is a product of our living bodies. No one knows whether a person's consciousness can persist after the body dies, though some have suggested that the brain still functions for a few minutes afterwards. It seems we need to worry about the impact of virtual reality on our physical bodies before it will be possible for us to transcend those bodies and exist on a digital plane. This is a great thought experiment, but there is not enough evidence to suggest that it is even remotely possible in the future.
What role will AI play in climate change?
AI will become an indispensable tool for helping to predict the impacts of climate change in the future. The field of “Climate Informatics” is already blossoming, harnessing AI to fundamentally transform weather forecasting (including the prediction of extreme events) and to improve our understanding of the effects of climate change. Much more thought and research needs to be devoted to exploring the linkages between the technology revolution and other important global trends, including demographic changes such as ageing and migration, climate change, and sustainable development, but AI should make a real difference in enhancing our general understanding of the impacts of these, and other, phenomena going forward.
AI Is Neither the Terminator Nor a Benevolent Super Being
Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.
On 1–3 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.
Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?
The fundamental difference between today's technologies and those of the past is that they hold up a "mirror" of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In times of industrialization and production automation, the human being was a productive force. Today, people are no longer needed in the production of the technologies they use. For example, innovative Japanese automobile assembly plants barely have any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing: the assembly of a finite set of elements in a sequence, a task which robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world, questions we should be answering today, and not in ten years when it will be too late.
At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the digital dictatorship of AI. How real is that threat in the foreseeable future?
There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.
Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no “I.” Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing “mystical” about them either.
My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even “live” in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called “weak” AI.
We inflate the bubble of AI’s importance and erroneously impart this technology stack with subjectivity. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly based on big data.
Various high-level committees are discussing “strong” AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.
Sensationalizing threats is a trend in modern society. We take a problem that feeds people’s imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.
As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.
Yes, there is the danger that human consciousness may be “robotized” and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.
There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?
I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.
But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.
There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.
You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?
The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a "quasi-member of society." In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity on such a scale that all previous atrocities in the history of our civilization would pale in comparison.
Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?
Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. "Let us give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive." In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else's problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.
Thankfully, we have managed to remove any inserts relating to “quasi-members of society” from the group’s agenda.
We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI – in fact, most oppose it.
What other controversial issues have arisen in the working group’s discussions?
We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of human-AI relationships, a term which implies the whole range of relationships possible between people. We suggested that “relationships” be changed to “interactions,” which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.
Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.
Is the Russian approach to AI ethics special in any way?
We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.
I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.
How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?
Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies whose views match ours on different aspects of the problem. And common sense still prevails. The egocentric approach currently being promoted in a number of countries, this kind of self-absorption, actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.
If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?
If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 “Artificial Intelligence” (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on Ethically Aligned Design. I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.
As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.
So the recommendations will become the basis for regulatory standards?
Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.
Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?
We haven’t seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots) – these are the achievements that we have so far. In order to make a qualitative leap, we need a different approach.
Take the "chemical universe," for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is estimated at 10⁶⁰, more than the number of stars in the observable universe. This "chemical universe" could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this "chemical universe." Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.
How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?
We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a “donor” of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.
As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.
In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology – organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.
Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to “overtake without catching up.” If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.
We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone else’s standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.
From our partner RIAC
India’s outer space ambitions: a first crewed mission in 2021 and the geopolitics involved
The Indian outer space programme has achieved extraordinary progress in the last decade.
The country is already known as a “bright spot” of the global economy, relying on a vast internal market, a notably young population and one of the most sustained growth rates among the BRICS: all characteristics that make it one of the top ten economies in the world.
Additionally, India boasts a portfolio of successful space missions and a proficiency in operating low-cost space projects, which have made it a highly esteemed player in the international arena.
The agenda for the 2020-2030 decade is packed with challenging missions: a landing on the Moon and the establishment of the first Indian solar observatory in 2020, orbiters around Venus and Mars in the two-year period 2023-2024, the country’s first crewed orbital spaceflight in 2021, and the installation of the first modular space station in 2030. This ambitious calendar is the starting point on India’s drive to a top position among space-faring nations by the end of the decade.
The emerging Industry 4.0 – made up of Artificial Intelligence (AI), autonomous robots, big data synthesis, hyper-automation and digital manufacturing – represents the core of the current decade: hardware and software technologies that merge physical and digital spaces into cyber-physical systems are superseding the Industry 3.0 technologies that have fuelled space operations until now.
In India’s space history, Indian Air Force (IAF) pilot Rakesh Sharma became, in 1984, the first and so far only Indian citizen to venture into space, flying aboard a Soviet rocket for a week-long stay on the Salyut 7 space station. By December 2021, the country is determined to run its own crewed spaceflight programme, called Gaganyaan, which will launch three astronauts into low Earth orbit for one week. In order to improve its odds of success, the Indian Space Research Organisation (ISRO) is going to carry out two un-crewed test flights, in December 2020 and July 2021 respectively. What really sets these flights apart from past missions is the announced intention to launch a humanoid robot named Vyommitra into low Earth orbit, to act as a dummy astronaut on the two test flights. Vyommitra is a half-humanoid, but it is equipped with communication systems that enable it to perceive its environment and to communicate with astronauts. It is set up to react to its surroundings, carry out life-support operations and simulate crew activities: all procedures that will help anticipate issues and ensure the safety of the crew on board ahead of the 2021 crewed flight.
As laid out in the ISRO Report 2020, human spaceflight represents a giant stepping stone towards long-term technological breakthroughs, and India cannot miss this golden opportunity.
However, although ISRO’s ambitions include a human spaceflight programme, the organisation has no know-how in astronaut training.
In light of this, it has issued a call for international cooperation with Russia’s space agency Roscosmos and with France. Specifically, since January 2020, four Indian Air Force (IAF) pilots have been attending a twelve-month programme at the Yuri Gagarin Cosmonaut Training Center near Moscow. The pilots-turned-astronauts are undergoing intensive physical and biomedical training, focused first on preparation for extraordinary flight circumstances and second on monitoring astronauts’ health from take-off to landing.
ISRO may not yet be at the helm of crewed spaceflight, but it certainly cannot be said that India fails to push its limits or to thrive in challenging environments.
The Chandrayaan lunar missions (Chandrayaan-1 and -2), the Mars Orbiter Mission and the launch of 104 satellites at once are excellent examples of how outer space has increasingly become a source of national prestige from which both Indian citizens and the international community can benefit.
Likewise, space has also become an instrument of foreign policy and diplomacy, as well as a lever for military modernisation. On this latter point, in 2019 India abandoned its traditional opposition to the militarisation of space and embraced a new, more resolute approach to its national space policy. That same year, the country established a Defence Space Agency (DSA), which was already forecast at the time to be a milestone in the coming space revolution. The DSA must team up with its civilian counterpart, ISRO, to boost India’s technological capabilities in space affairs. In particular, the two agencies are expected to overcome the long-standing competition and distrust between the civilian and military sectors by developing Industry 4.0 innovation ecosystems and incorporating Industry 4.0 automation and components into all their future projects and missions.
Moreover, on 24 June 2020, the Union Cabinet decided to institute a new body – the Indian National Space Promotion and Authorisation Centre (IN-SPACe) – which will pursue a greater involvement of private industry, academia and research institutions in India’s space sector. IN-SPACe, which is expected to reach its final operational capability within six months, will act as a junction point between ISRO and any like-minded private organisation determined to participate in the research and development (R&D) of new technologies, exploration missions and the human spaceflight programme.
The call to the private sector finds its origins in remarks by ISRO’s chairperson, Kailasavadivoo Sivan, during a recent interview. He declared that Indian industry had barely a three per cent share in a rapidly growing global space economy already worth at least $360 billion, and that only two per cent of this market was for rocket and satellite launch services, which require fairly large infrastructure and heavy investment.
Although profit-making and strategic reasons make private involvement in the space sector crucial, Indian industry still seems unable to meet its technological demands on its own.
Despite this rocky start, New Delhi should set out a long-term strategic national space vision, evolve its military doctrines, devise new policies and, in doing so, change the geopolitics of the region and the world at large.
If the above-mentioned Gaganyaan mission, due to depart in 2021, is successful, India will join the ranks of Russia, the U.S. and China in launching its own crew into space.
The geopolitical context that would follow would match India’s growing role on the world stage, and these key diplomatic results could be very convenient for the United States: strengthening relations with India and bringing the country to the “table of the greats” alongside Europe and Russia would isolate China, which is also on the hunt for extra-planetary successes.
From our partner International Affairs
FLATOD-19 – Flexible Tourism Destinations: An innovative management tool for visitors and destinations
In the time of the Covid-19 epidemic, destinations of any kind around the globe must consider the probability that they will never reach “zero cases”, i.e. a full elimination of the disease. Tourism destinations in particular have to consider a list of parameters regarding their ability to operate within a COVID-19 environment (or that of any other epidemic) and to respond fast, if they are to maintain their presence on the world “tourism map”.
Greece, as a country with many islands that are separate tourism destinations in themselves, faces a unique dilemma in identifying which destinations can “open” to tourism, together with the “when” and the “how”. Destinations aiming to “open” must also, on top of the national and international legal framework, apply:
1. an Integrated Plan for the management of facilities, visitors and locals, so as to cope with incidents and crises in a flexible way, including even the unpleasant “closing-and-reopening” scenario;
2. effective tools to communicate their managerial sufficiency to all interested parties, especially to potential visitors and local communities.
The Project FLATOD-19:
The FLATOD-19 Methodology covers these challenges and strategic necessities, the main one being the high “enrichment” rate of the destination’s population, i.e. a weekly inflow and outflow of visitors equal to 30-100% of the local population, most of them with a vague epidemiological status, risking uncontrollable outbreaks.
As stated by Greece’s chief epidemiologist, Mr. Sotirios Tsiodras, such tourism environments make the tracking of cases incredibly difficult, especially among tourists. The presence of capable health infrastructure at a tourism destination is therefore of little practical value on its own, because an outbreak can easily saturate its capacity and crush the existing health system. It is instead of vital importance to implement methods for the preventive control of possible transmission and, even more importantly, for the identification and very fast tracing of cases at both hotel and destination level. The issue of “speed” should be emphasized: beyond the possible medical and health effects, there is another, neglected issue, namely the huge indirect costs entailed by delaying a well-structured reaction (flight diversions, quarantines, etc.).
The innovative FLATOD-19 methodology supports the pre-planned containment of incidents and the rapid tracing of cases through the following pillars:
A) Categorization of destinations based on the probable efficiency of “opening” and on the visitors’ characteristics (e.g. their country of origin), always preceded by an economic and technical feasibility audit at the very beginning.
B) Formation of a collaborative leadership scheme for implementing the project at any required local level, and installation of a critical-information management system (mini MIS).
C) Management of visitors by grouping them into “clusters” at country, destination and hotel level, using zoning and time-slotting techniques, from the booking stage until their departure.
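As an illustration only (the destination name and figures are hypothetical, and the 30-100% range is the one described above), the “enrichment” metric behind the methodology could be computed like this:

```python
from dataclasses import dataclass

@dataclass
class Destination:
    name: str
    local_population: int
    weekly_visitors: int  # weekly input-output of visitors

def enrichment(d: Destination) -> float:
    """Weekly visitor flow as a fraction of the local population."""
    return d.weekly_visitors / d.local_population

def enrichment_band(ratio: float) -> str:
    # 30-100% is the range of population "enrichment" described above.
    if ratio < 0.30:
        return "below target range"
    if ratio <= 1.00:
        return "within target range"
    return "above target range"

island = Destination("Example Island", 15_000, 9_000)  # hypothetical figures
ratio = enrichment(island)  # 0.6, i.e. 60% weekly enrichment
```

A real implementation would feed this figure, per destination and per week, into the critical-information management system of pillar B.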
Our methodology was developed by a multidisciplinary group of experts, scientists and consultants representing many complementary sectors of the tourism industry and living in four countries:
MAIN STUDY TEAM
• Dimitris Vassiliou, MSc Management Science & OR, Destination Marketing & Gastronomy tourism expert, Owner of “Authentic Greece – Local Products & Destinations”
• Prof. Michalis Toanoglou, PhD Hospitality Management, Sustainable Destination Management expert (S. Korea, Woosong University)
• Kiki Domzaridou, Chemist, MSc, MBA, Quality Management Systems expert, Food Safety Lead auditor
• Emmanouil Paterakis, General / Family Doctor, member of the Board of Directors of the Medical Association of Heraklion, Crete
• Iris Kouveli, MSc Sports Management, Sport Events & Destinations Integrator
• Argyri Katapodi, MSc Finance & Investment, Luxury Hospitality Business/ CEO
• Dimitrios Soukeras, MBA(ER), SJSU Faculty, Risk & Incident Analysis Expert- O.Diagnosis LTD/CEO
• Dr Melas Christos, Assistant Professor in Health Informatics, School of Health Sciences, Hellenic Mediterranean University, Crete. Collaborating Academic Staff, Business and Organisation Management, Hellenic Open University
• Christos Mammidis, BBA, MBA, Communication expert / PR Strategist, PR&More Ltd
• Vasilis Zissimopoulos, CEO – Founder Costa Nostrum- Sustainable Beaches/ Company of Certification for Sustainable Beaches
• Marios Papadakis, MD, PhD, MBA, plastic surgeon, Ishou University hospital, Taiwan
Since the beginning of the pandemic, our team has striven, and indeed “envisioned”, to cover an expected gap in planning and to help not only Greece but also other tourism-based countries apply practical solutions to complex problems in the era of the Covid-19 pandemic.
We would like to underline that our effort, given its research-based and thus innovative character, DOES NOT claim completeness or perfection. Our model targets specific types of destinations and has not yet fully incorporated some operational and marketing parameters. Moreover, we in no case intend to replace a country’s structures or any scientific organization in the provision of health data or the use of epidemiological models, although we plan to develop a model focused specifically on tourism.
Consequently, we declare that we are open to collaborations with individuals, groups and institutions, and we will be happy to meet and cooperate with any interested parties willing to contribute to the common cause.
With honour and a sense of responsibility,
The FLATOD-19 Team
For any information please contact:
Dimitris E. Vassiliou, e-mail: dvas[at]apelop.gr
Prof. Michalis Toanoglou, e-mail: toanogloum[at]icloud.com