
Science & Technology

AI Is Neither the Terminator Nor a Benevolent Super Being


Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.

On 1–3 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.

Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?

The fundamental difference between today’s technologies and those of the past is that they hold up a “mirror” of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In times of industrialization and production automation, the human being was a productive force. Today, people are no longer needed in the production of the technologies they use. For example, innovative Japanese automobile assembly plants barely have any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing – the assembly of a finite set of elements in a sequence, a task which robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world – questions we should be answering today, and not in ten years when it will be too late.

At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the digital dictatorship of AI. How real is that threat in the foreseeable future?

There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.

Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no “I.” Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing “mystical” about them either.

My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even “live” in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called “weak” AI.

We inflate the bubble of AI’s importance and erroneously impart this technology stack with subjectivity. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly based on big data.

Various high-level committees are discussing “strong” AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.

Sensationalizing threats is a trend in modern society. We take a problem that feeds people’s imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.

As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.

Yes, there is the danger that human consciousness may be “robotized” and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.

There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?

I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.

But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.

There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.

You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?

The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a “quasi-member of society.” In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity of such a scale that all previous atrocities in the history of our civilization would pale in comparison.

Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?

Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. “Let us give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive.” In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else’s problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.

Thankfully, we have managed to remove any inserts relating to “quasi-members of society” from the group’s agenda.

We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI – in fact, most oppose it.

What other controversial issues have arisen in the working group’s discussions?

We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of human-AI relationships, a term which implies the whole range of relationships possible between people. We suggested that “relationships” be changed to “interactions,” which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.

Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.

Is the Russian approach to AI ethics special in any way?

We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.

I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.

How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?

Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies with matching views on different aspects of the problem. And common sense still prevails. The egocentric approach we see in a number of countries that is currently being promoted, this kind of self-absorption, actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.

If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?

If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 “Artificial Intelligence” (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on Ethically Aligned Design. I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.

As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.

So the recommendations will become the basis for regulatory standards?

Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.

Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?

We haven’t seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots) – these are the achievements that we have so far. In order to make a qualitative leap, we need a different approach.

Take the “chemical universe,” for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is about 10⁶⁰ – far more than the number of stars in the observable Universe. This “chemical universe” could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this “chemical universe.” Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.

How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?

We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a “donor” of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.

As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.

In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology – organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.

Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to “overtake without catching up.” If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.

We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone else’s standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.

From our partner RIAC


Science & Technology

The Race for AI, Quantum Supremacy


On a hot summer’s morning in July, Robert Oppenheimer stood in a control bunker in New Mexico and watched the results of his Manhattan Project burn the desert sand, transforming it into mildly radioactive green glass. Years later, when asked what went through his head when he saw that great grey cloud rise out of the sand, he said he was reminded of Hindu scripture, the line spoken by Vishnu: ‘Now I am become Death, the destroyer of worlds’. Although, according to his brother, what he actually said after seeing the bomb explode was: ‘I guess it worked’.


As romantic as the potential of science can be, there is also a banality to the discoveries and inventions that shape our world. It is irrefutable that the atomic bomb changed the trajectory of the 20th century, ending the Second World War and fuelling the Cold War between the Soviet Union and the United States, and their proxies. Today, in an era when energy security, food and water shortages and wide-spread dignity-deficits make as many headlines as guns and tanks, investing in AI and quantum technologies can help ensure supremacy. But at what price?

With the world’s superpowers on the cusp of a full-blown AI arms race, things could turn ugly very fast unless efforts are made to guarantee sustainable security for all. AI and quantum technologies could still become game-changing weapons, much like the nuclear bomb. There are already smart bombs, and hypersonic missiles that are faster than ever imagined. AI will immediately provide speed and power, enabling systems to move faster and do more complex activities more efficiently. In short, AI will progressively increase our capabilities, for good or evil. The ultimate challenge will be for countries at the forefront of AI advancement, often geopolitical rivals, to create international frameworks that encourage the transparent development of impressive innovations whose benefits can be shared widely, and responsibly.


There are plenty of eye-catching stories depicting the use of AI in ‘killer drones’ or missile defence systems, and various world leaders have extolled the benefits of the technology in their militaries. But to focus on specific AI applications in the military is to miss the larger role that the technology is likely to play in global societies and potential conflicts. Military AI is at a relatively early stage of development, and while we can well imagine a future of robotic soldiers and other autonomous killing machines, this would be to ignore the unprecedented impact of AI and quantum technology on our future existence. In the near future, artificial intelligence will seep into every aspect of our societies and our economies, transforming our computational power, and with it the manufacturing speed, domestic output, energy usage, and all other processes and relations that define the economic success of a society. It is no wonder then that major global powers – China, Russia, the U.S. and others – have poured billions into R&D labs, developing quantum technology and artificial intelligence, in the hope of unlocking a level of extreme computational power that will catapult scientific, economic, military and technological advances into a new era.

In most developed countries, economic growth in the past half-century has been closely tied to advances in computational power, often from a relatively low base. The dash to quantum supremacy, whether by Google, IBM, or major entities in other nations, will propel states to domination of the global stage. This will come at a price for humanity, and the collateral damage is likely to be equitable and dignified peace, security and prosperity. A state that achieves unilateral and exclusive quantum supremacy could break the encryption of every other state and potentially dominate every aspect of world politics and critical infrastructure. It would encroach on our individual freedoms, cultural norms and identity. This won’t be sustainable and will trigger highly disruptive conflicts that could threaten the future of humanity as we know it.

So how do we prevent this doomsday scenario? We should start by taking an honest look in the mirror. History shows that it is in the nature of states to first strive for survival before ultimately aiming for domination. An unchecked hegemon is rarely fair, just or peaceful, regardless of its proclaimed ideals or political ethos. That is why multipolarity and multilateralism are necessary prerequisites for securing a sustainable future for humanity. Parity, or near parity, is not in the DNA of a hegemon, because most states still govern their national interest through zero-sum paradigms without regard to transnational, global or planetary interests. This is understandable. But it is unworkable in our instantly connected and deeply interdependent world. Despite the initial horror emanating from the use of nuclear weapons against Japan in 1945, near parity is what led nuclear states to enact treaties that governed the use of nuclear weapons. It also helped avoid, at least so far, scenarios of mutually assured destruction.

But we need not shackle ourselves to dated Cold War paradigms. In an anarchic, global system without a just, equitable or representative overarching authority, we should seek shelter in more sustainable approaches to global governance. Best embodied by “Multi-sum security” and “Symbiotic Realism” frameworks, these are defined by absolute gains, non-conflictual competition and win-win scenarios, thus guaranteeing sustainable security for all. Importantly, the future should not be taken hostage by any nation that unilaterally masters quantum supremacy. This would create a destructive and uncertain era that could lead to a dystopic stratification of peoples, cultures and states. Such a scenario may not start with a bang, but it could very well once again involve a scientist standing back, looking at their work and exclaiming ‘I guess it worked’.


Science & Technology

Potential of Nanotechnology


Emerging technologies such as AI, robotics and cyber have been in the limelight in the defence and military domains since the 1950s; however, nanotechnology has not had its fair share of publicity. The global nanotechnology industry is a rapidly expanding market, with an estimated worth of USD 2.4 billion in 2021 and a forecast to reach USD 33.7 billion by 2030. This is due to the growing use of nanotechnology in sectors such as urban farming, precision agriculture, medicine, engineering, energy, security, defence and the environment. While nanotechnology has proven tremendously beneficial for the civilian sector, it has valuable offerings for the military industry as well.

Nanotechnology is being used to develop nanoweapons – miniaturised weapons built from components at the nanoscale, roughly 1–100 nanometres. A practical example of this is evident in the reduction of drone size from about four feet to the size of a honey bee. Such weapons would fit in soldiers’ bags and pockets. Louis A. Del Monte, in his book ‘Nanoweapons: A Growing Threat to Humanity’, commented on the size of nanoweapons and termed them ‘nanobots’ with destructive potential.

The reduced size and enhanced capabilities offered by nanotechnology have allowed the development of highly sensitive nano-thermal and chemical sensors that can be of great value to military operatives. Nano-communication devices can be an effective tool for surveillance missions. For instance, nanotechnology has enabled video tracking and monitoring using 35x optical zoom nano multi-eye lenses, real-time nano-radar and nano-eye cloud storage. Such technologies could help militaries operate even in bad weather conditions and work around blind spots. Additionally, nanocomposite materials have good potential for the aerospace industry due to their light weight and extended durability under high pressure and at high speed. Nanotechnology could also significantly impact space-based intelligence, communication, imaging and signal processing. In the longer run, most military technologies will depend on nanomaterials. Nanotechnology is also being evaluated for use in unmanned platforms and robots. Its applications could even enable the development of mini-nukes weighing about five pounds and carrying the explosive power of 100 tonnes of TNT. Such an evolution in weapons could provide a competitive edge to militaries around the world.

To ensure a competitive edge, arms exporters are under tremendous pressure to outpace one another in this global nano-arms race. There is significant competition between the United States (US) and China in nanotechnology. Comparing the two countries’ progress on the basis of documented and published research suggests that China is ahead of the US, with more than 42% of globally published research articles (about 85,700) on nanotechnology. However, Louis A. Del Monte, in his book ‘Genius Weapons’, claimed that the US enjoyed a ‘substantial lead’ in nanoweapons and stated that this was a critical component of its ‘Third Offset Strategy’. He was also wary, however, that the rest of the world would catch up with the US’ technological developments within a few years. Countries such as India, Iran, South Korea, Germany, Japan, the United Kingdom (UK) and Russia have also shown great interest in nanotechnology.

India, in particular, is right behind the US and China in nanotechnology. India’s Defence Research and Development Organisation (DRDO) has established several nanotechnology research institutes to pursue interdisciplinary research. Institutes such as the Centre for Nanoscience & Nanotechnology (UIEAST) in 2005, the Centres of Excellence in Nanoelectronics (CEN) since 2006, the Centre for Nano Science and Engineering (CeNSE) in 2010, the Nanoscience Centre for Optoelectronics and Energy Devices (Nano-COED) and several other research labs are working in various areas of nanotechnology. Furthermore, India is also the third-largest producer of research papers on nanotechnology, behind China and the US. Additionally, DRDO employees have published books on the subject, such as ‘Nanotechnology for Defence Applications’, which discusses the potential of nanotechnology for the defence sector. The Indian defence forces have been eager to deploy nanotechnology on the battlefield and are working on a blueprint for its use in future warfare.

The Government of Pakistan founded the National Commission on Nano Science and Technology (NCNST) to assist universities and research centres in establishing nanoscience labs, and there is tremendous potential for the development of nanotechnology in the private sector. Moreover, a few universities have established research centres to conduct nanotechnology research. Despite these initiatives, the potential of nanotechnology in Pakistan has not been fully explored. Apart from a lack of capital and human resources, Pakistan’s weak patent base is a primary reason for this lag: the concentration of patents within the weapon-developing states limits interested states such as Pakistan. To address this bottleneck, there is a need to fund the representation of Pakistani patents at international nanotechnology conferences and markets. This would help secure space for learning and knowledge-sharing as well as for winning commercial contracts, which could become a significant source of revenue for Pakistan. Although less spoken of, nanotechnology is a fast-emerging field of knowledge that could significantly impact the future of warfare across the globe.


Science & Technology

Expanding Information Technology: A boon or bane?


Proponents and opponents of technological innovation argue over the blessings and harms of expanding technological advancement in the global arena. From hunter-gatherer societies to modern post-capitalist societies, the art that has delivered indisputable progress for humanity is the art of technology and change. Technological change provides the economic base for societal revolution in general. Yet, regardless of the unprecedented changes it has brought to communications, its expansion can also produce cyber warfare, violations of data privacy rights, political malice and threats to democracy.

Taking the positive side first, there is not an iota of doubt that expanded information technology has revolutionized the healthcare industry across the globe. A patient in Nigeria can connect to New York for a medical consultation with little effort; the paradigm of the health sector has changed at a remarkable pace. Secondly, in the political arena, the concept of e-governance has evolved: automation and information technology can be used to collect records and statistics and to craft new, efficient, evidence-based policies for the public. Yet, despite these robust socio-economic and socio-political changes in the structure of society, information technology has also dealt major setbacks to society’s overall growth.

The threat to individual liberty from mass surveillance has spread everywhere with the dawn of pervasive information technology. People have lost true independence and the liberty to choose and decide for themselves; Google and the media giants have displaced individual autonomy. The cannibalization of jobs is another flashpoint of the information-technology era, as human cognitive skills are outperformed by artificial intelligence. One of the most serious problems caused by expanded information technology is inequality: the flow of information technology channels revenue from the global South towards Silicon Valley. Most of the world’s data is owned by a minuscule minority, which is problematic. A small data elite can capture the entire globe within a few clicks, and the autocratic hold of data by companies poses a major threat to the independence and rational decision-making of individuals as well as of states. The economic inequality of the past was less potent than the emerging data inequalities between North and South.

Democracy, which rests on trust, is plagued by cyber-attacks and disinformation. Public opinion is engineered in firms where the analysis of public behaviour through apps such as Candy Crush is used to mould and shape opinions of a favoured leader. A democracy that stands on the general will is compromised by manufactured consent. Bot farms and big-data lobbying tailor wishes and preferences to run political campaigns that deliver votes for preferred candidates, while manipulated biases are reinforced through echo chambers that advertise people’s existing prejudices back to them in the service of political agendas. In this way, expanded information technology is replacing democracy with populism. The other side of democracy rests on communication: it was improved communication in society that established democratic governance in different parts of the world, but over time a matrix of misinformation and malfunctioning communication can halt the global growth and sustainability of democracy.

Yuval Noah Harari has argued that the biggest threat to the working class in the 21st century is not exploitation but irrelevance. In the past, technology could not replace human intellectual abilities, but artificial intelligence can now overshadow the cognitive skills of human beings. These cognitive skills were the peculiarly human traits that secured people’s positions in companies and firms, yet modern technology has overtaken them: robots and automated machines can now do a better job of hiring and recruiting people than humans can. For this reason, humans risk becoming irrelevant as jobs are cannibalized.

Every decision is increasingly owned by algorithms, which amounts to a kind of moral decadence. Google owns the mechanisms behind human preferences, likes and dislikes. It is a serious moral dilemma that expanded technology now encroaches on human authority and autonomy: the unique decision-making of humans is being replaced by tech-based decision powers, and the margin for independent thinking has declined in the 21st century. Scholars argue that the ultimate goal of Google is to have humans outsource every decision to Google.

Due to expanded technology, multinational companies and firms are becoming stronger and more sovereign than entire states. For example, Apple’s market value in 2021 was about $2,274 billion and Microsoft’s was about $1,989 billion – larger than the entire GDP of most nations in Asia. The digital elites have become super-humans, which poses a global threat to governance in third-world countries: the owners of big firms can sabotage and challenge the governance of any small country for the collective benefit of their companies. State sovereignty has been diluted and displaced by the more powerful, Leviathan-like traits of big data firms.

There are many possible remedies to expanded technology. The democratization of data is one way forward: the concentration and autocratic hold of entire data chains can be diluted by breaking up Big Data companies such as Google and Facebook into separate units. For example, Standard Oil was broken up into 34 companies when it became a giant holder of the oil supply in the United States; in the same vein, Big Data can be distributed into different units for the purpose of democratization. Secondly, strict government regulation and oversight mechanisms can be used to control artificial-intelligence research. The expansion of IT should be controlled and ethical; otherwise, it can become a potential threat to humanity.

Modern information technology has changed human lives in general, but the flip side of its negative outcomes cannot be overlooked. Ethics and innovation should be balanced; otherwise corporations will monopolize all data and algorithms for ulterior motives. Technological advances present significant opportunities for human progress and advancement, from nuclear deterrence to communication, but the long-lasting negative consequences are many, and they risk making modern technology a bane rather than a boon. It is high time for the world to reconsider the ethical side and the controlled expansion of information technology before it becomes an uncontrollable force threatening humanity’s ability to survive and sustain itself in the 21st century. The balance between expanding technology and human growth must be found in contemporary times.
