Russian Cyber Sovereignty: One Step Ahead

Authors: Alexander Tabachnik and Lev Topor*

Cyber warfare is becoming more prominent and frequent than ever before in the international arena. The struggle for hegemony, influence and power pushes international actors, mainly states, to develop their cyber capabilities in order to spy on, sabotage and influence other actors. Globalization and the proliferation of knowledge, know-how, expertise and technology in general have made cyber warfare relatively cheap and easy to execute in comparison with conventional warfare. In fact, international law regarding cyber warfare, or rather the lack of it, as well as newly emerging norms between states, has made cyber warfare especially lucrative. That is, since there are no binding norms or laws regarding cyber operations, and since it is extremely difficult to attribute a cyber attack to its real perpetrator, traditional military or economic punishment is difficult to justify. This makes deterrence slow, blunt and ineffective.

In this article, we seek to discuss the importance of the Russian cyber domain and its position in the international struggle for power, influence and national security. Specifically, we argue that Russia's cyber domain acts as a barrier against foreign cyber operations, especially since the West has escalated its operations against Russia in recent years. We also argue that in the field of national security and national interests within the cybersphere, Russia has an advantage over other powers such as the United States or the United Kingdom. Further, we discuss the structure, vulnerabilities and importance of cyberspace in international relations, as well as the current state of Russia's defensive cyber domain.

Less-regulated cyber domains, such as those of the U.S. and UK, are vulnerable and prone to foreign attacks not only by their adversaries but also by rogue and anonymous hacking groups and cyber criminals [1]. Interestingly, clandestine surveillance programs such as the American Presidential Policy Directive 20 (PPD-20), which was leaked by Edward Snowden in 2013, allow the U.S. to spy on citizens and foreigners, but not to completely protect itself from cyber operations. It is difficult to guarantee both security and privacy. However, since cyber attacks are on the rise, should security not come first?

Interestingly, in that regard, the Russian cyber domain is "one step ahead" of other international actors, mainly global powers such as the U.S., the UK and most of the European Union. Due to the problem of attribution and the increase in the practice of cyber warfare, Russia perceives the cyber domain, cyberspace, as a threat to Russian national security and stability. On the one hand, the U.S. unsuccessfully tried to present the norm of privacy as more important than security while conducting espionage and regulation through hidden initiatives such as PPD-20. On the other hand, Russia acted with transparency when it placed the norm of security ahead of privacy and changed its regulation of the cyber domain accordingly. Indeed, criticism has been raised over recent Russian initiatives such as the Yarovaya Law or the Sovereign Internet Law, and the criticism and concerns are legitimate. However, we argue that with respect to the international struggle over power and security, Russia has the lead over Western powers. Philosophically speaking, what good is privacy if there is no national security? Moreover, Russian privacy is already undermined by foreign forces (i.e., states and cyber criminals) that spy and exert influence.

Cyberspace: Structure and Vulnerabilities

Cyberspace is complex and ubiquitous. By the definition of the U.S. Joint Chiefs of Staff (JCS), cyberspace is "the domain within the information environment that consists of the interdependent network of information technology (IT) infrastructures and resident data. It includes the Internet, telecommunications networks, computer systems, and embedded processors and controllers." The U.S. JCS refers mostly to the operational level of virtual cyber operations. In practice, however, cyberspace is comprised of several layers, each with its own unique characteristics, and each layer facilitates and acts as the infrastructure for the next one. Thus, as suggested by Yochai Benkler or by Nazli Choucri and David D. Clark, there are four layers to cyberspace: the physical foundations, the logic layer, the information layer, and the users. These layers affect international relations (IR), and IR affects them.

The physical layer of cyberspace is the infrastructure. It consists of the physical elements necessary for the functioning of the internet: fiber optic cables, cable nodes, satellites, cellular towers, servers, computers and other physical components, all of which serve as a base for the next layer (the logic layer). Fiber optic cables are of particular importance since they interlink the world, mostly through submarine cables. These cables carry approximately 95% of intercontinental telecommunications traffic, with the rest carried by satellite communication used for military and research purposes. Without such an extensive layout, the internet would be usable only by state actors and not by the general public globally. The vulnerabilities of this layer lie within the physical elements themselves – cables can be cut, damaged, hacked or eavesdropped on. Furthermore, physical damage is in most cases difficult to repair, as it requires special ships and equipment, and in the case of satellite damage a new satellite would probably be needed. Repairs are difficult and expensive.

The next layer is the logic layer, the central nervous system of cyberspace, responsible for routing information and data from clients to servers and back to clients. This happens through systems such as the Domain Name System (DNS), the Transmission Control Protocol (TCP), the Internet Protocol (IP), browsers, websites and other software that makes use of the physical foundations, to name key examples. The vulnerabilities of this layer are numerous; manipulation of these communication systems and denial of service (DoS) are just two examples.
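To make the logic layer concrete, the short Python sketch below performs its two most basic operations, a DNS lookup and a TCP connection. It is purely illustrative; example.com is just a placeholder host.

```python
# Minimal illustration of the logic layer described above: DNS resolution
# followed by a TCP connection. Purely illustrative; example.com is a placeholder.
import socket

hostname = "example.com"

# DNS: translate a human-readable name into an IP address.
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")

# TCP: open a reliable, ordered connection to that address (here, port 443),
# riding on top of the physical layer of cables, routers and servers.
with socket.create_connection((ip_address, 443), timeout=5) as conn:
    print(f"TCP connection established: {conn.getsockname()} -> {conn.getpeername()}")
```

Every step in this chain, from the name lookup to the packets on the wire, is a point where traffic can be rerouted, blocked or inspected, which is what makes the logic layer so attractive to attackers and regulators alike.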

For instance, with regard to the physical and logic layers, Russia was accused of attacking the American power grid, and the U.S., in turn, also attacked the Russian power grid. Another example: during the Cold War, American ships and submarines conducted espionage and eavesdropping operations on Soviet undersea communication cables. Today, the U.S., Russia, China and other capable powers all conduct underwater espionage operations.

Next is the information layer, comprised of encoded text, photos, videos, audio and any other kind of data stored, transmitted and transformed through the IP. The main vulnerability of this layer is the information itself, which is susceptible to manipulation through malicious or unwanted means such as disinformation material and malware. Foreign actors can also steal valuable and protected information. Needless to say, all of these types of information can be manipulated and adapted as needed for cyber operations.

The final layer consists of the users, who shape the cyberspace experience and its nature by communicating with each other and by creating and spreading content. The main vulnerability of the users is other, manipulative users (i.e., foreign agents, criminals, terrorists). In this regard, for instance, the global Covid-19 crisis has been accompanied by an "infodemic." A "global battle of narratives" is taking place, as argued by the European Union High Representative for Foreign Affairs and Security Policy, Josep Borrell, on March 24, 2020. China is accused of promoting theories suggesting that the U.S. Army was responsible for introducing the disease while visiting Wuhan in October 2019. Thus, while the outbreak might have occurred in Wuhan, this kind of disinformation campaign shifted the perception of origin away from China and blamed the virus on the U.S. "Not wishing to waste a good crisis," China (as well as others) is promoting an intelligent and data-driven campaign against its global adversaries.

Cyber Warfare: A Tool of International Security

Since Westphalia, states have longed to preserve their sovereignty. States, especially great powers, usually prefer to avoid conventional conflict and Mutual Assured Destruction (MAD), as experienced during the Cold War. Whenever states do intervene in the affairs of others, they do so to acquire territory, domains or power, to protect ethnonational groups, or to defend economic, military or diplomatic interests. States also intervene for ideological reasons and, lastly, to keep or adjust the regional or global balance of power. Cyberspace thus serves as the perfect domain for avoiding conventional military conflict, and even a MAD situation, while still pursuing the objectives that drive intervention, and states and other IR actors do so by using cyber warfare strategies and tactics.

Cyber warfare can be broadly defined as the use of cyber weapons and other systems and means in cyberspace for the purpose of injury, death, damage, destruction or influence over international actors and/or objects. Acts of cyber warfare can be executed by all types of IR actors, including individuals, organizations, companies, states and state proxies. Cyber warfare is an integral part of the defense and offense strategies used by many international actors. Russian military officials do not use the term "cyber warfare" as a standalone term; they prefer to conceptualize it within the wider framework of information warfare – a holistic approach which includes, inter alia, computer network operations, electronic warfare, and psychological and informational operations.

Regarding IR and international security (IS), cyber warfare can be viewed from two different perspectives: a revolutionary one and an evolutionary one. From the revolutionary perspective, cyber warfare and cyber weapons amount, to some extent, to a revolution in military affairs, much as sailors once perceived the development and spread of airplanes. That is, much like airplanes, cyber weapons can transform strategies and shift the balance of power in the international arena. From the evolutionary perspective, cyber warfare and cyber weapons are merely a tactical development with no drastic strategic changes; as mentioned before, IR actors still seek power and influence over others and are willing to fight for them, as they have for centuries. We assume that the Russian approach to the cyber domain can be defined as evolutionary. A similar argument was recently shown to hold for the term "Hybrid Warfare," as published by Vassily Kashin.

Generally, the arsenal of cyber warfare tactics includes acts of espionage, propaganda, denial of service, data modification and infrastructure manipulation or sabotage. Further, according to the Tallinn Manual on the International Law Applicable to Cyber Operations, some tactics, such as espionage, data modification and the spread of false information, are not illegal. At the same time, cyber attacks can be regarded as kinetic attacks, and retaliation can be justified, only if the victim can reveal the true and full identity of the perpetrator — a very rare situation nowadays.

Consequently, the characteristics of cyber tactics make them very attractive for use. Countries like the U.S., UK, Russia, China, smaller regional powers such as Israel or Iran, rogue states like North Korea and even terror organizations and human rights organizations are all shifting towards cyberspace.

Russian Cyber Sovereignty: A Barrier Against Foreign Influence

Russian authorities perceive cyberspace as a major threat to Russian national security and stability, as the flow of information in cyberspace could undermine the regime. Social networks, online video platforms, secure messaging applications and foreign-based internet mass media remain a great concern, as Moscow has no control over the information on these platforms, which are either created or influenced by Russia's global competitors such as the U.S. or the UK. Yet, as we show, cyberspace is a domain only partly controlled by the authorities, enabling a relatively free flow of information, while Russia still seeks to take some necessary precautions.

Russian authorities, through legislation and cyber regulation, strive to control Russian cyberspace in order to prevent or deter, as much as possible, the dissemination of information which may mar the positive representation of its regime, or any activity which may endanger the regime’s stability. Therefore, Russian authorities seek to control the content of the information layer and the information circulating in Russian cyberspace.

Generally, Russian legislation directed at control over domestic cyberspace consists of two major, interconnected categories, which can be defined as legal-technological and legal-psychological. The most prominent legal-technological efforts by Russian authorities consist of the following measures: the Yarovaya Law; Russia's "sovereign internet" law; the mandatory installation of the SORM system; and the law making Russian applications mandatory on smartphones and other devices. Simultaneously, the legal-psychological efforts consist of three major measures: the "disrespect" law, the "fake news" law and the new "foreign agent" law. As explained below, these are meant to deter Russians and others from spreading disinformation from within.

The Yarovaya Law obliges distributors of information, such as internet and telecom companies, messengers and other platforms that allow the exchange of information, to provide encryption/decryption keys (necessary for decoding transmitted electronic messages) to the Russian special services, such as the FSB, upon request. Moreover, according to this law, big data attributed to activity in Russian cyberspace must be stored on Russian territory, and the special services must have unrestricted access to this data [2]. For example, companies like Facebook or Google must store the data and activity records of their Russian users on Russian territory and provide unrestricted access to the Russian special services.

Furthermore, the Decree of the Government of the Russian Federation of April 13, 2005 (No. 214), as amended on October 13, 2008, regarding SORM (Russia's System of Operational-Investigatory Measures) requires telecommunication operators to install equipment provided by the FSB. This allows the FSB and other security services to monitor, unilaterally and without limit or warrant, users' communications metadata and content: web browsing activity, emails, phone calls, messengers, social media platforms and so on. Moreover, the system is capable of Deep Packet Inspection (DPI). The SORM system is thus one of the major tools for implementing and enforcing the Yarovaya Law. While the Yarovaya Law is criticized by many for harming citizens' privacy, it could be extremely effective for its initial and official purpose, which is countering terrorism and foreign missionary interventions.
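To illustrate what Deep Packet Inspection adds over plain metadata collection, here is a minimal, purely illustrative Python sketch using the scapy library; the traffic capture, keyword list and output are invented for the example and say nothing about how SORM itself is implemented.

```python
# Toy contrast between metadata collection and Deep Packet Inspection (DPI).
# Requires the scapy library and root privileges; the keyword list is made up.
from scapy.all import sniff, IP, TCP, Raw

WATCHED_KEYWORDS = [b"example-keyword"]  # hypothetical terms of interest

def inspect(packet):
    if not packet.haslayer(IP):
        return
    # Metadata level: who talks to whom, when, and on which port.
    src, dst = packet[IP].src, packet[IP].dst
    port = packet[TCP].dport if packet.haslayer(TCP) else None
    print(f"metadata: {src} -> {dst} port {port}")
    # DPI level: look inside the payload itself (only meaningful for
    # unencrypted traffic, which is one reason key-disclosure rules matter).
    if packet.haslayer(Raw):
        payload = packet[Raw].load
        if any(word in payload for word in WATCHED_KEYWORDS):
            print(f"DPI match: {src} -> {dst} carries a watched keyword")

# Capture a handful of packets and run the inspector on each one.
sniff(prn=inspect, count=20)
```

The metadata lines alone already reveal who communicates with whom; the payload check is what distinguishes DPI, and it is also why access to encryption keys, as mandated by the Yarovaya Law, is so consequential.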

On December 2, 2019, Russian President Vladimir Putin signed legislation requiring all computers, smartphones and smart devices sold in Russia to be pre-installed with Russian software. The government subsequently announced a list of applications developed in Russia that must be installed on these categories of devices, and the legislation came into force on July 1, 2020. Apparently, a further initiative will be promoted later on, calling for devices to be registered with government-issued serial numbers. This will allow Moscow to tighten control over end-users through regulation, monitoring and surveillance. Such laws can also help Russia avoid relying on technology companies for crime and terror forensics, as the U.S. had to do, for instance, after the December 2019 terror attack at a naval base in Florida.

On May 1, 2019, President Vladimir Putin signed the law on Russia's "Sovereign Internet," effectively creating the "RuNet" — Russia's internal internet. The goal of this law is to enable the Russian internet to operate independently from the World Wide Web if and when requested by Moscow. In practice, this "kill switch" allows Russia to operate an intranet, a restricted regional network like those used by large corporations or militaries. It gives the authorities the capacity to deny access to parts of the internet in Russia, ranging from cutting access to particular Internet Service Providers (ISPs) to cutting all internet access in Russia. Given the risks of foreign cyber operations such as disinformation or even physical eavesdropping, this "kill switch" can prevent Russia from suffering a dangerous offensive. It also means that Russia could initiate cyber warfare while keeping itself protected, at least from outside threats.

At the same time, the legal-psychological efforts consist of three laws directed at preventing the distribution of unreliable information and of criticism directed at the government's activities and officials. For example, the "disrespect" law allows courts to fine and imprison people for online mockery of the government, its officials, human dignity and public morality; it applies to the dissemination of information through information and telecommunication networks. Additionally, the "fake news" law outlaws the dissemination of what the government defines as "fake news" – unreliable socially significant information distributed under the guise of reliable information.

These laws give Roskomnadzor (the Federal Service for Supervision of Communications, Information Technology and Mass Media), the Kremlin's censorship agency, the authority to remove unreliable content from the web. Moreover, the law prescribes heavy fines for knowingly spreading fake news and requires ISPs to deny access to websites disseminating fake news under a pretrial procedure, following an appropriate decision issued by Roskomnadzor. Effectively, this creates a strong disincentive to cooperate with foreign propaganda campaigns or other unwanted forces.

Next, the "foreign agent" law applies to any individual who distributes information on the internet and is funded by foreign sources; interestingly, YouTube channels can fall under this definition. According to this law, both Russian citizens and foreigners can be designated as foreign agents. Consequently, all materials (including posts on social media) published by an individual who receives funds from non-Russian sources must be labeled as coming from a foreign agent. A commission of the Ministry of Justice and the Ministry of Foreign Affairs is empowered to designate individuals as foreign agents, and those designated are obliged to create a legal entity and to mark their messages with a special label.

Furthermore, individual foreign agents are subject to the same requirements as non-profit organizations recognized as foreign agents (the law regarding non-profit organizations was adopted in 2012). Therefore, foreign agents are obliged to provide the Ministry of Justice with data on expenditures and audits of their activities. It should be noted that these administrative obligations are time consuming, complicated and expensive — they are designed to discourage foreign agents from their activities. Overall, the purpose of the legal-psychological efforts is to discourage the population from participating in any kind of anti-government activity in cyberspace. This law is similar in nature to the Foreign Agents Registration Act (FARA) enacted by the U.S. in 1938.

Conclusion: Structural Advantage and Strategic Superiority

In her article from August 25, 2017, Maria Gurova asked, "How to Tame the Cyber Beast?" Since offensive cyber operations, including cyber crime and cyber terrorism, are becoming more prominent, frequent and political, Russia chose to protect itself from foreign forces and global adversaries by regulating and monitoring its cyber domain directly, in contrast to Western proxy-regulation practices. Interestingly, it has also created a "kill switch" which, if Russia is threatened by foreign forces, will allow the RuNet to maintain internal internet connectivity — a significant need, especially for the largest country in the world. In fact, while other, less strategically sophisticated countries would have to rely on outdated means of communication in the case of a major cyber attack, Russia can remain relatively safe and connected.

The negative aspect of the aforementioned regulation is its incompatibility with Western norms, mainly the General Data Protection Regulation (GDPR) and the decisions of the European Court of Human Rights (ECHR). This incompatibility can undermine Russia's economic and socio-political relations with the U.S., the UK and the EU, pushing Western hi-tech companies away. These regulations may also harm freedom of speech in the Russian cyber domain, as users may feel threatened when criticizing the authorities, even though the regulations are meant to be applied mostly to security-related issues. However, Western proxy-regulation practices are having trouble addressing this issue as well.

All in all, Russia has the lead over Western powers — it controls all of its own cyberspace layers. In fact, as an international actor, Russia has an offensive and defensive strategic edge over its global competition. As articulated in this article, Russia has built a strategic "firewall" against foreign cyber attacks. Currently, there is no binding international law that forbids cyber attacks, so the anarchy of IR and IS must be dealt with through domestic solutions by each international actor. For instance, it was reported that the American Central Intelligence Agency conducted offensive cyber operations against Russia and others following a secret presidential order in 2018. Of course, Russia has also conducted cyber and hybrid operations against its global adversaries; a U.S. special report from August 2020 concluded that Russia had created an entire ecosystem of cyber operations. This means that while Russia has a relatively secure infrastructure, the U.S., with no proper regulations, is one step behind. In this regard, China is also a step ahead of the U.S. and the EU.

*Lev Topor, PhD, Senior Research Fellow, Center for Cyber Law and Policy, University of Haifa, Israel

[1] We wish to clarify that the U.S. and the UK do regulate their cyber domains extensively. However, they focus mainly on privacy rather than security and conduct some regulation through Public-Private Partnerships (PPP). For more, see Madeline Carr or Niva Elkin-Koren and Eldar Haber.

[2] Metadata is stored for a period of one year and data (messages of internet users, voice information, images, sounds, video etc.) for a period of six months.

From our partner RIAC


American Big Tech: No Rules


Over the past few years, a long-term trend towards the regulation of technology giants has clearly emerged in many countries throughout the world. Interestingly, attempts to curb Big Tech are being made in the United States itself, where corporate headquarters are located. The Big 5 tech companies are well-known to everyone—Microsoft, Amazon, Meta (banned in Russia), Alphabet and Apple. From small IT companies, they quickly grew into corporate giants; their total capitalisation today is approximately $8 trillion (more than the GDP of most G20 countries). The concern of American regulators about the power of corporations arose not so much because of their unprecedented economic growth, but because of their ability to influence domestic politics, censor presidents, promote fake news, and so on.

No laws, no rules

Traditionally, Americans have been less eager to put pressure on Big Tech than, for example, the Europeans, who introduced the General Data Protection Regulation (GDPR) in 2018; it was followed by the Digital Markets Act (DMA) and the Digital Service Act (DSA).

In the United States, there is no law that protects the personal data of users at the federal level; regulation is carried out only at the level of individual states. California, Virginia, Utah and Colorado have adopted their own privacy laws. Florida and Texas have social media laws that aim to punish internet platforms for censoring conservative views.

Dozens of federal data privacy and security bills have failed for lack of bipartisan support.

One of the few areas where US legislators have reached a consensus is the protection of children's online privacy. The relevant bill largely repeats many points of the DSA, such as establishing requirements for the transparency of algorithms and obliging companies to oversee their products.

It is also worth mentioning the accession of the USA in May 2021 to the Christchurch Call, an international initiative to eliminate terrorist and violent extremist content on the Internet, although the Call is not legally binding.

Perhaps all the successes of the US in the “pacification” of Big Tech are limited to the abovementioned steps.

As for the antimonopoly legislation, it is becoming tougher, but it is also being applied very selectively. The numbers speak for themselves: there have been 750 mergers in the high technology sector in the last 20 years.

Thus, we can conclude that today in the United States, there is still no comprehensive regulation of digital platforms.

Causes of Regulatory Inertia

There are several reasons for America's soft attitude towards the dominant companies. First, the intellectual basis of U.S. antitrust policy over the past 40 years has largely rested on the ideas of the Chicago school of economics, according to which it is inappropriate for the state to over-regulate companies if they show economic efficiency and do not violate the interests of consumers. The main inspirer of the Chicago school, Robert Bork, has many followers, so lawsuits filed by the Federal Trade Commission or individual state prosecutors often come to nothing. For example, in June 2021 a court dismissed two antitrust lawsuits against Facebook, filed in December 2020 by the Federal Trade Commission (FTC) and a group of attorneys general from 48 states; the claims concerned Facebook's acquisition of WhatsApp and Instagram and could have forced the company to sell those assets. U.S. District Judge James Boasberg ruled that the FTC's lawsuit was "not legally sound" because it did not provide enough evidence to support claims of Facebook's monopoly position in the social media market.

Second, Americans profess the “California model” of Internet governance, which also implies minimal government intervention in the affairs of Silicon Valley companies.

Third, one can note the close relationship between government structures and private business. Such a connection is ensured both by the phenomenon of "revolving doors" (civil servants going to work in corporations and vice versa) and by corporations' active lobbying. The American "Tech Five" actively interact with the US Congress and the European Parliament, allocating impressive sums to lobbying and hiring personnel with political connections. In 2020, Big Tech's total spending on lobbying the US Congress amounted to more than $63 million.

Finally, given the fragmentation of the political and economic space, techno-economic blocs are being formed, which are precisely centred on such tech giants. They are the ones who provide America with economic and technological leadership, dominance and influence in the global digital space, which explains the cautious attitude of the authorities towards the industry.

Too much freedom…

At the same time, appetites for pacifying the tech giants are also growing in the United States. They stem from allegations of a variety of significant abuses. For example, the report of the Subcommittee on Antitrust, Commercial and Administrative Law, issued in October 2020, highlights the following violations: dissemination of disinformation and hatred, monopolisation of markets, violation of consumer rights.

Concerns about the political and economic power of dominant companies arose against the backdrop of declining wages, fewer start-ups, declining productivity, increasing inequality and rising prices. In addition, some experts point out that "concentrated corporate power actually harms workers, innovation, prosperity and sustainable democracy in general." Some politicians and experts fear that the US economy has become too monopolised and, therefore, less attractive to the rest of the world, which reduces the ability of the United States to make a constructive contribution to the development of basic international standards in the field of competition and technology.

Another issue that worries the American establishment is content moderation. The 2020 presidential election and the storming of the US Capitol have shown the power of social media and its impact on public consciousness. Joe Biden, like his predecessor Donald Trump, has threatened to reform or completely remove Section 230 of the Communications Decency Act, according to which social networks are not "publishers" of information and are therefore not responsible for the statements of third parties that use their services. While the question of abolishing or reforming this section has not been resolved, members of Congress have already introduced 18 bills around it.

As mentioned above, there is no comprehensive regulation of tech giants in the United States, but this does not mean that they feel at ease on American soil and are never fined. Here we can recall the 2019 case in which the FTC fined Facebook a record $5 billion over the leak of millions of users' data to Cambridge Analytica, which advised Donald Trump's campaign. The fine was the largest in US history and, cumulatively, was almost five times (as of February 2021) more than all fines imposed by the EU under the General Data Protection Regulation (GDPR). In addition, a series of antitrust lawsuits against Google followed in late 2020. Thus, it is obvious that companies in some cases experience significant pressure from regulators.

From rhetoric to practical steps?

Washington Post columnists predicted that 2022 could be a watershed year in the regulation of gatekeepers in the USA. However, if we sum up the interim results of the fight between Joe Biden and the tech giants, progress is not so obvious yet. Of the proposals currently before Congress, the most notable is an antitrust bill (the American Innovation and Choice Online Act), which would prohibit Apple, Alphabet and Amazon from favouring their own services and products in app stores and e-commerce platforms to the detriment of those offered by their competitors. According to some experts, this bill has good prospects and could be put to a vote as early as this summer.

The US authorities have demonstrated that they are not ignoring the problem and are responding to it. A June 9 presidential executive order on combating monopoly practices, and the appointment of well-known critics of Big Tech to key positions, such as Lina Khan (FTC Chair), Tim Wu (Special Assistant to the President for Technology and Competition Policy) and Jonathan Kanter (head of the U.S. Justice Department's Antitrust Division), are proof of this. The American government earns points for showing that it is proactive. However, all of the aforementioned measures are only first, cautious steps.

The solution to the problem of tech sector regulation is complicated not only by the lobbying power of technology companies, but also by the fact that there is no unanimity in the US Congress regarding how narrow and rigid the rules should be. There are fierce debates between representatives of both parties on this issue.

It is hardly worth expecting the United States to quickly adopt something similar to the Digital Markets Act, the Digital Services Act or the GDPR at the federal level. This should be seen as a matter for the more distant future: not just when a consensus on regulation emerges within the leading parties, but also when the current model of interaction between regulators and large private business has been completely revised.

Today, America lags behind its European peers in rule-making. It is likely that the EU's global leadership in the field of technical regulation could spur the US government to take more active steps. As experts note, such a "gap" leaves American companies exposed to regulation by the other countries in which they operate, and the status of the US as a leader in digital products and services is threatened when the policies and rules of the digital marketplace are determined by other states.

From our partner RIAC


Artificial intelligence and moral issues: The cyborg concept


San Francisco, California, March 27, 2017. Entrepreneur Elon Musk, one of the masterminds behind projects such as Tesla and SpaceX, announced his next venture, Neuralink. The company aims to merge humans with electronics, creating what Musk calls the neural lace: a device that, injected into the jugular vein, would reach the brain and then unfold into a network of electrical connections linked directly to human neurons. The idea is to develop enhanced brain-computer interfaces that increase the extent to which the biological brain can interact and communicate with external computers. The neural lace will go down to the level of brain neurons: a mesh able to connect directly to brain matter and then to a computer. Such a human being would be a cyborg, a biological mix of man and machine.

Prof. Kaku wonders: "What drives us to merge with computers rather than compete with them? An inferiority complex? Nothing can prevent machines from becoming ever smarter, until they are able to programme and build robots themselves. This is the reason why humans try to take advantage of superhuman abilities".

As we all know, although Elon Musk has made clear the dangers of creating an artificial intelligence that gets out of control, he is also convinced that, if the project is developed properly, humans will enjoy the power of advanced computer technology, thus taking a step beyond current biology. Nevertheless, while Neuralink technology is still at an embryonic stage, many insist that merging man and machine is not so remote a prospect and are convinced that, one way or another, it has been happening for decades.

In 2002, Prof. Kevin Warwick – an engineer and professor of cybernetics at Coventry University in the UK – demonstrated that a neural implant could be used not only to control a prosthesis, but also to link to another human being.

In that same year, he and his wife had sets of electrodes – 100 each, to be precise – implanted in their nervous systems so that they could in turn be connected to a computer. Then all they did was connect the two nervous systems so that they could communicate with each other. Every time the wife closed her hand, the husband's brain received an impulse; if she opened and closed her hand three times in a row, he received three impulses. In this way they were able to connect two nervous systems. Who knows what might happen in the future.

Instead of talking and sending messages or emails, will we soon be able to communicate with each other by thought alone? It is only a matter of time before cybernetic technology offers us an endless array of options: ordering something just by thinking, listening to music directly in our brain, or searching the Internet just by thinking about what we want to find.

Prof. Kaku states: "We are heading towards a new form of immortality, i.e. that of information technology. By digitalising all the known information in our consciousness, the soul probably becomes computerised. At that juncture the soul and information could be separated from the body, and when the body dies the essence, soul and memory would live on indefinitely".

In that case, humans will be about to replace body and mind, piece by piece, as they prepare to transform into cyborgs.

The marriage between man and machine is increasingly taking place through personal computers, tablets, mobile phones and even implants that provide an extraordinarily large amount of data, ranging from a person's vital signs to geolocation and from diet to recreational behaviour. We are therefore destined to merge with the machines we are creating. These technologies will help us make the leaps forward that can take us beyond our planet and the Moon, as we will see more clearly later on. This is the future that awaits us: a future in which evolution will no longer take place by natural selection, as Darwin's theory maintains, but by human management. This will happen in the coming decades, in the short-term future.

On Icarus (vol. 224, issue No. 1, May 2013) – a journal dedicated to the field of planetary sciences, and published under the auspices of the Division for Planetary Sciences of the American Astronomical Society – mathematician Vladimir Ščerbak and astrobiologist Maksim A. Makukov, both from Kazakhstan, published a study conducted on the human genome: The ‘Wow! signal’ of the terrestrial genetic code.

The conclusions of the study are startling. There is allegedly a hidden code in our DNA containing precise mathematical patterns and an unknown symbolic language. Examination of the human genome reveals the presence of a sort of non-terrestrial imprint on our genetic code, which would function just like a mathematical code. The probability that this sequence would repeat nine times by chance in our genetic code – as "assumed" by Darwin's theory – is one in ten billion. The DNA, then, certainly has origins that are not random and have nothing to do with the 19th-century Darwinian theories tiredly repeated to this day.

Our genes have been artificially mutated, and if the theory of the two Kazakh scholars were true, the fact that man is inclined to turn into a cyborg would be perfectly plausible, since he possesses a non-random intelligence that can join the artificial intelligence which, for the time being, is the preserve of sophisticated computers or early attempts at humanoid robots. Here, too, lies an answer to Prof. Kaku's question: for this reason, from time immemorial, humans have had a penchant for creating their own variants and improving them with cybernetics (programming robots with artificial intelligence), as well as being eager to merge with AI itself. Many scholars and experts agree that, in order to survive, evolve and travel across the cosmos, any intelligent species must overcome the biological stage. This is because, in leaving the earth's atmosphere and trying to go further, much further, humans must be able to adapt to different environments: to places where the atmosphere is poisonous, or where the gravitational pull is much stronger or weaker than on our planet.

The best answer to Prof. Kaku's question is that humans are somehow compelled to create robots ever more like themselves, not to satisfy a desire to outdo one another by creating intelligent creatures in their own image, but to fulfil their destiny outside the earth. This is demonstrated by further clues and signs coming from an analysis of the latest technologies developed by man in anticipation of the next phase of his evolution in outer space.

Science Robotics – the prestigious scientific journal published by the American Association for the Advancement of Science – published the article Robotic Space Exploration Agents (vol. 2, issue no. 7, June 2017), written by Steven Chien and Kiri L. Wagstaff of NASA's Jet Propulsion Laboratory at the California Institute of Technology. According to their theory, astronauts travelling through space will very soon be replaced by robots, i.e. synthetic beings capable of making autonomous decisions using artificial intelligence. Space is a very hostile environment for humans: there is strong radiation, and moving in a vacuum is not easy, while machines can move nimbly in space as long as their electronic circuits are protected from damage. It is therefore easier and cheaper for a machine to explore another planet or another solar system, and space exploration is expected to be more machine-based than man-based.

It will not be man who explores space on a large scale: we will send machines with artificial intelligence that will not have acceleration problems, since they will be able to travel outside the solar system using gravitational acceleration. It would be very useful to have an intelligent system capable of communicating with, for example, Alpha Centauri – our nearest star system – since it would take 8 years and 133 days to send a signal to earth and receive a response. Hence, why not use artificial intelligence to make decisions and do the work? Missions to Mars and Alpha Centauri guided by artificial intelligence could become a reality.

NASA has been testing this technology since as early as 1998, with the Deep Space 1 probe, which was sent to the asteroid belt between Mars and Jupiter. Using a system called AutoNav, the probe took photos of asteroids along its itinerary without any human support. The Mars rover is basically an autonomous terrestrial robot that travels around Mars collecting samples and transmitting information. It is a newly fielded autonomous system, meaning that as soon as artificial intelligence is sufficiently reliable to be deployed aboard a spacecraft, there will be robotic spacecraft able to reach Mars on their own. Once we send robotic spaceships programmed with artificial intelligence, we will relinquish all possibility of control, because it will be our "envoys" that make decisions on the spot.
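To give a feel for the kind of onboard autonomy described above, the toy Python sketch below runs a simple sense-decide-act loop. It is not NASA code, and the sensor data and scoring rule are invented purely for illustration.

```python
# Toy sense-decide-act loop, in the spirit of onboard autonomy such as AutoNav.
# All numbers and the scoring heuristic are invented for illustration.
import random

def sense():
    """Simulate detecting nearby objects, each with a distance and a science score."""
    return [{"id": i,
             "distance_km": random.uniform(100, 1000),
             "science_value": random.random()} for i in range(5)]

def decide(objects):
    """Pick the target with the best value-to-distance trade-off (toy heuristic)."""
    return max(objects, key=lambda o: o["science_value"] / o["distance_km"])

def act(target):
    """Stand-in for pointing the instruments and imaging the chosen target."""
    print(f"Imaging object {target['id']} at {target['distance_km']:.0f} km")

# A real probe would repeat this loop continuously, because waiting years
# for instructions from Earth is not an option at interstellar distances.
for _ in range(3):
    act(decide(sense()))
```

The point of the sketch is the structure, not the numbers: once the decision step runs onboard rather than in a control room, the "envoys" really do choose their own targets.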


Artificial intelligence and moral issues: AI between war and self-consciousness


At the beginning of 2018, the number of mobile phones in use surpassed the number of humans on the planet, reaching 8 billion. In theory, each of these devices is connected to two billion computers, which are themselves networked. Given the incredible amount of data involved in this type of use, and considering that the computer network is in constant contact and growing, is it possible that mankind has already created a massive brain? An artificial intelligence that has taken on an identity of its own?

The field of robotics is constantly evolving and continues to make strides. It is therefore clear that sooner or later we shall move from artificial intelligence to super-intelligence, i.e. a being on this planet that is smarter than we are, after which we will no longer be the most intelligent beings around. It will not be pleasant when artificial intelligence, with its knowledge and intellectual abilities, corners the human being, surpassing flesh-and-blood people in every field of knowledge. It will be a pivotal moment that will radically change world history: for now our existence is justified by the fact that we are at the top of the food chain, but once a self-created entity exists that does not need to feed itself on pasta and meat, what will we exist for, if that entity only needs solar energy to perpetuate itself indefinitely?

If sooner or later we are to be replaced by artificial intelligence, we must begin to prepare ourselves psychologically. Portland, Oregon, April 7, 2016: the US Defense Advanced Research Projects Agency (DARPA) launched the prototype of the unmanned anti-submarine vessel Sea Hunter, marking the beginning of a new era. Unlike the Predator and other Air Force drones, this vessel does not need a remote operator and is built to navigate on its own while avoiding all kinds of obstacles at sea. It carries enough fuel to stay at sea for up to three months, is very quiet, and transmits encrypted information to defence intelligence services. When the US Department of Defense says that an unmanned submarine would not be launched without remote control, it is telling the truth. But there is more to consider: Russia has developed a remotely piloted submarine carrying a nuclear weapon, which means that between 5 and 15 years will elapse before the US defence establishment can respond to a remotely piloted submarine with a nuclear weapon on board.

It has always been said that the war drone replaces the flesh-and-blood soldier, who becomes a remote "playstation" operator. Hence the idea of the drone as a substitute for the human soldier, who would be guaranteed total safety and spared unnecessary dangers. It was forgotten, however, that remote control could be intercepted by the enemy, which could switch targets and strike the drone's own army. At that point drones would have to be made completely autonomous. Such a drone would be a killing machine able to wipe out entire armies, which is why care should be taken to avoid their proliferation on battlefields: any kind of accident, a fire or even a minor malfunction could trigger a "madness" mechanism causing the machine to kill anyone. Developing killer robots is possible. Facial recognition technology has made great strides, and artificial intelligence can recognise faces and detect targets; in fact, drones are already being used to detect and target individuals based on facial features: they kill and injure.

The application of artificial intelligence to military technology will change warfare forever. It is possible for an army's autonomous machines to take wrong decisions, reaping tens of thousands of casualties among friends, enemies and defenceless civilians. What if they even go so far as to ignore instructions? If autonomous killing machines independent of human commands are designed, could we be facing the violent extinction of the human race?

While many experts and scholars agree that humans will be the architects of their own violent downfall first and destruction later, others believe that the advancement of artificial intelligence may be the key to mankind's salvation.

Los Angeles, May 2018: at the University of California, Professor Veronica Santos was working on a project to create increasingly human-like robots capable of sensing physical contact and reacting to it, and was testing different approaches to robotic tactile sensitivity. Combined with artificial intelligence, such work may one day produce a humanoid robot capable of exploring space as far as Mars. Humanoid robots are increasingly a reality, with applications ranging from neuroprosthetics to machines for colonising celestial bodies.

Although the use of humanoid robots is a rather controversial topic, the sector has great prospects, especially for those who intend to invest in the field. Funding development projects could prove useful in the creation of artificial human beings that are practically impossible to distinguish from flesh-and-blood individuals.

These humanoids, however, could conceivably express desires and feel pain, as well as display a wide range of feelings and emotions. It is actually well-known that we do not know what an emotion really is. Hence would we really be able to create an artificial emotion, or would we make fatal errors in software processing? If a robot can distinguish between good and evil and know suffering, will this be the first step towards the possibility of developing feelings and a conscience?

Let us reflect. Although computers surpass humans in data processing, they pale into insignificance when faced with the complexity and sophistication of the central nervous system. In April 2013, the Japanese technology company Fujitsu tried to simulate the network of neurons in the brain using one of the most powerful supercomputers on the planet. Despite being equipped with 82,000 of the world's fastest processors, it took over 40 minutes to simulate just one second of 1% of human brain activity (Tim Hornyak, "Fujitsu supercomputer simulates 1 second of brain activity," https://www.cnet.com/culture/fujitsu-supercomputer-simulates-1-second-of-brain-activity/).
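A rough back-of-the-envelope calculation, assuming for simplicity that the cost scales linearly with the simulated fraction of the brain, shows how far that result is from a real-time, whole-brain simulation:

```latex
% 40 minutes of computing for 1 second of activity in 1% of the brain
\text{slowdown factor} = \frac{40 \times 60\ \text{s}}{1\ \text{s}} = 2400
\qquad
\text{whole brain } (\times 100):\ 2400 \times 100 = 2.4 \times 10^{5}
```

Under that linear assumption, the machine would need to be roughly 240,000 times more capable to simulate the entire brain in real time.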

Japanese-American theoretical physicist Michio Kaku – who graduated summa cum laude from Harvard University – stated:

“Fifty years ago we made a big mistake thinking that the brain was a digital computer. It is not! The brain is a machine capable of learning, which regenerates itself when it has completed its task. Children have the ability to learn from their mistakes: when they come across something new, they learn to understand how it works by interacting with the world. This is exactly what we need and to do this we need a computer that is up to the job: a quantum computer”.

Unlike today's computers, which rely on bits – a binary series of 0s and 1s – to process data, quantum computers use quantum bits, or qubits, which can represent 0 and 1 at the same time. This enables them to perform millions of calculations simultaneously, in much the same way as the human brain does.
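In standard quantum-computing notation, a qubit's ability to hold 0 and 1 at the same time is written as a superposition, and a register of n qubits carries 2^n amplitudes at once:

```latex
% A single qubit: a weighted combination of the classical states 0 and 1
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1

% A register of n qubits: one state vector spanning all 2^n classical bit strings
|\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,|x\rangle
```

This exponential growth in the number of amplitudes that each operation acts on is the sense in which a quantum computer works on many classical configurations at once, although extracting a useful answer from that state is a separate and harder problem.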

Kaku added: “Robots are machines and as such they do not think and have no silicon consciousness. They are not aware of who they are and their surroundings. It has to be recognised, however, that it is only a matter of time before they can have some awareness”.

Is it really possible for machines to become sentient entities fully aware of themselves and their surroundings?

Kaku maintained: "We can imagine a future time when robots will be as intelligent as a mouse, and after the mouse a rabbit, then a cat, a dog, until they become as cunning as a monkey. Robots do not know they are machines, and I think that, by the end of this century, robots will probably begin to realise that they are different, that they are something other than their master".
