
Tech

Repealing Net Neutrality: A Dissenting Opinion

Saurabh Malkar


I must preface this by saying that I am not a certified or self-trained expert in computer networking, the Internet, or information technology (IT). The following views are mine, arrived at by listening to and reading about net neutrality from partisan and non-partisan sources alike. Well-informed, fact-based views from experts on the subject are most welcome and highly sought.

The Trump administration placed net neutrality on the chopping block, and Ajit Pai did the honors by repealing it. The issue created a furor on the Internet and social media, with divergent explanations floated by both sides.

Conservatives and right-wingers supported the repeal, stating that the government shouldn't impose itself on Internet service providers (ISPs) or have a say in their operations. Folks on the left claimed that the Internet is no longer free and that the loss of net neutrality will usher in tiered tariffs and the throttling or blocking of web content at the whim of ISPs.

It's increasingly difficult to take a purely scientific approach to technical issues in a culture where the pettiest things are used to smear the opposition and play partisan political games. With much effort, I have attempted to put politics aside and look merely into the nerdy details of this abstruse concept of net neutrality.

The premise of net neutrality hinges on the claim that the Internet/Web (a nuanced yet significant distinction between the two will be discussed briefly later) is a public utility and hence should be made available and accessible to everyone equally, just like electricity, cooking gas, and water. Corporations are profit-driven and heartless; as a result, the government should get involved in the markets and make sure that everyone gets these utilities and nobody is left in the lurch.

So, is the Internet a public utility?

Economics describes two characteristics a service must exhibit to qualify as a public good, the category underpinning public utilities: non-excludability (people cannot be denied the product regardless of whether they have paid) and non-rivalry (consumption by one person doesn't reduce availability for others).

The Internet certainly doesn’t meet the non-excludability criterion, in that people who don’t pay for the service don’t get to use it. Major cities across the US have set up public Wi-Fi in a bid to provide Internet to all, but such “access-for-all” isn’t standard across the vast majority of the nation.

The Internet does, however, meet the non-rivalry criterion. A surge of new users might transiently overwhelm existing capacity, but additional hardware can be added to accommodate growing demand. Thus, for all practical purposes, the Internet satisfies non-rivalry.

In summary, the Internet isn’t a public utility, at least not now.
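The two-criteria test above maps neatly onto the standard economics taxonomy of goods, which can be sketched as a toy classifier. The labels follow the conventional quadrants (public good, club good, common-pool resource, private good); the classification of Internet access shown here is merely the article's argument restated, not an authoritative verdict.

```python
def classify(excludable: bool, rival: bool) -> str:
    """Classify a good by the two textbook criteria discussed above."""
    if not excludable and not rival:
        return "public good"            # e.g. national defense
    if excludable and not rival:
        return "club good"              # excludable but non-rival
    if not excludable and rival:
        return "common-pool resource"   # e.g. fisheries
    return "private good"               # excludable and rival

# Internet access per the article: non-payers can be excluded,
# but capacity can be expanded, so consumption is practically non-rival.
print(classify(excludable=True, rival=False))  # club good
```

On this reading, Internet access lands in the "club good" quadrant rather than the "public good" one, which is the article's point that it is not, strictly, a public utility.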

But I would like to make a few additional points to render my case well-rounded and cogent.

The Internet was conceived in the 1960s as an effort on the part of the US federal government to transfer data over resilient, computer-run communication networks. What started as a nascent, clunky project involving huge machines and laughable transfer speeds evolved into a means of global networking, telephony, and information transfer at incredible speeds. This evolution was spearheaded largely by researchers at government agencies in different parts of the world. In the 1990s, the Internet was opened up to private players for commercial use. Thus, the Internet was built and developed using taxpayer money. Also of note is that the Internet is a decentralized space over which no one has hegemony.

Now, over to the Web. Though the term is thrown around carelessly and interchangeably with "the Internet," the Web is actually different. The Web is an application developed by Sir Tim Berners-Lee, during his time at CERN – a multi-government-funded organization – to access documents, pictures, videos, and other files on the Internet that are marked up in a standardized manner. It is one of several ways to access material on the Internet and communicate with one another. The Web, too, was thus crafted by an individual using the public's (taxpayer) money. It's this little, yet extremely important, corner of the Internet that this brouhaha is all about.

ISPs function as middlemen connecting end users to the Internet, mainly through the World Wide Web (the Web, or WWW). They neither created nor maintain the Internet or the Web.

Effectively, private corporations are helping us access a digital space that was created using the public's money. Moreover, the creators of this space – whether governmental agencies or individuals – in all their largesse decided to open it up for commercial use and allow people to use it freely (not to be conflated with "for free").

Over the years, the Web has grown from an information archive and emailing medium to a source of employment, a means of starting and running a business, a tool to reach out to people across the world, a place to broadcast yourself and your work, and much more. While the Web doesn't qualify as a public utility, it does serve as one of the few ways by which people in first-world countries can augment the socioeconomic momentum of the Industrial Revolution using digital technology, and by which people in third-world countries can change their destinies by creating an app, engaging in commerce across borders, or educating themselves for free.

Repealing net neutrality gives ISPs a kind of hegemony, not over the Web or the Internet, but over what we consume from this would-be public utility. While larger corporations can find a way around it by paying the large sums ISPs might demand for a certain degree of visibility on their services, it is nearly impossible for an entrepreneur, a blogger, or an independent journalist to pay the same sum for the same degree of visibility.

"Take your business over to Facebook or some other social media outlet and you won't be discriminated against," one might argue. Not quite true! Social media platforms have tailored news feeds and show you what you have already seen. It will be difficult to market your business on platforms that are slowly devolving into echo chambers. Nor can one be certain that social media giants are unbiased in the way they deliver content, as in the case of Facebook, which was accused of manipulating its "trending" feature to suit its political leanings.

The gravity of the problem is further compounded when one factors in the regional monopolies that ISPs enjoy in the US. Competition is scarce because of the cost-intensive nature of running cables under the streets and setting up hardware. Overbuilders (ISPs using existing hardware and cables to provide an alternative) can increase competition, but the financial feasibility and ROI of such ventures are pretty dim. In this regard, the Web certainly functions like a public utility and requires some accountability on the part of ISPs.

There is also a technical angle to the importance of net neutrality, which is lucidly explained here.

The repeal of net neutrality should disconcert everyone, especially small business owners, entrepreneurs, innovators, and the most vulnerable – alternative news media outlets, particularly the ones with unsavory views – many of which tend to be on the political right. Cheering along to your own demise because your guy did it is the gold standard of intellectual indolence and buffoonery.

I would like to state once again that I am not a certified or self-trained expert in matters of the Internet, computing, or networking, and I would welcome fact-based feedback on this subject.

Having said that, I can tell you two things with certainty: 1. Capitalize the first letter of Internet and Web, and place the definite article "the" before these words when referencing them; and 2. We use the Internet to get on the Web to do stuff.

Signed

A Conservative-Libertarian

An ex-dentist and a business graduate who is greatly influenced by American conservatism and Western values. Having been born and brought up in a non-Western, third-world country, he provides an "outside-in" view on Western values. As a budding writer and analyst, he is very much stoked about Western culture and looks forward to expounding on and learning more about it. Mr. Malkar receives correspondence at saurabh.malkar[at]gmail.com. To read his 140-character commentary on Twitter, follow him at @saurabh_malkar.



The Ethical and Legal Issues of Artificial Intelligence


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley going down the railway lines. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?

Source: Wikimedia.org

There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise if a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians, or neither. The Massachusetts Institute of Technology has created a special website that deals with this very issue: users can test various scenarios on themselves and decide which courses of action would be the most worthwhile.
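A purely utilitarian reading of such dilemmas reduces to minimizing lives lost, which can be sketched in a few lines of code. The scenario encoding and scoring rule below are invented for illustration; they have nothing to do with MIT's actual models, and they deliberately ignore every non-utilitarian consideration the trolley problem is designed to surface.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    lives_lost: int

def utilitarian_choice(outcomes):
    # A purely utilitarian rule: pick the outcome with the fewest lives lost.
    return min(outcomes, key=lambda o: o.lives_lost)

# The classic trolley problem encoded as two outcomes.
trolley = [Outcome("do nothing", 5), Outcome("pull lever", 1)]
print(utilitarian_choice(trolley).label)  # pull lever
```

The point of the thought experiment is precisely that many people reject this rule in some variants (pushing a bystander, sacrificing a passenger), which is why encoding the decision for a self-driving car is contested rather than a one-line optimization.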

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens based on how law-abiding and how useful to society they are, among other things. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently from their creators or operators thus complicating the task of determining responsibility. These characteristics pose problems related to predictability and the ability to act independently while at the same time not being held responsible [3].

There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of ownership, namely animals, since the latter are also capable of autonomous actions. In Russian Law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals on the application of the law on animals have been made [4], although they are somewhat limited. First, the application of legislation on the basis of analogy is unacceptable within the framework of criminal law. Second, these laws have been created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply similar rules to those that regulate the keeping of wild animals, since the rules governing wild animals are more stringent [5]. The question arises here, however, of how to make a separation with regard to the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies due to the unexpected risks of liability for creators and inventors.

Another widespread suggestion is to apply similar norms to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible and grant the rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit signs of the ability to do anything are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, both in developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can be considered property and what cannot. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy of legal entities, however, is problematic, as the concept of legal entity is necessary in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed to be criminally liable if an individual performing the illegal action on behalf of the legal entity is determined [10]. The actions of artificial intelligence-based systems will not necessarily be traced back to the actions of an individual.

Finally, legal norms on the sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that injury has been inflicted as a result of force majeure circumstances or at the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, French President Emmanuel Macron presented the country's new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in the report prepared under the supervision of French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus the potential of France's comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and on technologies that matter for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. However, algorithms can be biased; in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant's appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore not presented. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence the attitude towards artificial intelligence and its legal implementation. Therefore, our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether "something" is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when animals (specifically goats) were exiled to Siberia for their actions [12].

First published at our partner RIAC

  1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
  2. Asaro, P., "From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby," in Wheeler, M., Husbands, P., and Holland, O. (eds.), The Mechanical Mind in History, Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro, P., "The Liability Problem for Autonomous Artificial Agents," AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov, V., Naumov, V., "On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality," Zakon, 2017, No. 5, p. 167.
  5. Asaro, P., "The Liability Problem for Autonomous Artificial Agents," p. 193.
  6. Arkhipov, V., Naumov, V., op. cit., p. 164.
  7. See, for example, Winkler, A., We the Corporations: How American Businesses Won Their Civil Rights, Liveright, 2018. A description is available here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek, B., Jakubiec, M., "On the Legal Responsibility of Autonomous Machines," Artificial Intelligence and Law, 2017, No. 25(3), pp. 293–304.
  10. Khanna, V.S., "Corporate Criminal Liability: What Purpose Does It Serve?," Harvard Law Review, 1996, No. 109, pp. 1477–1534.
  11. Hage, J., "Theoretical Foundations for the Responsibility of Autonomous Agents," Artificial Intelligence and Law, 2017, No. 25(3), pp. 255–271.
  12. Pagallo, U., The Laws of Robots: Crimes, Contracts, and Torts, Springer, 2013, p. 36.



Busting the Blockchain Hype: How to Tell if Distributed Ledger Technology is Right for You

MD Staff


Blockchain has been hailed as the solution for everything, from resolving global financial inequality and providing IDs for refugees to enabling people to sell their houses without an estate agent. However, the overwhelming hype surrounding this technology over the past year is misleading.

“We have been up and down on the blockchain roller coaster this past year,” said Sheila Warren, head of the Blockchain and Distributed Ledger Technology project at the World Economic Forum Center for the Fourth Industrial Revolution. “Blockchain is an innovative solution, but it is not the solution to all problems. Blockchain has to be the right solution for the right business problem. Busting the blockchain hype is necessary to make sure businesses are using it in the right way and not damaging the long-term prospects of the technology.”

Through research and analysis of the technology's capabilities and the ways it is used around the world, the team found that answering at most 11 questions is enough to determine whether blockchain can be the solution.

“To bust some of the blockchain hype, we had to design a practical framework for people who knew nothing about the technology. We started with the premise that blockchain is like any other technology – a tool in a company’s toolbox,” said Cathy Mulligan, Visiting Researcher at Imperial College London and member of the Forum’s Global Future Council on Blockchain. “If you break down the kinds of problems blockchain technology is solving and its potential, clear paths emerge.”

The paths were incorporated into a framework of “yes” and “no” questions, which guide a business leader once a specific problem is articulated. “This framework cuts through the noise about blockchain and refocuses the technology into the way business leaders think,” said Jennifer Zhu Scott, Founding Partner of Radian and member of the Global Futures Council on Blockchain.
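A framework of "yes" and "no" questions like this amounts to a simple screening checklist, which can be sketched as follows. The three questions below are illustrative stand-ins commonly cited in blockchain-suitability discussions, not the Forum's actual 11 questions, which the article does not enumerate.

```python
# Illustrative screening checklist in the spirit of a yes/no framework.
# These are NOT the World Economic Forum's actual questions.
SCREENING_QUESTIONS = [
    "Do you need a shared, persistent data store?",
    "Do multiple parties need to write to it?",
    "Are those writers unable or unwilling to trust a single intermediary?",
]

def blockchain_may_fit(answers: dict) -> bool:
    # Any "no" suggests a conventional database is the simpler tool.
    return all(answers.get(q, False) for q in SCREENING_QUESTIONS)

all_yes = {q: True for q in SCREENING_QUESTIONS}
print(blockchain_may_fit(all_yes))  # True
```

The design choice mirrors the article's framing: the framework is a filter that steers most business problems away from blockchain, and only a problem that clears every gate merits the technology.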

“These 11 questions were developed and then trialled with chief executive officers at a workshop at the World Economic Forum Annual Meeting 2018. The test group included C-suite executives from large corporations, most of whom said they were actively considering adopting blockchain technology in some manner,” said JP Rangaswami, Chief Data Officer, Deutsche Bank.

During the workshop, one publicly listed energy company discussed its plans for an initial coin offering (ICO) and a large bank shared how it was considering using blockchain-based crypto-tokens for transferring remittances. Even in the much-debated cryptocurrency space, 100% of the participants believed that even after the cryptocurrency bubble burst, the token economy would be here to stay.



The Artificial Intelligence Race: U.S., China, and Russia

Ecatarina Garcia


Artificial intelligence (AI), of which machine learning is a subset, has the potential to drastically impact a nation's national security in various ways. Dubbed the next space race, the race for AI dominance is both intense and necessary for nations to remain preeminent in an evolving global environment. As technology develops, so does the amount of virtual information and the ability to operate at optimal levels by taking advantage of this data. Furthermore, the proper use and implementation of AI can help a nation achieve information, economic, and military superiority – all ingredients of maintaining a prominent place on the global stage. According to Paul Scharre, "AI today is a very powerful technology. Many people compare it to a new industrial revolution in its capacity to change things. It is poised to change not only the way we think about productivity but also elements of national power." AI is not only the future of economic and commercial power; it also has various military applications bearing on national security for every aspiring global power.

While the U.S. is the birthplace of AI, other states have taken research and development seriously, given the potential global gains. Three of the world's biggest players – the U.S., Russia, and China – are entrenched in a non-kinetic battle to outpace one another in AI development and implementation. Moreover, given the considerable advantages artificial intelligence can provide, it is now a race among these players to master AI and integrate the capability into military applications in order to assert power and influence globally. As AI becomes more ubiquitous, it is no longer a next-generation fancy of science fiction; its potential to provide strategic advantage is clear. Thus, to capitalize on this potential, the U.S. is seeking to develop a deliberate strategy that positions it permanently at the top tier of AI implementation.

Problem

The current AI reality is that near-peer competitors are leading or closing the gap with the U.S. Of note, Allen and Husain indicate the problem is exacerbated by the absence of AI from the national agenda, diminishing funds for science and technology, and the public availability of AI research. The U.S. has enjoyed a technological edge that, at times, enabled military superiority over near-peers. However, there is an argument that the U.S. is losing its grasp on that advantage. As Flournoy and Lyons indicate, China and Russia are investing massively in research and development efforts to produce technologies and capabilities "specifically designed to blunt U.S. strengths and exploit U.S. vulnerabilities."

The technological capabilities once unique to the U.S. have now proliferated across both nation-states and non-state actors. As Allen and Chan indicate, “initially, technological progress will deliver the greatest advantages to large, well-funded, and technologically sophisticated militaries. As prices fall, states with budget-constrained and less technologically-advanced militaries will adopt the technology, as will non-state actors.” For example, the American use of unmanned aerial vehicles in Iraq and Afghanistan provided a technological advantage in the battle space; but as prices for the technology drop, non-state actors like the Islamic State are making noteworthy use of remotely controlled aerial drones in their military operations. While the foregoing is part of the issue, more concerning is the fact that the Department of Defense (DoD) and the U.S. defense industry are no longer the epicenter of next-generation development. Rather, the most innovative work is occurring in private commercial companies. Unlike China and Russia, the U.S. government cannot simply direct the activities of industry for governmental or military purposes. This has certainly been a major factor in the closing of the gap in the AI race.

Furthermore, the U.S. is falling behind China in the quantity of studies produced on AI, deep learning, and big data. For example, of the AI-related papers submitted to the International Joint Conferences on Artificial Intelligence (IJCAI) in 2017, China accounted for a leading 37 percent, whereas the U.S. took third position at only 18 percent. While quantity is not everything (U.S. researchers won the most awards at IJCAI 2017, for example), China’s industry innovations were formally described as “astonishing.” For these reasons, there are several strategic challenges the U.S. must overcome to maintain its lead in the AI race.

Perspectives

Each of the three nations has taken a divergent perspective on how to approach and define this problem. One common theme among them, however, is an understanding of AI’s importance as an instrument of international competitiveness as well as a matter of national security. Sadler writes, “failure to adapt and lead in this new reality risks the U.S. ability to effectively respond and control the future battlefield.” However, the U.S. can no longer “spend its way ahead of these challenges.” The U.S. has developed what is termed the third offset, which Louth and Taylor define as a policy shift – a radical strategy to reform the way the U.S. delivers defense capabilities to meet the perceived challenges of a fundamentally changed threat environment. The continuous development and improvement of AI requires a comprehensive plan and partnership with industry and academia. To frame the issue, two DoD-directed studies, the Defense Science Board Summer Study on Autonomy and the Long-Range Research and Development Planning Program, highlighted five critical areas for improvement: (1) autonomous deep-learning systems, (2) human-machine collaboration, (3) assisted human operations, (4) advanced human-machine combat teaming, and (5) network-enabled semi-autonomous weapons.

Similar to the U.S., Russian leadership has stated the importance of AI on the modern battlefield. Russian President Vladimir Putin commented, “Whoever becomes the leader in this sphere (AI) will become the ruler of the world.” This is not merely rhetoric: Russia’s Chief of the General Staff, General Valery Gerasimov, has likewise predicted “a future battlefield populated with learning machines.” In the wake of the Russian-Georgian war, Russia developed a comprehensive military modernization plan, and a staple of that 2008 plan was the development of autonomous military technology and weapon systems. According to Renz, “The achievements of the 2008 modernization program have been well-documented and were demonstrated during the conflicts in Ukraine and Syria.”

China, understanding the global stakes, has dedicated research, money, and education to a comprehensive state-sponsored plan. In July 2017, China’s State Council published a document entitled “New Generation Artificial Intelligence Development Plan.” It takes a top-down approach to explicitly map out the nation’s development of AI, with goals reaching all the way to 2030. Chinese leadership also underscores this priority in stating the necessity for AI development:

AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security; intensifying the introduction of plans and strategies for this core technology, top talent, standards and regulations, etc.; and trying to seize the initiative in the new round of international science and technology competition. (China’s State Council 2017).

The plan addresses everything from building basic AI theory to partnerships with industry to fostering educational programs and building an AI-savvy society.

Recommendations

Recommendations to foster the U.S.’s AI advancement include focusing efforts on further proliferating Science, Technology, Engineering, and Math (STEM) programs to develop the next generation of developers. This resembles China’s AI development plan, which calls to “accelerate the training and gathering of high-end AI talent.” That lofty goal is broken into sub-steps, one of which is to construct an AI academic discipline. While there are STEM programs in the U.S., according to the U.S. Department of Education, “The United States is falling behind internationally, ranking 29th in math and 22nd in science among industrialized nations.” To hold the top position in AI, the U.S. must continue to develop and attract top engineers and scientists. This requires both a deliberate plan for academic programs and the funding and incentives to develop and maintain those programs across U.S. institutions. Perhaps most importantly, the United States needs a strategy to entice more top American students to invest their time and attention in this proposed new discipline; Chinese and Russian students easily outpace American students in this area, especially in sheer numbers.

Additionally, the U.S. must research and capitalize on the dual-use capabilities of AI. Leading companies such as Google and IBM have made enormous headway in the development of algorithms and machine learning. The Department of Defense should leverage these commercial advances to determine relevant defense applications. Part of this partnership with industry, however, must also account for the inherent national security risks that AI development can present, introducing a regulatory role over commercial AI development. Thus, the U.S. government’s role in the AI industry cannot be merely that of a consumer, but also that of a regulator. The danger, of course, is that this effort to honor the principles of ethical and transparent development will not be mirrored by the competitor nations of Russia and China.

China’s large population and lax data-protection laws give it an advantage in machine learning and artificial intelligence: a larger pool of people to develop as engineers and a massive volume of data gleaned from its internet users. The U.S. must develop innovative ways to overcome this challenge. Part of the solution is investment. A White House report on AI indicated that “the entire U.S. government spent roughly $1.1 billion on unclassified AI research and development in 2015, while annual U.S. government spending on mathematics and computer science R&D is $3 billion.” If the U.S. government considers AI an instrument of national security, then AI requires financial backing comparable to that of other fifth-generation weapon systems. Furthermore, innovative programs such as the DoD’s Project Maven must become a mainstay.

Project Maven, a pilot program launched in April 2017, was mandated to produce algorithms to cope with big data and apply machine learning to eliminate the manual human burden of watching full-motion video feeds. The project was expected to deliver algorithms to the battlefield by December 2018 and involved partnership with four unnamed startup companies. The U.S. must implement more programs like this that incentivize partnership with industry to develop or re-design current technology for military applications. To maintain its technological advantage far into the future, the U.S. must facilitate expansive STEM programs, capitalize on the dual-use potential of AI technologies, provide fiscal support for AI research and development, and implement expansive, innovative partnership programs between industry and the defense sector. Unfortunately, at the moment, all of these aspects are being engaged and invested in only partially. Meanwhile, countries like Russia and China seem to be more successful in developing their own versions, unencumbered by ‘obstacles’ like democracy, the rule of law, and unfettered free-market competition. The AI race is upon us. And the future seems to be a wild one indeed.
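Maven’s core pattern – a trained model scores each video frame so analysts review only the frames it flags – can be illustrated with a toy sketch. Everything here is hypothetical: the `detector_score` function stands in for a real trained detector, and the threshold and frame data are invented for illustration; this is a sketch of machine-assisted triage in general, not Maven’s actual pipeline.

```python
# Toy illustration of ML-assisted video triage: a stand-in detector scores
# each frame, and only frames above a confidence threshold are queued for
# human review, shrinking the analyst's workload.

def detector_score(frame):
    """Stand-in for a trained object detector; returns a confidence in
    [0, 1] that the frame contains an object of interest. Here the score
    is pre-stored; a real system would run model inference on the pixels."""
    return frame["score"]

def triage(frames, threshold=0.8):
    """Return only the frames an analyst actually needs to inspect."""
    return [f for f in frames if detector_score(f) >= threshold]

# Simulated six-frame feed with detector confidences.
feed = [
    {"id": 0, "score": 0.05},
    {"id": 1, "score": 0.92},
    {"id": 2, "score": 0.11},
    {"id": 3, "score": 0.85},
    {"id": 4, "score": 0.40},
    {"id": 5, "score": 0.03},
]

flagged = triage(feed)
print([f["id"] for f in flagged])                    # frames 1 and 3 go to a human
print(f"review load: {len(flagged)}/{len(feed)} frames")
```

The point of the pattern is the ratio in the last line: the machine watches everything, while the human watches only what clears the threshold. Tuning that threshold trades analyst workload against the risk of missed detections.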

References

Allen, Greg, and Taniel Chan. “Artificial Intelligence and National Security.” Publication. Belfer Center for Science and International Affairs, Harvard University. July 2017. Accessed April 9, 2018. https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf

Allen, John R., and Amir Husain. “The Next Space Race is Artificial Intelligence.” Foreign Policy. November 03, 2017. Accessed April 09, 2018. http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/.

China. State Council. Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan. July 20, 2017. Translated by Rogier Creemers, Graham Webster, Paul Triolo, and Elsa Kania.

Doubleday, Justin. 2017. “‘Project Maven’ Sending First FMV Algorithms to Warfighters in December.” Inside the Pentagon’s Inside the Army 29 (44). Accessed April 1, 2018. https://search-proquest-com.ezproxy2.apus.edu/docview/1960494552?accountid=8289.

Flournoy, Michèle A., and Robert P. Lyons. “Sustaining and Enhancing the US Military’s Technology Edge.” Strategic Studies Quarterly 10, no. 2 (2016): 3-13. Accessed April 12, 2018. http://www.jstor.org/stable/26271502.

Gams, Matjaz. 2017. “Editor-in-Chief’s Introduction to the Special Issue on ‘Superintelligence’, AI and an Overview of IJCAI 2017.” Informatica 41 (4): 383-386. Accessed April 14, 2018.

Louth, John, and Trevor Taylor. 2016. “The US Third Offset Strategy.” RUSI Journal 161 (3): 66-71. DOI: 10.1080/03071847.2016.1193360.

Sadler, Brent D. 2016. “Fast Followers, Learning Machines, and the Third Offset Strategy.” JFQ: Joint Force Quarterly no. 83: 13-18. Accessed April 13, 2018. Academic Search Premier, EBSCOhost.

Scharre, Paul, and SSQ. “Highlighting Artificial Intelligence: An Interview with Paul Scharre, Director, Technology and National Security Program, Center for a New American Security, Conducted 26 September 2017.” Strategic Studies Quarterly 11, no. 4 (2017): 15-22. Accessed April 10, 2018. http://www.jstor.org/stable/26271632.

“Science, Technology, Engineering and Math: Education for Global Leadership.” U.S. Department of Education. Accessed April 15, 2018. https://www.ed.gov/stem.
