Go Home, Occupy Movement!!

Anis H. Bajrektarevic

Ever since I coined the expression “McFB way of life” years ago, and particularly since my intriguing FB articles (Is there life after Facebook? I and II) were published, I have been confronted with numerous requests to clarify its meaning. My usual answer was a counter-question: if humans hardly ever question fetishisation or oppose (self-)trivialization, why then is the subsequent brutalization a surprise to them?

Without pretending to reveal a coherent theory, the following lines are my instructive findings, above all on why it is time to go home, de-pirate, and search for silence.

Largely drawing on the works of the grand philosophers of German Classicism and Dialectical Materialism, the sociologist Max Weber was the first among modern-age thinkers to note that the industrialized world was undergoing a rapid rationalization of its state (and other vital societal) institutions. This process, Weber points out, is characterized by increased efficiency, predictability, calculability, and control over any ‘threat’ of uncertainty. Uncertainty here should be understood in relation to the historically unstable precognitive and cognitive dynamics of humans, both individual and collective. A disheartened, cold and calculative over-rationalization might tip over into the obscurity of irrationality, Weber warns. His famous metaphor of the iron cage, or the irrationality of rationality, refers to his concern that an extremely rationalized (public) institution inevitably alienates itself and turns dehumanized towards both those who staff it and those it serves, with a tiny upper caste of controllers steadily losing touch with reality.

Revisiting, rethinking and rejuvenating Weber’s theory (but also those of Sartre, Heidegger, Lukács, Lefebvre, Horkheimer, Marcuse and Bloch), the US sociologist George Ritzer postulated that late-20th-century institutions are rationalized to such a degree that the entire state becomes ‘McDonaldized’, since the principles of the fast food industry have gradually pervaded other segments of society and the very aspects of everyday life (The McDonaldization of Society, a controversial and highly inspiring book written in accessible language in 1993).

Thus paraphrased, Ritzer states that (i) McEfficiency is achieved by the systematic elimination of unnecessary time or effort in pursuing an objective. As the economy has to be competitively productive just-in-time, society has to be efficient as well. According to this mantra, only a society whose forms and contents are governed by business models, and whose sociability runs on marketing principles, is a successfully optimized polity. Premium efficiency in the workplace (or across broader aspects of sociability) is attained by introducing F.W. Taylor’s and H. Ford’s assembly line into human resources and their intellectual activity (a sort of intellectual assembly line). Even an average daily exposure to the so-called news and headlines serves an instructive and directional rather than an informational and exploratory purpose. Hence, McEfficiency solidifies the system, protecting its karma and dharma from any spontaneity, digression, unnecessary questioning, experimenting or surprise.

(ii) McCalculability is an attempt to measure quality in terms of quantity, whereby quality becomes secondary, if a concern at all. The IT sector, along with the search engines and cyber-social clubs, has considerably contributed to the growing emphasis on calculability. Not only the fast food chains (1 billion meals, everybody-served-in-a-minute), but also Google, Facebook, TV reality shows and the like, as well as the universities, hospitals and travel agencies – all operate on a nearly fetishised and worshiped ‘most voted’, ‘frequently visited’, ‘most popular’, big-is-beautiful matrix. It is a calculability which mystically assures us that the Big Mac is always the best meal – given its quantity; that the best read is always a bestseller; and that the best song is the tune with the most clicks on YouTube. One of the most popular air carriers, AirAsia, has the slogan: Everyone can fly now. In a world where everyone is armed with mobile-launcher gadgets powered by micro-touch, soft-screen & scream tech to add to the noisy cacophony – amount, size, frequency, length and volume are all that matters. Thus, a number, a pure digit, becomes the (Burger) King. Long live Yahoo, the king! Many of my students have admitted to me that Google is, for them, more than a search engine; that googalization is in fact a well-domesticated method which considerably and frequently replaces cognitive selection when they prepare their assignments and exams. Ergo, instead of complementing the process of human reasoning, this k(l)icky-Wiki-picky method increasingly substitutes for it.

(iii) McPredictability is the key factor of the rationalized McDonalds process. On the broader scale, a rational (rationally optimized) society is one in which people know well beforehand what (and when) to expect. Hence, fast food is always mediocre – it never tastes very bad or very good. The parameter of McFood is therefore a surprise-less world in which both disappointment and delight are equally and considerably absent. McMeals will always blend uniform preparation and contents with a standardized serving-staff outfit and a customized approach. In the end, it is not about food at all. What makes McDonalds so durably popular is size, numbers and predictability. (All three are proportionately and causally objectivized and optimized: the meal, those who serve it and those who are served – until the locality and substance of each of the three becomes fluid, obsolete and irrelevant. And what could symbolize this relativization and /self-/trivialization better than a clown – the well-known mascot Ronald of McDonaldland?) In such an atmosphere of predictability, or better to say predictive seduction and gradual loss of integrity, the culture of tacit obedience (ignorance of one’s own irrelevance through corrosive addiction) breeds, even unspotted. Consequently, more similarities than differences are central to the question of predictability, on both ends: demand (expectation, possibility) and supply (determination, probability). No wonder that even the Pirates offer just routinized protests under a single simplified and uniform, ‘anonymous’ mask for all.

(iv) McControl represents the fourth and final Weberian aspect for Ritzer. Traditionally (ever since the age of cognitivity), humans have been the most unpredictable element, a variable for the rationalized, bureaucratic systems, so it is an imperative for the McOrganization to (pacify through) control. Nowadays, technology offers a variety of palliatives and tools for the effective control of both employees (supply, probability) and customers (demand, possibility), as well as for controlling the controllers. Self-articulation, indigenous opinion, spontaneous initiative and unconstrained action are rather simulated, yet very seldom stimulated. Only once the wide spectrum of possibilities has been quietly narrowed down does a limited field of probabilities appear so large. To this end, IT appliances are very convenient (cheap, discreet and invisible, but omnipresent and highly accurate), as they compute, pre-decide, channel and filter moves, and store and analyze behavior patterns with their heartless algorithms. (The ongoing SOPA, PIPA and ACTA fuss, or any other stringent regulation to come, does not constitute this cyber nature but only confirms and supplements it.)

Aided by the instruments of efficiency, calculability and predictability, control eliminates (or at least minimizes any serious impact of) authenticity, autonomous thinking and independent (non-consumerist) judgment. The depth and frequency of critical insights, and of unpredictable human actions driven by unexpected conclusions, are rationalized down to a beforehand calculable, and therefore tolerable, few. A hyper-rationalized, frigidly exercised, ultra-efficient, predictable and controlled environment also lends full coherence to the socio-asymmetric and dysfunctionally empathic atmosphere of disaffected but ultimately obedient subjects (‘guided without force’, ‘prompted without aim’, “poked, tweeted & fleshmobbed for ‘fun’”, ‘useful idiots’, ‘fitting the social machine without friction’). Hence, what is welcomed is not engagement but compliance: self-actualization through exploration challenges the status quo, while consumerism confirms it. Veneration of nullity!

Ergo, the final McSociety product is a highly efficient, predictable, computed, standardized, typified, instant, unison, routinized, addictive, imitative and controlled environment which is – paradoxically enough – mystified through the worshiping glorification (of scale). Subjects of such a society fetishise the system and trivialize their own contents – a smooth and nearly unnoticed trade-off. When aided by IT in massive, unselectively frequent and severe use within the scenery of huge shopping malls (enveloped by a consumerist fever and spiced up by an ever larger cyber-neurosis, delusional and psychosomatic disorders, and the functional illiteracy of misinformed, undereducated, cyber-autistic and egotistic under-aged and hardly-aged individuals – all caused by the constant (in)flow of clusters of addictive alerts on diverting banalities), it is an environment which epitomizes what I coined as the McFB way of life.

This is a cyber–iron cage habitat: a shiny but directional and instrumented, egotistic and autistic, cold and brutal place; incapable of vision, empathy, initiative or action. It only accelerates our disconnection from selfhood and from the rest. If and while so, is there any difference between the Gulag and the Goo(g)lag – both being prisons of the free mind? Contrary to the established rhetoric, courage, solidarity, vision and initiative have, throughout human history, been far more monitored, restricted, stigmatized and prosecuted than enhanced, supported and promoted – as they have traditionally been perceived as a threat to the inaugurated order, a challenge to the functioning status quo, defiant of the dogmatic conscripts of admitted, permissible, advertised, routinized, recognized and prescribed social conduct.

Elaborating on Fukuyama’s well-known argument of ‘defensive modernization’, it can be stated that throughout the entire human history the technological drive was aimed at satisfying the security (and control) objective; it was rarely (if at all) driven by a desire to (enlarge the variable and to) ease human existence or to enhance human emancipation and the liberation of societies at large. Thus, unless operationalized by the system, both intellectualism (human autonomy, mastery and purpose) and technological breakthroughs were traditionally felt and perceived as a threat.

Consequently, all cyber-social networks and related search engines are far from what they are portrayed to be: a decentralized but unified intelligence, attracted by the gravity of quality rather than navigated by the force of a specific locality. In fact, they primarily serve the purposes of predictability, efficiency, calculability and control, and only then everything else – such as being user-friendly and attractive as a mass service. To observe the new corrosive dynamics of social phenomenology between manipulative fetishisation (probability) and self-trivialization (possibility), the cyber-social platforms – these dustbins of human empathy in the muddy suburbs of consciousness – are particularly interesting.

Facebook itself is a perfect example of how to utilize (to simulate, instead of stimulating and empathically living) human contents. Its toolkit offers an efficient, rationalized, predictable, clean, transparent and, most intriguing of all, very user-friendly and convenient reduction of all possible relations between two individuals: ‘friend’ or ‘no-friend’. It sets a universally popular language, so standardized and uncomplicated that even an anonymous machine can understand it – a binary code: ‘1’ (friend), ‘0’ (no-friend), or eventually ‘1’ (brother/sister), ‘1/0’ (friend), ‘0’ (no-friend) – just two digits to feed precise algorithmic calculations. Remember, the number is the king. Gott ist tot (God is dead), dear Nietzsche – and so are men.
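The reduction described above can be sketched in a few lines of code. This is purely illustrative: the relation names and weights below are hypothetical, not any real platform’s data model.

```python
# Illustrative sketch only: relationships reduced to digits, as the text
# describes. Names and weights are invented, not Facebook's actual API.
RELATION_VALUE = {"sibling": 1.0, "friend": 0.5, "no-friend": 0.0}

def popularity(relations):
    """'Calculability': a whole social life collapsed into one number."""
    return sum(RELATION_VALUE[r] for r in relations)

# Two very different social worlds become indistinguishable digits:
print(popularity(["sibling", "friend"]))              # 1.5
print(popularity(["friend", "friend", "friend"]))     # 1.5
```

Once every relation is a digit, ranking, counting and comparison become trivial, which is exactly the point the paragraph makes.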

Be it occupied or besieged, McDonalds will keep up its menu. Instead, we should finally occupy ourselves by de-pirating the enormous tweet/mob noise pollution in and all around us. It is high time to replace the dis-conceptual flux on the streets with a silent reflection at home.
Sorry, Garcin, hell is not other people. Hell are we!!

Post Scriptum:

In his emotionally charged speech of December 2011, President Obama openly warned US citizens: “Inequality distorts our democracy. It gives an outsized voice to the few who can afford high-priced lobbyists (…) the wealthiest Americans are paying the lowest taxes in over half a century (…) Some billionaires have a tax rate as low as 1%. One per cent! (…) The free market has never been a free license to take whatever you want from whoever you can…”
(Osawatomie High School, Kansas, 6 December 2011; White House Press Release).

Two months before that speech, the highly respected, politically balanced and bipartisan Congressional Budget Office (CBO) released its study “Trends in the Distribution of Household Income between 1979 and 2007” (October 2011). The CBO finds that, between 1979 and 2007, income grew by 275% for the top 1% of US households, by 65% for the next 19% of households, by less than 40% for the following 60% of households, and by only 18% for the bottom 20% of US households. If we factor in inflation over the examined period of nearly 30 years, the nominal growth would turn into a decline in real incomes for almost 80% of US households, a single-digit real income increase for the upper 19% of households, and still a three-digit income growth for the top 1% of the population.
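The nominal-to-real conversion the paragraph invokes is simple arithmetic; a quick sketch follows. The cumulative inflation figure is an assumption for illustration only, not a number taken from the CBO study.

```python
# Back-of-the-envelope sketch of the paragraph's nominal-vs-real point.
# ASSUMED_INFLATION is a made-up illustrative figure, not CBO data.
def real_growth(nominal_growth, cumulative_inflation):
    """Cumulative real growth implied by cumulative nominal growth."""
    return (1 + nominal_growth) / (1 + cumulative_inflation) - 1

ASSUMED_INFLATION = 1.6   # an assumed ~160% price-level rise over ~28 years

print(real_growth(0.18, ASSUMED_INFLATION) < 0)          # bottom 20%: True (negative in real terms)
print(round(real_growth(2.75, ASSUMED_INFLATION), 2))    # top 1%: 0.44, still strongly positive
```

Under any cumulative inflation above 18%, the bottom group’s 18% nominal gain turns negative in real terms, which is the arithmetic behind the paragraph’s claim.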

According to the available internet search engine counters, this CBO study has been retrieved 74,000 times since it was posted some 3 months ago. For the sake of comparison, an average clip of the great-granddaughter of the ultra-rich billionaire Conrad Hilton has been clicked on YouTube over 31 million times. Roughly 3 million Americans would represent the top 1% of the population. Who are the other 99% – pardon, 28 million individuals – interested in the trivial clips (with obscure but explicit lines: They can’t do this to me, I’m rich) of Miss Paris?

Remember what I asked at the beginning of this article: If humans hardly ever question fetishisation or oppose the (self-) trivialization, why then is the subsequent brutalization a surprise to them?

*  This is the so-called FB3 article (Is there life after Facebook? III – the Cyber Goo(g)lag Revelations). An early version was first published by the US Journal of Foreign Relations (12 January 2012).

References:

1.    Weber, M. (1951), Wirtschaft und Gesellschaft – Grundriss der verstehenden Soziologie (Economy and Society), Tübingen: J.C.B. Mohr (Paul Siebeck)
2.    Ritzer, G. (1993), The McDonaldization of Society: An Investigation into the Changing Character of Contemporary Social Life, Thousand Oaks, CA: Pine Forge Press
3.    Zappa, F.V. (1989), The Real Frank Zappa Book, Touchstone (1999 Edition)
4.    Schlitz, M. (1998), On consciousness, causation and evolution, Journal of Parapsychology (61: 185-96)   
5.    Fukuyama, F. (2002), Our Posthuman Future – Consequences of the Biotech Revolution, Profile Books, London (page: 126/232)
6.    Bajrektarevic, A. (2004), Environmental Ethics – Four Societal Normative Orders, Lectures/Students Reader, Vienna (IMC University Krems), Austria
7.    Mumford, L. (1967), Technics and Human Development – Myth of the Machine (Vol. 1), Mariner Books (Ed. 1971)  
8.    McTaggart, L. (2001), The Field, HarperCollins Publishers

Anis H. Bajrektarevic: Chairman, Modern Diplomacy Advisory Board; Editorial Member, Geopolitics of Energy; Professor and Chairperson for Intl. Law & Global Pol. Studies. Contact: anis@bajrektarevic.eu

The Ethical and Legal Issues of Artificial Intelligence

Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley going down the railway line. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?

(Illustration of the trolley problem. Source: Wikimedia.org)

There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise, for example, if a self-driving vehicle is travelling along a road in a situation where an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. A special website has been created by the Massachusetts Institute of Technology that deals with this very issue: users can test various scenarios out on themselves and decide which courses of action would be the most worthwhile.

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives at Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded to this immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates its citizens based on how law-abiding and how useful to society they are, etc. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently from their creators or operators thus complicating the task of determining responsibility. These characteristics pose problems related to predictability and the ability to act independently while at the same time not being held responsible [3].

There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of ownership, namely animals, since the latter are also capable of autonomous actions. In Russian Law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the personality or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals on the application of the law on animals have been made [4], although they are somewhat limited. First, the application of legislation on the basis of analogy is unacceptable within the framework of criminal law. Second, these laws have been created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply similar rules to those that regulate the keeping of wild animals, since the rules governing wild animals are more stringent [5]. The question arises here, however, of how to draw such a distinction in a way that takes account of the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies due to the unexpected risks of liability for creators and inventors.

Another widespread suggestion is to apply similar norms to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be sufficiently flexible and grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit signs of the ability to do anything are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, both in developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can be considered property and what cannot. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy of legal entities, however, is problematic, as the concept of legal entity is necessary in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed to be criminally liable if an individual performing the illegal action on behalf of the legal entity is determined [10]. The actions of artificial intelligence-based systems will not necessarily be traced back to the actions of an individual.

Finally, legal norms on the sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure circumstances or at the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in the report prepared under the supervision of French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning behind this is to focus the potential of France’s comparative advantages and competencies in artificial intelligence on sectors where companies can play a key role at the global level, and on areas where these technologies are important for the public interest, etc.

Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. However, algorithms can be biased: in the process of self-learning, for example, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in the sentencing process was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore not presented. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, and so on.
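The bias-absorption mechanism described above can be shown with a deliberately toy example (not any real risk-scoring tool): a model that never sees group membership can still reproduce bias baked into its training data.

```python
# Toy illustration of bias absorbed from data (not any real system).
# The score never sees who a person is, only an arrest count - but that
# count already reflects how heavily a neighbourhood was policed.
def risk_score(actual_offences, patrol_intensity):
    """A naive 'recidivism risk' on a 0-10 scale, driven by observed arrests."""
    observed_arrests = actual_offences * patrol_intensity
    return min(10, round(observed_arrests))

# Identical behaviour, different policing -> different 'risk':
print(risk_score(2, 1.0))   # lightly policed area: 2
print(risk_score(2, 3.0))   # heavily policed area: 6
```

Because the input data already encodes an unequal policing pattern, the “neutral” arithmetic reproduces it, which is precisely why the strategy calls for algorithms that can be tested and verified.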

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
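One possible reading of the resolution’s idea that liability “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy” is an apportionment rule. The sketch below is purely illustrative; the weighting formula is invented here, not taken from the EU text.

```python
# Hypothetical sketch of the resolution's proportionality idea; the
# weighting formula is invented, not part of the EU resolution.
def liability_split(instruction_level, autonomy_level):
    """Apportion liability between the instructing user and the producer side.

    Both inputs are on a 0..1 scale; greater autonomy shifts liability away
    from the person who gave the instructions (toward producer/insurer/fund).
    """
    user_share = instruction_level * (1 - autonomy_level)
    return round(user_share, 2), round(1 - user_share, 2)

print(liability_split(1.0, 0.0))  # fully instructed, zero autonomy: (1.0, 0.0)
print(liability_split(0.2, 0.9))  # highly autonomous robot: (0.02, 0.98)
```

The residual producer-side share is what the resolution’s compulsory insurance scheme and compensation fund would then absorb.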

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence the attitude towards artificial intelligence and its legal implementation. Therefore, our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically, a goat) was exiled to Siberia for its actions [12].

First published at our partner RIAC

  1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
  2. Asaro P., "From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby," in Wheeler M., Husbands P., and Holland O. (eds.) The Mechanical Mind in History, Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov V., Naumov V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov V., Naumov V. Op. cit., p. 164.
  7. See, for example, Winkler A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, in the European Union, and in some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek B., Jakubiec M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo U. The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.

Tech

Busting the Blockchain Hype: How to Tell if Distributed Ledger Technology is Right for You

MD Staff


Blockchain has been hailed as the solution to everything, from resolving global financial inequality and providing IDs for refugees to enabling people to sell their houses without an estate agent. However, the overwhelming hype surrounding this technology over the past year is largely misleading.

“We have been up and down on the blockchain roller coaster this past year,” said Sheila Warren, head of the Blockchain and Distributed Ledger Technology project at the World Economic Forum Center for the Fourth Industrial Revolution. “Blockchain is an innovative solution, but it is not the solution to all problems. Blockchain has to be the right solution for the right business problem. Busting the blockchain hype is necessary to make sure businesses are using it in the right way and not damaging the long-term prospects of the technology.”

Through research and analysis of the technology's capabilities and the ways it is used around the world, the team found that at most 11 questions need to be answered to determine whether blockchain can be the solution.

“To bust some of the blockchain hype, we had to design a practical framework for people who knew nothing about the technology. We started with the premise that blockchain is like any other technology – a tool in a company’s toolbox,” said Cathy Mulligan, Visiting Researcher at Imperial College London and member of the Forum’s Global Future Council on Blockchain. “If you break down the kinds of problems blockchain technology is solving and its potential, clear paths emerge.”

The paths were incorporated into a framework of "yes" and "no" questions, which guide a business leader once a specific problem is articulated. "This framework cuts through the noise about blockchain and refocuses the technology into the way business leaders think," said Jennifer Zhu Scott, Founding Partner of Radian and member of the Global Future Council on Blockchain.
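The Forum's actual 11 questions are not reproduced in this article, but the yes/no decision structure it describes can be sketched in code. The example questions below are illustrative assumptions drawn from commonly cited blockchain-suitability criteria, not the framework's real content:

```python
# Hypothetical sketch of a yes/no suitability framework for blockchain.
# Each entry pairs a question with the answer that keeps blockchain
# "on the table"; a single mismatch routes the decision away from it.
QUESTIONS = [
    ("Are you trying to remove intermediaries or brokers?", True),
    ("Do multiple parties need to write to a shared data store?", True),
    ("Do those writers distrust one another?", True),
    ("Can you rely on an existing trusted third party instead?", False),
]

def blockchain_may_fit(answers):
    """Return True only if every answer matches the 'continue' branch.

    `answers` maps each question string to the respondent's True/False
    reply. The real framework walks a decision tree question by
    question; this flat all-or-nothing check is a simplification.
    """
    return all(answers[q] == expected for q, expected in QUESTIONS)

if __name__ == "__main__":
    answers = {
        "Are you trying to remove intermediaries or brokers?": True,
        "Do multiple parties need to write to a shared data store?": True,
        "Do those writers distrust one another?": True,
        "Can you rely on an existing trusted third party instead?": False,
    }
    print(blockchain_may_fit(answers))  # every answer matches -> True
```

The point of such a checklist, as the Forum's team argues, is that most business problems fail at least one gate, in which case a conventional database is the better tool.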

“These 11 questions were developed and then trialled with chief executive officers at a workshop at the World Economic Forum Annual Meeting 2018. The test group included C-suite executives from large corporations, most of whom said they were actively considering adopting blockchain technology in some manner,” said JP Rangaswami, Chief Data Officer, Deutsche Bank.

During the workshop, one publicly listed energy company discussed its plans for an initial coin offering (ICO), and a large bank shared how it was considering using blockchain-based crypto-tokens for transferring remittances. Even in the much-debated cryptocurrency space, all of the participants believed that the token economy would outlast any bursting of the cryptocurrency bubble.

Tech

The Artificial Intelligence Race: U.S., China and Russia

Ecatarina Garcia


Artificial intelligence (AI), the field that encompasses machine learning, has the potential to drastically impact a nation's national security in various ways. Dubbed the next space race, the race for AI dominance is both intense and necessary for nations seeking to remain preeminent in an evolving global environment. As technology develops, so does the volume of digital information and the ability to operate at optimal levels by taking advantage of this data. Furthermore, the proper use and implementation of AI can help a nation achieve information, economic, and military superiority – all ingredients for maintaining a prominent place on the global stage. According to Paul Scharre, "AI today is a very powerful technology. Many people compare it to a new industrial revolution in its capacity to change things. It is poised to change not only the way we think about productivity but also elements of national power." AI is not only the future of economic and commercial power; it also has numerous military applications bearing on the national security of every aspiring global power.

While the U.S. is the birthplace of AI, other states have taken research and development seriously, given the potential global gains. Three of the world's biggest players – the U.S., Russia, and China – are entrenched in a non-kinetic battle to outpace one another in AI development and implementation. Due to the considerable advantages artificial intelligence can provide, it is now a race between these players to master AI and integrate the capability into military applications in order to assert power and influence globally. As AI becomes more ubiquitous, it is no longer the stuff of next-generation science fiction; its potential to provide strategic advantage is clear. To capitalize on that advantage, the U.S. is seeking a deliberate strategy to position itself permanently at the top tier of AI implementation.

Problem

The current reality is that near-peer competitors are leading in AI or closing the gap with the U.S. Notably, Allen and Husain indicate that the problem is exacerbated by the absence of AI from the national agenda, diminishing science and technology funding, and the public availability of AI research. The U.S. has enjoyed a technological edge that, at times, enabled military superiority over near-peers. There is an argument, however, that the U.S. is losing its grasp on that advantage. As Flournoy and Lyons indicate, China and Russia are investing massively in research and development efforts to produce technologies and capabilities "specifically designed to blunt U.S. strengths and exploit U.S. vulnerabilities."

The technological capabilities once unique to the U.S. have now proliferated across both nation-states and non-state actors. As Allen and Chan indicate, "initially, technological progress will deliver the greatest advantages to large, well-funded, and technologically sophisticated militaries. As prices fall, states with budget-constrained and less technologically-advanced militaries will adopt the technology, as will non-state actors." As an example, the American use of unmanned aerial vehicles in Iraq and Afghanistan provided a technological advantage in the battle space. But as prices for this technology drop, non-state actors like the Islamic State are making noteworthy use of remotely-controlled aerial drones in their military operations. While the aforementioned is part of the issue, more concerning is the fact that the Department of Defense (DoD) and the U.S. defense industry are no longer the epicenter of next-generation advancements. Rather, the most innovative development is occurring in private commercial companies. Unlike China and Russia, the U.S. government cannot completely direct the activities of industry for purely governmental or military purposes. This has certainly been a major factor in the closing of the gap in the AI race.

Furthermore, the U.S. is falling behind China in the quantity of studies produced on AI, deep learning, and big data. For example, of the AI-related papers submitted to the International Joint Conferences on Artificial Intelligence (IJCAI) in 2017, China accounted for a leading 37 percent, whereas the U.S. took third place at only 18 percent. While quantity is not everything (U.S. researchers won the most awards at IJCAI 2017, for example), China's industry innovations were formally described as "astonishing." For these reasons, the U.S. must overcome several strategic challenges to maintain its lead in the AI race.

Perspectives

Each of the three nations has taken a divergent approach to defining and addressing this problem. One common theme among them, however, is the understanding of AI's importance as an instrument of international competitiveness as well as a matter of national security. Sadler writes that "failure to adapt and lead in this new reality risks the U.S. ability to effectively respond and control the future battlefield." Yet the U.S. can no longer "spend its way ahead of these challenges." The U.S. has developed what is termed the third offset, which Louth and Taylor define as a radical policy shift in the way the U.S. delivers defense capabilities to meet the perceived challenges of a fundamentally changed threat environment. The continuous development and improvement of AI requires a comprehensive plan and partnership with industry and academia. To frame this issue, two DoD-directed studies, the Defense Science Board Summer Study on Autonomy and the Long-Range Research and Development Planning Program, highlighted five critical areas for improvement: (1) autonomous deep-learning systems, (2) human-machine collaboration, (3) assisted human operations, (4) advanced human-machine combat teaming, and (5) network-enabled semi-autonomous weapons.

Like its U.S. counterpart, the Russian leadership has stressed the importance of AI on the modern battlefield. Russian President Vladimir Putin commented, "Whoever becomes the leader in this sphere (AI) will become the ruler of the world." This is not mere rhetoric: Russia's Chief of the General Staff, General Valery Gerasimov, has likewise predicted "a future battlefield populated with learning machines." Following the Russian-Georgian war, Russia developed a comprehensive military modernization plan. Notably, a mainstay of the 2008 modernization plan was the development of autonomous military technology and weapon systems. According to Renz, "The achievements of the 2008 modernization program have been well-documented and were demonstrated during the conflicts in Ukraine and Syria."

China, understanding the global impact of this issue, has dedicated research, money, and education to a comprehensive state-sponsored plan. In July 2017, China's State Council published a document entitled "New Generation Artificial Intelligence Development Plan." It lays out a top-down approach that explicitly maps out the nation's development of AI, including goals reaching all the way to 2030. The Chinese leadership also highlights this priority in stating the necessity of AI development:

AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security; intensifying the introduction of plans and strategies for this core technology, top talent, standards and regulations, etc.; and trying to seize the initiative in the new round of international science and technology competition. (China’s State Council 2017).

The plan addresses everything from building basic AI theory to partnerships with industry to fostering educational programs and building an AI-savvy society.

Recommendations

Recommendations for fostering the U.S.'s AI advancement include focusing efforts on further proliferating Science, Technology, Engineering and Math (STEM) programs to develop the next generation of developers. This mirrors China's AI development plan, which calls for the country to "accelerate the training and gathering of high-end AI talent," a lofty goal whose sub-steps include constructing an AI academic discipline. While there are STEM programs in the U.S., according to the U.S. Department of Education, "The United States is falling behind internationally, ranking 29th in math and 22nd in science among industrialized nations." To maintain the top position in AI, the U.S. must continue to develop and attract the top engineers and scientists. This requires both a deliberate plan for academic programs and the funding and incentives to develop and maintain these programs across U.S. institutions. Perhaps most importantly, the United States needs a strategy to entice more top American students to invest their time and attention in this proposed new discipline. Chinese and Russian students easily outpace American students in this area, especially in sheer numbers.

Additionally, the U.S. must research and capitalize on the dual-use capabilities of AI. Leading companies such as Google and IBM have made enormous headway in developing algorithms and machine learning. The Department of Defense should leverage these commercial advances to determine relevant defense applications. Part of this partnership with industry, however, must also consider the inherent national security risks that AI development can present, thus introducing a regulatory role for commercial AI development. The role of the U.S. government with respect to the AI industry therefore cannot be merely that of a consumer, but must also be that of a regulator. The risk, of course, is that this effort to honor the principles of ethical and transparent development will not be mirrored in the competitor nations of Russia and China.

Given China's population and lax data protection laws, the U.S. has to develop innovative ways to overcome this challenge in machine learning and artificial intelligence. China's large population creates both a larger pool of prospective engineers and a massive volume of data gleaned from its internet users. Part of the solution is investment. A White House report on AI indicated that "the entire U.S. government spent roughly $1.1 billion on unclassified AI research and development in 2015, while annual U.S. government spending on mathematics and computer science R&D is $3 billion." If the U.S. government considers AI an instrument of national security, then AI requires financial backing comparable to other fifth-generation weapon systems. Furthermore, innovative programs such as the DoD's Project Maven must become a mainstay.

Project Maven, a pilot program implemented in April 2017, was tasked with producing algorithms to cope with big data and provide machine learning that relieves the manual human burden of watching full-motion video feeds. The project was expected to deliver algorithms to the battlefield by December 2018 and involved partnerships with four unnamed startup companies. The U.S. must implement more programs like this that incentivize partnerships with industry to develop or redesign current technology for military applications. To maintain its technological advantage far into the future, the U.S. must facilitate expansive STEM programs, capitalize on the dual-use potential of AI technologies, provide fiscal support for AI research and development, and implement expansive, innovative partnership programs between industry and the defense sector. Unfortunately, at the moment, all of these measures are being pursued only partially. Meanwhile, countries like Russia and China appear more successful at developing their own versions, unencumbered by 'obstacles' like democracy, the rule of law, and unfettered free-market competition. The AI race is upon us. And the future seems to be a wild one indeed.

References

Allen, Greg, and Taniel Chan. “Artificial Intelligence and National Security.” Publication. Belfer Center for Science and International Affairs, Harvard University. July 2017. Accessed April 9, 2018. https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf

Allen, John R., and Amir Husain. “The Next Space Race is Artificial Intelligence.” Foreign Policy. November 03, 2017. Accessed April 09, 2018. http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/.

China. State Council. Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan. July 20, 2017. Translated by Rogier Creemers, Graham Webster, Paul Triolo and Elsa Kania.

Doubleday, Justin. 2017. "'Project Maven' Sending First FMV Algorithms to Warfighters in December." Inside the Pentagon's Inside the Army 29 (44). Accessed April 1, 2018. https://search-proquest-com.ezproxy2.apus.edu/docview/1960494552?accountid=8289.

Flournoy, Michèle A., and Robert P. Lyons. “Sustaining and Enhancing the US Military’s Technology Edge.” Strategic Studies Quarterly 10, no. 2 (2016): 3-13. Accessed April 12, 2018. http://www.jstor.org/stable/26271502.

Gams, Matjaz. 2017. "Editor-in-chief's Introduction to the Special Issue on "Superintelligence", AI and an Overview of IJCAI 2017." Informatica 41 (4): 383-386. Accessed April 14, 2018.

Louth, John, and Trevor Taylor. 2016. “The US Third Offset Strategy.” RUSI Journal 161 (3): 66-71. DOI: 10.1080/03071847.2016.1193360.

Sadler, Brent D. 2016. “Fast Followers, Learning Machines, and the Third Offset Strategy.” JFQ: Joint Force Quarterly no. 83: 13-18. Accessed April 13, 2018. Academic Search Premier, EBSCOhost.

Scharre, Paul, and SSQ. "Highlighting Artificial Intelligence: An Interview with Paul Scharre, Director, Technology and National Security Program, Center for a New American Security, Conducted 26 September 2017." Strategic Studies Quarterly 11, no. 4 (2017): 15-22. Accessed April 10, 2018. http://www.jstor.org/stable/26271632.

“Science, Technology, Engineering and Math: Education for Global Leadership.” Science, Technology, Engineering and Math: Education for Global Leadership. U.S. Department of Education. Accessed April 15, 2018. https://www.ed.gov/stem.


Copyright © 2018 Modern Diplomacy