Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.
Ethics and Artificial Intelligence
There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley hurtling down the railway tracks. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?
There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.
As for artificial intelligence, such a situation could arise, for example, if a self-driving vehicle is travelling along a road in a situation where an accident is unavoidable. The question thus arises as to whose lives should take priority – those of the passengers, the pedestrians or neither. A special website has been created by the Massachusetts Institute of Technology that deals with this very issue: users can test out various scenarios on themselves and decide which courses of action would be the most worthwhile.
Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as a basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives at Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded to this immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.
Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates its citizens based on how law-abiding and how useful to society they are, etc. Those with low ratings will face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.
The Main Problems Facing the Law
The legal problems run even deeper, especially in the case of robots. A system that learns from information it receives from the outside world can act in ways that its creators could not have predicted, and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, thus complicating the task of determining responsibility. These characteristics pose two related problems: unpredictability, and the ability to act independently without being held responsible.
There are numerous options in terms of regulation, including regulation that is based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as items subject to copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that regulate a special kind of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the person or property of an individual shall be subject to full compensation by the person who inflicted the damage.
Proposals on the application of the law on animals have been made, although they are somewhat limited. First, the application of legislation on the basis of analogy is unacceptable within the framework of criminal law. Second, these laws have been created primarily for household pets, which we can reasonably expect will not cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since the rules governing wild animals are more stringent. The question arises here, however, of how to draw this distinction given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies due to the unexpected risks of liability for creators and inventors.
Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities. Since a legal entity is an artificially constructed subject of the law, robots can be given similar status. The law can be sufficiently flexible to grant rights to just about anybody. It can also restrict rights. For example, historically, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that do not demonstrate any explicit capacity to act are vested with rights. Even today, there are examples of unusual objects that are recognized as legal entities, both in developed and developing countries. In 2017, a law was passed in New Zealand recognizing the status of the Whanganui River as a legal entity. The law states that the river is a legal entity and, as such, has all the rights, powers and obligations of a legal entity. The law thus transformed the river from a possession or property into a legal entity, which expanded the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.
Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.
The analogy of legal entities, however, is problematic, as the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if the individual who performed the illegal action on behalf of the legal entity is identified. The actions of artificial intelligence-based systems will not necessarily be traced back to the actions of an individual.
Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure circumstances or at the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.
National and International Regulation
Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.
I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.
In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations made in the report prepared under the supervision of French mathematician and National Assembly deputy Cédric Villani. The decision was made for the strategy to be aimed at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning behind this is to focus the potential of France’s comparative advantages and competencies in artificial intelligence on sectors where companies can play a key role at the global level, and because these technologies are important for the public interest.
Seven key proposals are given, one of which is of particular interest for the purposes of this article – namely, to make artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. However, algorithms can be biased; for example, in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or which are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence on the basis of information obtained from an algorithm predicting the likelihood of repeat offences being committed. The defendant’s appeal against the use of an algorithm in the sentencing process was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, determining the ethical responsibility of those working in artificial intelligence, creating an ethics advisory committee, etc.
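The kind of transparency the strategy calls for can be made concrete with a very small audit. The Python sketch below compares a risk-scoring model's false positive rates across two hypothetical demographic groups; every record, group label and name here is invented for illustration and has no connection to the actual case described above.

```python
# Hypothetical audit: does a risk-scoring model wrongly flag
# non-reoffenders at different rates for different groups?
# All records below are invented for illustration.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    """Share of actual non-reoffenders the model flagged as high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))  # A 0.67, B 0.0
```

A disparity like this one does not by itself prove discrimination, but it is exactly the sort of check that becomes impossible when the scoring criteria are withheld as a trade secret.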
The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.
The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”
The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).
The examples provided in this article thus demonstrate, among other things, how social values influence the attitude towards artificial intelligence and its legal implementation. Therefore, our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can have a transformational effect. Legal personality determines what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.
Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.
In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when animals (specifically goats) were exiled to Siberia for their actions.
First published at our partner RIAC
- 1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
- 2. Asaro P., “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler M., Husbands P., and Holland O. (eds.) The Mechanical Mind in History, Cambridge, MA: MIT Press: pp. 149–184
- 3. Asaro P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA. March 21–23, 2016, p. 191.
- 4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
- 5. Asaro P. The Liability Problem for Autonomous Artificial Agents, p. 193.
- 6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
- 7. See, for example, Winkler A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
- 8. In countries that use the Anglo-Saxon legal system, the European Union and some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
- 9. Brożek B., Jakubiec M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence Law. 2017, No. 25(3), pp. 293–304.
- 10. Khanna V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
- 11. Hage J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence Law. 2017, No. 25(3), pp. 255–271.
- 12. U. Pagallo, The Laws of Robots. Crimes, Contracts, and Torts. Springer, 2013, p. 36.
The Dark Ghosts of Technology
Over the last many decades, we may accidentally have missed the boat on understanding equality, diversity and tolerance; nevertheless, how obediently and intentionally we worshipped technology, no matter how dark or destructive a shape it morphed into. Enslaved to ‘dark-technology’, our faith remained untarnished, fortified by the belief that it would lead us on as a smarter and more successful nation.
How wrong can we get, how long in the spell, will we ever find ourselves again?
The dumb and dumber state of affairs: extreme and out-of-control technology has taken human performance in ‘real-value-creation’ hostage; crypto-corruption has overtaken economies; shiny chandeliers now only cast giant shadows; tribalism nurtures populism; and socio-economic gibberish in social media narratives now passes as the new intellectualism.
Only the mind is where critical thinking resides, not in some app.
The most obvious missing link is the abandonment of our own deeper thinking. By ignoring critical thinking and comfortably accepting our own programming, labelled ‘artificial intelligence’, we forget that in AI there is nothing artificial, just our own ‘ignorance’ repackaged and branded. AI is not some runaway train; there is always a human driver in the engine room, go check. ‘Mechanized-programming’, sensationalized by Hollywood as ‘celestially-gifted-artificial-intelligence’, now corrupts the global populace into assuming we are somehow in the safe hands of some bionic era of robotized smartness. All of it designed and suited to sell undefined glittering crypto-economies under complex jargon, with illusions of great progress. The shiny towers of glittering cities are already drowning in their own tent-cities.
A century ago, knowing how to use a pencil sharpener, a stapler or a filing cabinet got us a job; today, even with 100+ miscellaneous business or technology-related skills, little or nothing is considered a big value-added gainer. Nevertheless, the Covidians, the survivors of the Covid-19 cruelties, are now lining up at the gates like regimented disciples. There never was such a universal gateway to a common frontier, or such a massive assembly of the largest mindshare in human history.
Some of the harsh lessons acquired while gasping during the pandemic were to isolate techno-logy with brain-ology. Humankind needs humankind solutions, where progress is measured based on common goods. Humans will never be bulldozers but will move mountains. Without mind, we become just broken bodies, in desperate search for viagra-sunrises, cannabis-high-afternoons and opioid-sunsets dreaming of helicopter-monies.
Needed more is the mental-infrastructuring to cope with the platform economies of the global age, and not necessarily the cemented-infrastructuring to manage railway crossings. The new world already left the station a while ago. Chase the brain, not the train. How will all this new thinking affect the global populace and the 100 new national elections scheduled over the next 500 days? The world of Covidians is in one boat; the commonality of problems is bringing them closer on key issues.
Newspapers across the world dying; finally, world-maps becoming mandatory readings of the day
Smart leadership must develop smart economies to create the real ‘need’ of the human mind and not just jobs, later rejected as obsolete against robotization. Across the world, damaged economies are visible. Lack of pragmatic support for small and medium businesses, micro-mega exports, mini-micro-manufacturing, and the upskilling and reskilling of the national citizenry are all clear measurements pointing to national failures. Unlimited rainfall of money will not save us, but respectable national occupationalism will. Study ‘population-rich-nations’ and the new entrapments of ‘knowledge-rich-nations’ on Google, and also join Expothon Worldwide in its ‘global debate series’ on such topics.
Emergency meetings are required; before relief funding expires, get ready with the fastest methodologies to create national occupationalism, at any cost, or prepare for fast waves of populism surrounded by almost broken systems. Bold nations need smart play; national debates and discussions on common-sense ideas to create local grassroots prosperity and national mobilization of the hidden talents of the citizenry to stand up to the global standard of competitive productivity of national goods and services.
The rest is easy
China and AI needs in the security field
On the afternoon of December 11, 2020, the Political Bureau of the Central Committee of the Communist Party of China (CPC) held the 26th Collective Study Session devoted to national security. On that occasion, the General Secretary of the CPC Central Committee, Xi Jinping, stressed that the national security work was very important in the Party’s management of State affairs, as well as in ensuring that the country was prosperous and people lived in peace.
In view of strengthening national security, China needs to adhere to the general concept of national security; to seize and make good use of an important and propitious period at the strategic level for the country’s development; and to integrate national security into all aspects of the CPC and State’s activity and consider it in planning economic and social development. In other words, it needs to build a security model in view of promoting international security and world peace and offering strong guarantees for the construction of a modern socialist country.
In this regard, a new cycle of AI-driven technological revolution and industrial transformation is on the rise in the Middle Empire. Driven by new theories and technologies such as the Internet, mobile phone services, big data, supercomputing, sensor networks and brain science, AI offers new capabilities and functionalities such as cross-sectoral integration, human-machine collaboration, open intelligence and autonomous control, and has a major and far-reaching impact on economic development, social progress, global governance and other areas.
In recent years, China has deepened its understanding of AI’s significance and development prospects in many important fields. Accelerating the development of a new AI generation is an important strategic starting point for rising to the challenge of global technological competition.
What is the current state of AI development in China? What are the current development trends? How will the safe, orderly and healthy development of the industry be oriented and led in the future?
The current gap between China’s AI development and the advanced international level is not very wide, but the quality of enterprises must be “matched” with their quantity. For this reason, efforts are being made to expand application scenarios by enhancing data and algorithm security.
The concept of third-generation AI is already advancing and progressing, and there are hopes of solving the security problem through technical means other than policies and regulations, i.e. other than mere talk.
AI is a driving force for the new stages of technological revolution and industrial transformation. Accelerating the development of a new AI generation is a strategic issue for China to seize new opportunities in the organisation of industrial transformation.
It is commonly argued that AI has gone through two generations so far. AI1 is based on knowledge, also known as “symbolism”, while AI2 is based on data, big data, and their “deep learning”.
AI began to be developed in the 1950s with the famous test of Alan Turing (1912-1954), and in 1978 the first studies on AI started in China. Progress in AI1, however, was relatively modest. The real progress has mainly been made over the last 20 years – hence AI2.
AI is best known through the traditional information industry, typically Internet companies. These companies have acquired and accumulated a large number of users in the course of their development, and have then established corresponding patterns or profiles based on these acquisitions, i.e. the so-called “knowledge graph of user preferences”. Taking the delivery of some products as an example, tens or even hundreds of millions of data points consisting of users’ and dealers’ positions, as well as information about the location of potential buyers, are incorporated into a database and then matched and optimised through AI algorithms: all this obviously enhances the efficacy of trade and the speed of delivery.
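The matching step described above can be sketched in a few lines. The toy Python example below simply assigns each buyer to the nearest dealer by straight-line distance; the names and coordinates are invented, and production systems naturally use far richer models, constraints and data volumes.

```python
import math

# Invented positions (x, y) for two dealers and two buyers.
dealers = {"d1": (0.0, 0.0), "d2": (5.0, 5.0)}
buyers = {"u1": (1.0, 1.0), "u2": (4.0, 6.0)}

def nearest_dealer(pos):
    """Pick the dealer with the smallest Euclidean distance to pos."""
    return min(dealers, key=lambda d: math.dist(dealers[d], pos))

# Match every buyer to a dealer.
assignment = {u: nearest_dealer(p) for u, p in buyers.items()}
print(assignment)  # {'u1': 'd1', 'u2': 'd2'}
```

Real delivery platforms optimise over many such assignments at once, adding traffic, capacity and timing constraints, but the core idea of matching positions in a database is the same.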
By upgrading traditional industries in this way, great benefits have been achieved. China is leading the way and is at the forefront in this respect: facial recognition, smart speakers, intelligent customer service, etc. In recent years, not only has an increasing number of companies started to apply AI, but AI itself has also become one of the professional directions that candidates in university entrance exams are most anxious about.
According to statistics, there are 40 AI companies in the world with a turnover of over one billion dollars, 20 of them in the United States and as many as 15 in China. In quantitative terms, China is firmly in second place. It should be noted, however, that although these companies have high valuations, their profitability is still limited and most of them may even be loss-making.
The core AI sector should be independent of the information industry, but should increasingly open up to transport, medicine, urban fabric and industries led independently by AI technology. These sectors are already being developed in China.
China accounts for over a third of the world’s AI start-ups. And although the quantity is high, the quality still needs to be improved. First of all, the application scenarios are limited: beyond facial recognition, security and the like, other fields are not easy to address and are exposed to risks such as 1) data insecurity and 2) algorithm insecurity. These two aspects are currently the main factors limiting the development of the AI industry, which is in danger of falling prey to hackers of known origin.
With regard to data insecurity, we know that the effect of AI applications depends to a large extent on data quality, which entails security problems such as the loss of privacy (i.e. State security). If the problem of privacy protection is not solved, the AI industry cannot develop in a healthy way, as it would be working for ‘unknown’ third parties.
When we log into a webpage and are told that the surfers’ privacy is the most important thing for them, this is a lie, as even teenage hackers know programs to violate it; at least China tells us about the laughableness of such politically correct statements.
The second important issue is algorithm insecurity. A so-called insecure algorithm is a model that works under specific conditions and will fail if the conditions change. This is also called unrobustness, i.e. the algorithm’s vulnerability to changes in the environment in which it is tested.
Taking autonomous driving as an example, it is impossible to consider all scenarios during AI training and to deal with new emergencies when unexpected events occur. At the same time, this vulnerability also makes AI systems permeable to attacks, deception and frauds.
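A toy example makes this unrobustness concrete. The Python sketch below shows a hypothetical hard-threshold detector whose decision flips under a tiny perturbation of its input, the same failure mode that adversarial attacks exploit; the detector, threshold and values are all invented for illustration.

```python
def classify(brightness):
    """Toy 'obstacle detector': flags an obstacle above a fixed threshold."""
    return "obstacle" if brightness >= 0.50 else "clear"

clean_input = 0.501              # correctly detected under test conditions
perturbed = clean_input - 0.002  # tiny change, e.g. glare or sensor noise

print(classify(clean_input))  # obstacle
print(classify(perturbed))    # clear -- the decision flips
```

A robust system would need its decision to remain stable under such small, plausible perturbations, which is precisely what current deep learning models cannot yet guarantee.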
The problem of security in AI does not lie in politicians’ empty speeches and words, but needs to be solved from a technical viewpoint. This distinction is at the basis of AI3.
AI3 follows a development path that combines the first-generation knowledge-based AI and the second-generation data-driven AI. It uses the four elements – knowledge, data, algorithms and computing power – to establish a new theory and interpretable, robust methods for a safe, credible and reliable technology.
At the moment, the AI2 characterised by deep learning is still in a phase of growth and hence the question arises whether the industry can accept the concept of AI3 development.
As seen above, AI has been developing for over 70 years and now it seems to be just a “prologue”.
Currently most people are not able to accept the concept of AI3 because everybody was hoping for further advances and steps forward in AI2. Everybody felt that AI could continue to develop by relying on learning and not on processing. The first steps of AI3 in China took place in early 2015 and in 2018.
The AI3 has to solve security problems from a technical viewpoint. Specifically, the approach consists in combining knowledge and data. Some related research has been carried out in China over the past four or five years and the results have also been applied at industrial level. The RealSecure data security platform and the RealSafe algorithm security platform are direct evidence of these successes.
What needs to be emphasised is that these activities can only solve particular security problems in specific circumstances. In other words, the problem of AI security has not yet found a fundamental solution, and it is likely to become a long-lasting topic without a definitive solution since – just to use a metaphor – once the lock is found, there is always an expert burglar. In the future, the field of AI security will be in a state of ongoing confrontation between external offence and internal defence – hence algorithms must be updated constantly and continuously.
The progression of AI3 will be a natural long-term process. Fortunately, however, there is an important AI characteristic – i.e. that every result put on the table always has great application value. This is also one of the important reasons why all countries attach great importance to AI development, as their national interest and real independence are at stake.
With changes taking place around the world and a global economy in deep recession due to Covid-19, the upcoming 14th Five-Year Plan (2021-25) of the People’s Republic of China will be the roadmap for achieving the country’s development goals in the midst of global turmoil.
As AI is included in the aforementioned plan, its development shall also tackle many “security bottlenecks”. Firstly, there is a wide gap in the innovation and application of AI in the field of network security, and many scenarios are still at the stage of academic exploration and research.
Secondly, AI itself lacks a systematic security assessment and there are severe risks in all software and hardware aspects. Furthermore, the research and innovation environment on AI security is not yet at its peak, and the relevant Chinese domestic industry is not yet in the top position and is still gaining experience.
Since 2017, in response to the AI3 Development Plan issued by the State Council, 15 ministries and commissions – including the Ministry of Science and Technology and the Development and Reform Commission – have jointly established an innovation platform. The platform is made up of leading companies in the industry and focuses on open innovation in the AI segment.
At present, thanks to this platform, many achievements have been made in the field of security. Chinese researchers, as the first team in the world to study AI infrastructure from a system-implementation perspective, have found over 100 vulnerabilities in the main machine learning frameworks and their dependent components.
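Many vulnerabilities in machine learning frameworks belong to well-known classes. One illustrative example (a generic sketch, not one of the specific findings mentioned above) is unsafe deserialization: several frameworks have historically loaded model files via Python's pickle format, which can execute arbitrary code during loading, so an untrusted model file is itself an attack surface.

```python
import pickle

class MaliciousModel:
    """A benign stand-in for a booby-trapped model file."""
    def __reduce__(self):
        # On unpickling, pickle calls eval("6 * 7") instead of
        # restoring an object. A real attacker would substitute
        # arbitrary code here (file access, network calls, etc.).
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousModel())   # the "model file" an attacker ships
result = pickle.loads(blob)             # the victim "loads the model"
print(result)                           # -> 42: the payload executed
```

This is why modern framework documentation warns against loading serialized models from untrusted sources and why safer, data-only formats are increasingly preferred.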
The number of vulnerabilities found ranks Chinese researchers first in the world. At the same time, a future innovation plan – developed and released to open up tens of billions of items of security big data – is being studied to promote solutions to those problems that require continuous updates.
The government’s working report promotes academic cooperation and pushes industry and universities to conduct innovative research into three areas: a) AI algorithm security comparison; b) AI infrastructure security detection; c) AI applications in key cyberspace security scenarios.
By means of state-of-the-art theoretical and basic research, we also need to provide technical reserves for the construction of basic AI hardware, open source software platforms (i.e. software whose source code is publicly available and may be freely modified by users) and AI security detection platforms, so as to reduce the risks inherent in AI security technology and ensure the healthy development of AI itself.
With specific reference to security, on March 23 it was announced that the Chinese and Russian Foreign Ministers had signed a joint statement on various current global governance issues.
The statement stresses that the continued spread of the Covid-19 pandemic has accelerated the evolution of the international scene, further unbalanced the global governance system and hampered economic development, while new global threats and challenges have emerged one after another and the world has entered a period of turbulent change. It appeals to the international community to put aside differences, build consensus, strengthen coordination, preserve world peace and geostrategic stability, and promote the building of a more equitable, democratic and rational multipolar international order.
To ensure all this, the independence enshrined in international law is obviously not enough, nor is the possession of a nuclear deterrent. What is needed, instead, is a country’s absolute control over its information security, which in turn orients and directs its weapon systems – systems whose remote control is a coveted prize for the usual suspects.
Factories of the Future Find Growth and Sustainability Through Digitalization
The World Economic Forum announced today the addition of 15 new sites to its Global Lighthouse Network, a community of world-leading manufacturers using Fourth Industrial Revolution technologies to enable bottom-line growth. Despite the COVID-19 pandemic’s unprecedented disruption, 93% of these sites achieved an increase in product output and found new revenue streams.
Notably, these leading innovators created new revenue streams while driving environmental sustainability – 53% are seeing measurable and marked environmental sustainability benefits. Some have seen a near-total reduction in CO2 emissions, along with double-digit increases in efficiency and reductions in material use. The new report, Reimagining Operations for Growth, outlines how manufacturers accomplished these results. Their CEOs will provide more insights at the Lighthouses Live event on 17 March at 14.00 CET, featuring keynote speakers Satya Nadella, CEO of Microsoft, and Alex Gorsky, chairman and CEO of Johnson & Johnson. See below for a full list of the new Lighthouses and their achievements.
The Lighthouse Network and its 69 sites are a platform to develop, replicate and scale innovations, creating opportunities for cross-company learning and collaboration, while setting new benchmarks for the global manufacturing community.
While 74% of companies remained stuck in pilot purgatory in 2020, research based on learnings from the network reveals that scalable Fourth Industrial Revolution technologies are key to long-term growth. By fully embracing agile ways of working, these manufacturers have been able to respond to disruption and ongoing shifts in supply and demand along their production network and value chains. They also prioritized workforce development – reskilling and upskilling employees for advanced manufacturing jobs – at the same pace and scale.
The new Lighthouses:
Bosch (Suzhou, China): As a role model of manufacturing excellence within the group, Bosch Suzhou deployed a digital transformation strategy in manufacturing and logistics, reducing manufacturing costs by 15% while improving quality by 10%.
Foxconn (Chengdu, China): Confronted with fast-growing demand and labour skill scarcity, Foxconn Chengdu adopted mixed reality, artificial intelligence (AI) and internet of things (IoT) technologies to increase labour efficiency by 200% and improve overall equipment effectiveness by 17%.
HP Inc. (Singapore): Facing an increase in product complexity and labour shortages leading to quality and cost challenges, along with a move at the country level to focus on higher-value manufacturing, HP Singapore embarked on its Fourth Industrial Revolution journey to transform its factory from being manual, labour intensive and reactive to being highly digitized, automated and driven by AI, improving its manufacturing costs by 20%, and its productivity and quality by 70%.
Midea (Shunde, China): To expand its e-commerce presence and overseas market share, Midea invested in digital procurement, flexible automation, digital quality, smart logistics and digital sales to reduce product costs by 6%, shorten order lead times by 56% and cut CO2 emissions by 9.6%.
ReNew Power (Hubli, India): Facing exponential asset growth and rising competitiveness from new entrants, ReNew Power, India’s largest renewables company, developed Fourth Industrial Revolution technologies, such as proprietary advanced analytics and machine learning solutions, to increase the yield of its wind and solar assets by 2.2%, reduce downtime by 31% without incurring any additional capital expenditure, and improve employee productivity by 31%.
Tata Steel (Jamshedpur, India): Facing operational KPI stagnation and an impending loss of captive raw material advantage, Tata Steel Jamshedpur’s 110-year-old plant with deeply rooted cultural and technology legacies deployed multiple Fourth Industrial Revolution technologies, such as machine learning and advanced analytics in procurement to save 4% on raw material costs, and prescriptive analytics in production and logistics planning to reduce the cost of serving customers by 21%.
Tsingtao Brewery (Qingdao, China): Facing growing consumer expectations for personalized, differentiated and diverse beers, Tsingtao Brewery rethought its use of smart digital technologies along its value chain to enable its 118-year-old factory to meet consumer needs, reducing customized order and new product development lead times by 50%. As a result, it increased its share of customized beers to 33% and revenue by 14%.
Wistron (Kunshan, China): In response to high-mix and low-volume business challenges, Wistron leveraged AI, IoT and flexible automation technologies to improve labour, asset and energy productivity, not only in production and logistics but also in supplier management, improving manufacturing costs by 26% while reducing energy consumption by 49%.
Henkel (Montornès, Spain): To drive further improvements in productivity and boost the company’s sustainability, Henkel built on its digital backbone to scale Fourth Industrial Revolution technologies linking its cyber and physical systems across the Montornès plant, reducing costs by 15% and accelerating its time to market by 30% while improving its carbon footprint by 10%.
Johnson & Johnson Consumer Health (Helsingborg, Sweden): In a highly regulated healthcare and fast-moving consumer goods environment, J&J Consumer Health addressed customer needs through increased agility using digital twins, robotics and high-tech tracking and tracing to enable 7% product volume growth, with 25% accelerated time to market and 20% cost of goods sold reduction. It made further investments in connecting green tech through Fourth Industrial Revolution technologies to become Johnson & Johnson’s first ever CO2-neutral facility.
Procter & Gamble (Amiens, France): P&G Amiens, a plant with a steady history of transforming operations to manufacture new products, embraced Fourth Industrial Revolution technologies to accommodate a consistent volume increase of 30% over three years through digital twin technology as well as digital operations management and warehouse optimization. This led to 6% lower inventory levels, a 10% improvement in overall equipment effectiveness and a 40% reduction in scrap waste.
Siemens (Amberg, Germany): To achieve its productivity goals, this site implemented a structured lean digital factory approach, deploying smart robotics, AI-powered process controls and predictive maintenance algorithms to achieve 140% factory output at double product complexity without an increase in electricity consumption or additional resources.
STAR Refinery (Izmir, Turkey): To maintain a competitive edge within the European refinery industry, Izmir STAR Refinery was designed and built to be “the technologically most advanced refinery in the world”. Leveraging more than $70 million investments in advanced technologies (e.g., asset digital performance management, digital twin, machine learning) and organizational capabilities, STAR was able to increase diesel and jet yield by 10% while reducing maintenance costs by 20%.
Ericsson (Lewisville, USA): Faced with increasing demand for 5G radios, Ericsson built a US-based, 5G-enabled digital native factory to stay close to its customers. Leveraging agile ways of working and a robust IIoT architecture, the team was able to deploy 25 use cases in 12 months. As a result, it increased output per employee by 120%, reduced lead time by 75% and reduced inventory by 50%.
Procter & Gamble (Lima, USA): A shift in consumer trends meant more complex packaging and an increased number of products that had to be outsourced. To reverse the tide, P&G Lima invested in supply chain flexibility, leveraging digital twins, advanced analytics and robotic automation. This resulted in an acceleration of speed to market for new products by a factor of 10, an increase in labour productivity by 5% year on year, and plant performance that was two times better than competitors in avoiding stock-outs during the year.
“This is a time of unparalleled industry transformation. The future belongs to those companies willing to embrace disruption and capture new opportunities. Today’s disruptions, despite their challenges, are a powerful invitation to re-envision growth. The lighthouses are illuminating the future of manufacturing and the future of the industry,” said Francisco Betti, Head of Shaping the Future of Advanced Manufacturing and Production, World Economic Forum.
Enno de Boer, Partner, McKinsey & Company, and Global Lead, Manufacturing, said: “The 69 Lighthouse manufacturers open a window into the future of operations. Though no industry is immune from digital transformation, four sectors are resetting benchmarks – Advanced Industries, Consumer Packaged Goods, Pharmaceutical and Medical products, and Heavy Industries. We are seeing a paradigm shift emerge, from reducing cost to more focus on enabling growth and environmental sustainability. The Lighthouses are proving that unlocking smart capacity through digital technologies is more effective than spending on capital infrastructure.”
The goal of the Global Lighthouse Network is to share and learn from best practices, support new partnerships and help other manufacturers deploy technology, adopt sustainable solutions and transform their workforces at pace and scale. The extended network of “Manufacturing Lighthouses” will be officially recognized at Lighthouse Live: Reimagining Operations for Growth at 14.00 CET/09.00 EST 17 March.
Together with a diverse group of experts and innovators, the event aims to initiate, accelerate and scale up entrepreneurial solutions to tackle climate change and advance sustainable development.