On the afternoon of December 11, 2020, the Political Bureau of the Central Committee of the Communist Party of China (CPC) held its 26th collective study session, devoted to national security. On that occasion, the General Secretary of the CPC Central Committee, Xi Jinping, stressed that national security work is of great importance both in the Party’s management of State affairs and in ensuring that the country remains prosperous and its people live in peace.
To strengthen national security, China needs to adhere to the general concept of national security; to seize and make good use of a strategically important and propitious period for the country’s development; and to integrate national security into every aspect of the CPC’s and the State’s activity, taking it into account when planning economic and social development. In other words, it needs to build a security model that promotes international security and world peace and offers strong guarantees for the construction of a modern socialist country.
In this regard, a new cycle of AI-driven technological revolution and industrial transformation is under way in China. Driven by new theories and technologies such as the Internet, mobile services, big data, supercomputing, sensor networks and brain science, AI offers new capabilities such as cross-sectoral integration, human-machine collaboration, open intelligence and autonomous control, and it has a major and far-reaching impact on economic development, social progress, global governance and other fields.
In recent years, China has deepened its understanding of AI’s significance and development prospects in many important fields. Accelerating the development of a new generation of AI is an important strategic lever for meeting the challenge of global technological competition.
What is the current state of AI development in China? What are the current development trends? How will the safe, orderly and healthy development of the industry be guided and led in the future?
The current gap between China’s AI development and the international state of the art is not very wide, but the quality of its enterprises must be “matched” with their quantity. For this reason, efforts are being made to expand application scenarios while enhancing data and algorithm security.
The concept of third-generation AI is already advancing, and there are hopes of solving the security problem through technical means rather than through policies and regulations alone – i.e. rather than through mere talk.
AI is a driving force of the new stage of technological revolution and industrial transformation. Accelerating the development of a new generation of AI is a strategic issue for China as it seizes new opportunities in industrial transformation.
It is commonly argued that AI has gone through two generations so far. First-generation AI (AI1) is knowledge-based, also known as “symbolism”, while second-generation AI (AI2) is data-driven, relying on big data and deep learning.
AI began to be developed in the 1950s with the famous test devised by Alan Turing (1912-54), and in 1978 the first studies on AI started in China. Progress in the AI1 era, however, was relatively modest. The real advances have mainly come over the last 20 years – hence AI2.
AI is best known in the traditional information industry, typically among Internet companies. These have acquired and accumulated large numbers of users in the course of their development, and have then built corresponding patterns or profiles from this data – the so-called “knowledge graph of user preferences”. Taking product delivery as an example, tens or even hundreds of millions of data points – users’ and dealers’ positions, as well as information about the location of potential buyers – are incorporated into a database and then matched and optimised through AI algorithms: all this obviously enhances the efficiency of trade and the speed of delivery.
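The matching step described above can be sketched in a few lines. The example below is a purely illustrative toy, not any company’s actual system: the coordinates, the greedy nearest-dealer rule and the one-order-per-dealer capacity are all assumptions made for demonstration, whereas production systems solve far larger optimisation problems.

```python
import math

# Toy illustration: match each buyer's order to the nearest dealer
# that still has capacity, using a simple greedy rule.
# All names, positions and capacities are hypothetical.

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def match_orders(orders, dealers, capacity=1):
    """Greedy assignment: each order goes to the closest dealer
    with remaining capacity. Returns {order_id: dealer_id}."""
    remaining = {d: capacity for d in dealers}
    assignment = {}
    for oid, opos in orders.items():
        candidates = [d for d, cap in remaining.items() if cap > 0]
        if not candidates:
            break  # no dealer left with capacity
        best = min(candidates, key=lambda d: distance(opos, dealers[d]))
        assignment[oid] = best
        remaining[best] -= 1
    return assignment

orders = {"o1": (0, 0), "o2": (5, 5)}
dealers = {"d1": (1, 0), "d2": (6, 6)}
print(match_orders(orders, dealers))  # → {'o1': 'd1', 'o2': 'd2'}
```

A real platform would replace the greedy rule with a global optimiser (an assignment or routing solver), but the input–match–deliver pipeline is the same.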
By upgrading traditional industries in this way, great benefits have been achieved. China is at the forefront in this respect: facial recognition, smart speakers, intelligent customer service, etc. In recent years, not only has an increasing number of companies started to apply AI, but AI itself has also become one of the professional tracks most anxiously competed for by candidates in university entrance exams.
According to statistics, there are 40 AI companies in the world valued at over one billion dollars, 20 of them in the United States and as many as 15 in China. In quantitative terms, China ranks a firm second. It should be noted, however, that although these companies are highly valued, their profitability is still limited and most of them may even be loss-making.
The core AI sector, however, should not remain confined to the information industry: it should increasingly open up to transport, medicine, the urban fabric and other industries led by AI technology itself. These sectors are already being developed in China.
China accounts for over a third of the world’s AI start-ups. Although their quantity is high, their quality still needs to improve. First of all, application scenarios are limited: beyond facial recognition, security and the like, other fields are not easy to enter and are exposed to risks, namely 1) data insecurity and 2) algorithm insecurity. These two aspects are currently the main factors limiting the development of the AI industry, which risks falling prey to hackers of known origin.
With regard to data insecurity, we know that the effect of AI applications depends to a large extent on data quality, which entails security problems such as the loss of privacy (and hence of State security). If the problem of privacy protection is not solved, the AI industry cannot develop in a healthy way, as it would in effect be working for “unknown” third parties.
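One well-established technical route to the privacy problem just mentioned is differential privacy, which releases aggregate statistics with calibrated noise so that no individual record can be inferred from the output. The sketch below is a minimal illustration of that general idea, not a description of any specific platform discussed here; the records, the counting query and the epsilon value are assumptions for the example.

```python
import math
import random

# Minimal sketch of epsilon-differential privacy for a counting query.
# Purely illustrative; not tied to any specific product or platform.

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    u1 = 1.0 - random.random()  # in (0, 1], avoids log(0)
    u2 = 1.0 - random.random()
    return scale * (math.log(u1) - math.log(u2))

def private_count(records, predicate, epsilon):
    """Noisy count satisfying epsilon-differential privacy.
    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so the required noise scale is 1 / epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

users = [{"age": 17}, {"age": 25}, {"age": 34}]
# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
print(private_count(users, lambda u: u["age"] >= 18, epsilon=0.5))
```

The point of the mechanism is that an analyst learns the approximate count while the contribution of any single user stays statistically deniable.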
When we log into a webpage and are told that the site’s top priority is its users’ privacy, this is a lie: even teenage hackers know programs that can violate it. China, at least, is candid about the hollowness of such politically correct statements.
The second important issue is algorithm insecurity. An insecure algorithm is a model that works under the specific conditions it was built for and fails when those conditions change. This is also called a lack of robustness, i.e. the algorithm’s vulnerability to shifts in its test environment.
Taking autonomous driving as an example, it is impossible to cover every scenario during AI training, and the system cannot handle new emergencies when unexpected events occur. At the same time, this vulnerability makes AI systems open to attack, deception and fraud.
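The fragility described above can be shown even on the simplest possible model. The toy below is entirely illustrative (weights, bias and input are invented numbers): a tiny shift of each input feature, applied in the sign direction used by fast-gradient-sign-style attacks, flips a linear classifier’s decision. The same mechanism, scaled up, underlies adversarial attacks on the deep networks used in settings such as autonomous driving.

```python
# Toy adversarial example: a fixed linear "classifier" and an
# FGSM-style perturbation that flips its decision with a small
# input change. All numbers are hypothetical.

def predict(weights, bias, x):
    """Linear decision rule: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def fgsm_perturb(weights, x, eps):
    """Move each feature by eps against the sign of its weight,
    i.e. in the direction that lowers the score fastest."""
    sign = lambda w: 1 if w > 0 else (-1 if w < 0 else 0)
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

weights, bias = [0.5, -0.3, 0.8], -0.1
x = [0.4, 0.2, 0.1]                      # original input: class 1
adv = fgsm_perturb(weights, x, eps=0.1)  # each feature shifted by 0.1
print(predict(weights, bias, x), predict(weights, bias, adv))  # → 1 0
```

For a linear model the weight signs play the role of the loss gradient; on a neural network the attacker computes that gradient by backpropagation, but the one-small-step-flips-the-label effect is the same.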
The problem of security in AI does not lie in politicians’ empty speeches and words, but needs to be solved from a technical viewpoint. This distinction is at the basis of AI3.
AI3 follows a development path that combines the first generation’s knowledge-based AI with the second generation’s data-driven AI. It uses four elements – knowledge, data, algorithms and computing power – to establish a new theory, together with interpretable and robust methods, for a safe, credible and reliable technology.
At the moment, AI2, characterised by deep learning, is still in a growth phase, and hence the question arises whether the industry can accept the concept of AI3 development.
As seen above, AI has been developing for over 70 years, and yet what has happened so far still seems a mere “prologue”.
Currently most people cannot yet accept the concept of AI3, because everybody was hoping for further advances within AI2: the feeling was that AI could continue to develop by relying on learning alone rather than on processing. The first steps towards AI3 were taken in China in early 2015 and in 2018.
AI3 has to solve security problems from a technical viewpoint. Specifically, the approach consists in combining knowledge and data. Related research has been carried out in China over the past four or five years, and the results have also been applied at industrial level: the RealSecure data security platform and the RealSafe algorithm security platform are direct evidence of these successes.
It must be emphasised that these activities can only solve particular security problems in specific circumstances. In other words, the problem of AI security has not yet found a fundamental solution, and it is likely to remain a long-standing topic without a definitive answer since – to use a metaphor – for every lock there is always an expert burglar. In the future, the field of AI security will be in a state of ongoing confrontation between external offence and internal defence; hence algorithms must be updated constantly.
The progression of AI3 will be a natural long-term process. Fortunately, however, there is an important AI characteristic – i.e. that every result put on the table always has great application value. This is also one of the important reasons why all countries attach great importance to AI development, as their national interest and real independence are at stake.
With changes taking place around the world and a global economy in deep recession due to Covid-19, the upcoming 14th Five-Year Plan (2021-25) of the People’s Republic of China will be the roadmap for achieving the country’s development goals in the midst of global turmoil.
As AI is included in the aforementioned plan, its development will also have to tackle many “security bottlenecks”. Firstly, there is a wide gap in the innovation and application of AI in the field of network security, and many scenarios are still at the stage of academic exploration and research.
Secondly, AI itself lacks a systematic security assessment, and severe risks persist across both software and hardware. Furthermore, the research and innovation environment for AI security is not yet mature, and the relevant Chinese domestic industry, still gathering experience, has not yet reached the top position.
Since 2017, in response to the New Generation AI Development Plan issued by the State Council, 15 ministries and commissions – including the Ministry of Science and Technology and the Development and Reform Commission – have jointly established an innovation platform. It is made up of leading companies in the industry and focuses on open innovation in the AI segment.
At present, thanks to this platform, many achievements have been made in the field of security. Chinese researchers – the first team in the world to study AI infrastructure from a system-implementation perspective – have found over 100 vulnerabilities in mainstream machine learning frameworks and their dependent components.
This number of vulnerabilities ranks Chinese researchers first in the world. At the same time, a future innovation plan – developed and released to open up tens of billions of items of security big data – is being studied to promote solutions to problems that require continuous updates.
The government’s work report promotes academic cooperation and pushes industry and universities to conduct innovative research into three aspects: 1) AI algorithm security comparison; 2) AI infrastructure security detection; 3) AI applications in key cyberspace security scenarios.
By means of state-of-the-art theoretical and basic research, we also need to provide technical reserves for the construction of basic AI hardware, open-source software platforms (i.e. programmes whose source code is openly licensed and can be freely studied and modified by users) and AI security detection platforms, so as to reduce the risks inherent in AI security technology and ensure the healthy development of AI itself.
With specific reference to security, on March 23 it was announced that the Chinese and Russian Foreign Ministers had signed a joint statement on various current global governance issues.
The statement stresses that the continued spread of the Covid-19 pandemic has accelerated the evolution of the international scene, further unbalanced the global governance system and affected the process of economic development, while new global threats and challenges have emerged one after another and the world has entered a period of turbulent change. It appeals to the international community to put aside differences, build consensus, strengthen coordination, preserve world peace and geostrategic stability, and promote the building of a more equitable, democratic and rational multipolar international order.
To ensure all this, the independence enshrined in international law is obviously not enough, nor is the possession of a nuclear deterrent. What is needed, instead, is a country’s absolute control over its information security, which in turn orients and directs weapon systems – whose remote control is the coveted prey of the usual suspects.