The Development of Artificial Intelligence in China: Conclusions


Artificial Intelligence affects culture in many ways: AI technology can enrich human knowledge, language and cultural life. AI education therefore needs to be established and developed, since the dissemination and popularisation of AI knowledge is itself a fundamental part of building that knowledge.

Scientific dissemination and popularisation involve two aspects. On the one hand, disseminating the basic knowledge of Artificial Intelligence to the public, enabling people to understand AI objectively and correctly and to support related activities. On the other hand, popularising the basic knowledge of Artificial Intelligence among young people, nurturing and cultivating their interest in the subject, and even discovering and training a group of budding scholars in AI science and technology. These are important guarantees for the sustainable development of Artificial Intelligence. Furthermore, such outreach is an important step towards making Artificial Intelligence an angel rather than a devil, and the scientific popularisation work directed at the general public in China is already playing a considerable role.

Secondly, the country secures strong support from the State and from enterprises: a scientific basis for Artificial Intelligence is established and an exemplary role is played in spreading its benefits. Scientific and technological staff and teachers at all levels are encouraged, and the publication of popular-science works, widely disseminated among citizens, is supported. In this regard, the publication of scientific journals on Artificial Intelligence is important for presenting development trends, application examples and scientific knowledge, and for illustrating AI developments at home and abroad to Chinese youth. Various AI technology competitions and summer and winter camps are organised on a standardised basis, extending public interest in the subject down to primary schools in particular. Online AI competitions for university students and for primary and secondary schools across the country further spread a healthy culture of Artificial Intelligence.

Furthermore, in the development of AI culture and technology in China, special attention is paid to the role of academic groups at all levels, so that these organisations can play a special role in the dissemination of AI knowledge and the development of AI culture.

Although, as with any new technology, Artificial Intelligence brings economic benefits to creators, vendors and users, its development has caused, or is about to cause, a number of problems, concerns and regrets for some people. These problems relate to employment, changes in the social structure, changes in ways of thinking and in concepts, psychological threats and the danger of the technology getting out of control. Some people fear that AI technology may deprive them of their jobs and lead to unemployment, or that the intelligence of robots may surpass that of humans and threaten human security. These are all social issues worthy of great attention, as they affect social stability and harmony.

The sociological issue of Artificial Intelligence is on the country’s agenda. Government departments, scientific research institutes and academic groups in the People’s Republic of China incorporate sociological research on Artificial Intelligence into the corresponding plans and study countermeasures and methods. Some possible negative effects or new problems of Artificial Intelligence require attention: the use of AI technology to perpetrate financial crimes (so-called “intelligent crime”, committed without bloodshed or physical theft of property), for example, and smart driving vehicles, which need corresponding laws and traffic regulations. Relevant policies, case law and regulations are therefore established to avoid possible risks and ensure the positive effects of Artificial Intelligence. Only when Artificial Intelligence is well implemented and understood can we ensure that AI is not abused and remains an angel rather than a devil.

Furthermore, as mentioned above, Artificial Intelligence has changed the social structure: collaboration and intelligent human-machine coexistence will become the new normal of the human social structure, which will certainly have a momentous impact on human society.

After sixty years of development, Artificial Intelligence internationally has made great progress and is currently showing explosive growth. In recent years there has been an unprecedentedly favourable development environment for Artificial Intelligence both in the People’s Republic of China and abroad, and new AI ideas and technologies have sprung up like mushrooms after the rain. In general terms, however, Artificial Intelligence is still at an early stage of development: it is far from able, even in theory, to threaten the survival of human beings, but its growing presence in society should be fully appreciated.

For historical reasons, Chinese Artificial Intelligence started late and suffered long detours. After the reform and opening-up process began in the late 1970s, however, Chinese Artificial Intelligence gradually embarked on a broad path of development. It has now ushered in a springtime of development and is preparing for major changes and innovations, which will surely make a historic contribution to China’s modernisation.

As a key technology of the cyber age, Artificial Intelligence will increasingly become the engine of a new cycle of industrial revolution, profoundly influencing international competition and the country’s competitiveness model. China has seized national strategic opportunities such as Internet+, Made in China 2025 and Artificial Intelligence+, as well as the historic opportunity of the second machine revolution. It has vigorously developed AI technology and industries, injecting momentum into the new normal of the economy and changing ways of thinking in the process.

In China, the development trend of international Artificial Intelligence is systematically tracked and planned for, based on the actual needs of domestic social development; relevant national resources are coordinated and integrated, and development goals are scientifically set.

It is absolutely essential to respect and explore the law of AI development, to recognise the development situation, to identify gaps, to clarify the direction of efforts, to catch up with the international advanced level and provide a positive contribution to the development of international Artificial Intelligence.

With a view to developing AI technology and industry, China’s past, present and future experts take decisions, build confidence, keep an open mind, persevere, exercise patience and pursue meticulous, original and attentive work, so as to persuade people across the country of the urgency of pursuing Artificial Intelligence as a primary basis for growth, free from fear.

It is believed that, faced with the opportunity of Artificial Intelligence development, China’s Ministries and Departments at all levels and AI developers are surely capable of seizing opportunities, creating new national splendour and welcoming the new era of Artificial Intelligence. AI technology and products are all around us: the AI era has already begun.

With specific reference to scientific and technological innovation, the General Secretary of the Communist Party of China, Xi Jinping, has always stressed the need to strive to build a world power in science and technology: the further the planet advances technologically, the more that advance serves as a guiding light for China’s scientific research and exploration in all fields, including Artificial Intelligence. In response to Xi Jinping’s repeated appeals on the subject, the country is mobilising the powerful engine of technological innovation and continuing to advance towards the goal of becoming a leading nation in AI.

Advisory Board Co-chair Honoris Causa Professor Giancarlo Elia Valori is an eminent Italian economist and businessman. He holds prestigious academic distinctions and national orders. Mr. Valori has lectured on international affairs and economics at the world’s leading universities, such as Peking University, the Hebrew University of Jerusalem and Yeshiva University in New York. He currently chairs the “International World Group”; he is also honorary president of Huawei Italy and economic adviser to the Chinese giant HNA Group. In 1992 he was appointed Officier de la Légion d’Honneur de la République Française, with the citation “A man who can see across borders to understand the world”, and in 2002 he received the title of “Honorable” from the Académie des Sciences de l’Institut de France.

The AI Wars Between Nations: A New Arms Race or a Catalyst for Collaboration?


In the 20th century, the nuclear arms race defined the global power struggle. Today, the race to develop the most advanced artificial intelligence (AI) capabilities is driving the narrative of international competition. Nations are investing billions in R&D, striving to field superior AI technology. This ‘AI war’ evokes a range of emotions, from fear of robotic armies to concerns over subtle deep-fake propaganda. But is this war only about outdoing each other, or can it spur collaborative breakthroughs?

The Nature of the AI Wars

Unlike traditional arms races, the AI race isn’t just about the creation of lethal machinery. It encompasses economic, technological, ethical, and military dimensions. From improving healthcare diagnostics to forecasting economic shifts and enhancing national security, the potential applications of AI are boundless.

However, as nations rush to outpace each other, concerns arise. There’s the risk of under-regulated AI technologies causing unintentional harm, be it through biased algorithms or intrusive surveillance. On the military front, the emergence of autonomous lethal weapons can redefine the rules of warfare, making conflicts more impersonal and potentially more devastating.

Economic and Strategic Implications

Nations that lead in AI will undeniably hold significant geopolitical leverage. AI has the potential to reshape economies, making certain industries obsolete while giving rise to new ones. A dominant position in AI can mean economic prosperity, more jobs, and greater influence in international decisions.

Moreover, the strategic advantages are evident. AI can enhance cybersecurity, predict and mitigate threats, and offer a strategic edge in negotiations and diplomacy by providing real-time insights.

Collaboration Over Competition?

Despite the competitive undertones, the AI race holds enormous potential for collaborative growth. Just as the Space Race of the Cold War era eventually led to joint space missions and the International Space Station, the AI race can pave the way for shared research, ethical standards, and mutual growth.

Collaborative efforts can help address some of AI’s biggest challenges, including:

1. Ethical Guidelines: By working together, nations can create universal standards and ethics for AI development, ensuring technologies respect human rights and democratic values.
 
2. Shared Research: Pooling resources can accelerate breakthroughs in areas like healthcare, climate change, and energy, benefiting humanity at large.

3. Global Security: Jointly developed frameworks can regulate the deployment of AI in warfare, preventing uncontrolled escalation.

Conclusion

The AI wars between nations, while driven by a desire for supremacy, don’t have to end in zero-sum outcomes. The very nature of AI – its reliance on vast amounts of data, the benefits of diverse perspectives, and its global applications – makes collaboration not only advantageous but also essential.

As the world treads this new frontier, the hope is that nations recognize the transformative power of AI not just as a tool of dominion but as a means to forge a more interconnected, prosperous, and harmonious future.

From our partner RIAC

Battle for Semiconductors: Will There Be Winners?


On the eve of U.S. Secretary of Commerce Gina Raimondo’s visit to China on August 27-29, the White House took some conciliatory steps. The U.S. removed 27 Chinese companies from the so-called Unverified list compiled by the Bureau of Industry and Security at the U.S. Department of Commerce. This list includes companies for which the agency cannot verify information on their transactions and whose exports from the U.S. are restricted in some way. While the Chinese Foreign Ministry certainly welcomed that move, the basis of U.S. technology policy toward China remains unchanged. The regional fragmentation of the semiconductor industry will only increase over time. However, the degree of supply chain interdependence and global division of labor in this area is so great that the creation of technological regional blocs under the influence of geopolitical considerations will inevitably lead to supply chain disruptions, multiplied costs, and possibly a slowdown in the growth of technological capabilities for all parties.

Key players

Semiconductors form the backbone of all modern electronics. They are used not only in computers and smartphones, but also in household appliances, cars, children’s toys, military equipment, etc. In other words, semiconductor circuits are indispensable in the production of almost any modern good containing electronic components. Historically, the semiconductor industry emerged in the United States in the mid-1950s, where the first operable integrated circuit was invented and produced. Up until the mid-1980s, Silicon Valley in the U.S. state of California retained undisputed global leadership: the U.S. share of global semiconductor production exceeded 50%. Gradually, however, under the sway of globalization and the international division of labor, and with the growing technological complexity of semiconductor circuitry, the production chain lengthened and became dispersed across different countries. In the mid-1980s, Japan took over some of the key production processes in this area. Later, Taiwan secured a strong position in the mass final production of semiconductors. Finally, in the 2000s, the Netherlands became the absolute leader (and later a monopolist) in the production of the advanced equipment required for lithography of semiconductor circuitry on silicon wafers.

As a result, the current production chain may involve thousands of suppliers scattered around the world, many of them absolute monopolists in their markets. For example, U.S.-based companies such as Cadence Design Systems and Synopsys control 90% of the market for electronic design automation (EDA) tools, essential at the initial stage of microchip design. The Netherlands-based ASML is the world’s only supplier of equipment for extreme ultraviolet (EUV) lithography on silicon wafers. Japan’s Tokyo Electron supplies state-of-the-art equipment for plasma etching, a necessary process for removing layers of material from the wafer surface after lithography. Taiwanese companies account for more than 50% of the entire global semiconductor end-market, as well as more than 90% of the market for advanced chips made in the 10nm process and below. Meanwhile, South Korean manufacturers control up to 64% of global production of dynamic random-access memory (DRAM) chips.

It is important to realize that China, too, plays a key role in the global semiconductor industry. First, the country is the world’s largest consumer of chips, as it dominates the global production of electronic products: approximately one-third of all consumer electronics in the world are manufactured in China. In 2022, semiconductors worth $573.5 billion were produced globally, with China accounting for 53.7% of all sales of these products. It is natural that global chip makers have sought to localize production closer to their markets. Thus the largest Taiwanese contract manufacturer, TSMC, as well as South Korea’s SK Hynix and Samsung, have their own production facilities in China. For example, Chinese plants produce up to 40% of the total volume of NAND chips (non-volatile memory chips) manufactured by Samsung and 40% to 50% of the DRAM chips put out by SK Hynix. In addition, it has been profitable for global manufacturers to locate less technologically advanced but more labor-intensive production stages in China: the country still accounts for more than a quarter of the global chip testing and packaging market, and Intel, Texas Instruments and many others have located such facilities there. Finally, China is the largest producer and supplier of the rare metals (gallium, germanium, etc.) required for semiconductor production.

From globalization to technological sovereignty

Therefore, the semiconductor industry has become one of the most globally dispersed production sectors. At present, no country can produce a microchip from start to finish relying solely on its own resources and production base. For a time, this global division of labor suited everyone. Washington had long turned a blind eye to China’s use of civilian technology to develop its military-industrial complex, given how deeply civilian and military capacity interpenetrate. The leakage of U.S. technologies to China worried the United States only occasionally, and then chiefly in the context of those technologies being transferred to Iran, which had by then been under sanctions for decades. That negligence lasted until the mid-2010s, when China published first its Made in China 2025 import-substitution program for key technologies and then its Next Generation Artificial Intelligence Development Plan, which recognized the leading role of emerging technologies, including artificial intelligence (AI), in achieving global dominance and developing military potential. It was then that the issue of China’s technological development and the threats it posed to U.S. national interests came to the fore in America’s political and expert community.

Strict export control measures against Chinese technology companies were first adopted in 2018, when the U.S. accused the telecommunications company ZTE of supplying Iran with products containing U.S. semiconductor technology in circumvention of U.S. sanctions. The United States banned ZTE from purchasing chips created with American technology. This brought the company to the brink of bankruptcy, as ZTE had no alternatives: as discussed above, U.S. technology is used, one way or another, in the production of any modern chip. The ZTE case was settled rather quickly after personal talks between Chinese President Xi Jinping and U.S. President Donald Trump. The company was ordered to pay a $1.3 billion fine, replace its top management and admit U.S. compliance officers into its team. It is important to realize that the U.S. sanctions against ZTE were imposed even before the full-scale trade war between the U.S. and China broke out. Yet this was a turning point for both countries.

The Americans realized that they held powerful technological leverage. China, in turn, realized its own vulnerability. That same year, in 2018, Keji Ribao (Science and Technology Daily), a newspaper affiliated with China’s Ministry of Science and Technology, began publishing a series of articles reviewing Beijing’s vulnerabilities in key fundamental technologies. Chinese officials also began to speak more frequently about the need to ensure technological sovereignty.

The U.S. then became more active in using this technological leverage to apply pressure. In 2019, the Trump administration put Huawei on the U.S. Department of Commerce’s blacklist; among other things, the sale of U.S. chips and other components to Huawei was banned, as was its use of the Android OS. However, this measure did not have a serious impact on Huawei’s business.

First, the Trump administration immediately introduced temporary export permits for Huawei so as not to create economic shocks for American companies, which in 2018 alone had supplied Huawei with $11 billion worth of products.

Second, nothing could deter Huawei from procuring critical components in third countries. Just a few months later, the company announced its own Harmony OS as an alternative to Android, and by the end of the year it reported revenue growth of 18%. The following year, the Trump administration extended the sanctions imposed on Huawei: the company fell under the so-called Foreign Direct Product Rule, which prohibits the supply of equipment and components, including from third countries, if they contain American technology. Huawei was thus cut off from advanced chips altogether, because contractors in third countries, fearing secondary U.S. sanctions, refused to cooperate with it. As a result, Huawei was forced to sell Honor, its smartphone division.

In the meantime, the sanctions imposed on China by the Trump administration were fragmented. A grace period was introduced for most export restrictions, which in effect deferred the enforcement of sanctions for a long period. Moreover, Trump’s technological crackdown on China was a pinpoint strike: Huawei, whose name was already on everyone’s lips, including among U.S. political figures, was hit hard, while other Chinese technology companies continued to grow relatively unhindered. Sales of Chinese chip makers and developers rose 18% to $150 billion in 2021. China’s largest contract manufacturer, SMIC, reported sales growth. Although SMIC did not escape being blacklisted by the U.S. Department of Commerce, this did not prevent the company from mastering the production of chips in the 14nm process. In addition, according to some media reports, SMIC was able to master the 7nm process via reverse engineering of a chip from TSMC. The Chinese memory chip maker YMTC has caught up with its American and Korean competitors, developing its own fourth-generation 3D NAND chip consisting of 232 layers. Apple was even going to make YMTC the exclusive supplier of memory chips for the iPhone.

War of technologies

All these factors forced the U.S. to take a new look at the technology standoff, given that China’s progress in semiconductors has since been tied directly to U.S. national interests. In its technology policy, Washington began to focus on two fronts simultaneously: first, restricting China’s access to advanced technologies as much as possible, and second, stepping up government support for its own innovations and encouraging the repatriation of production capacity to the American soil. Washington understands that semiconductors are the basis both for the development of civilian technologies (and hence economic growth) and for the development of modern weapons systems, i.e., ensuring the interests of national defense.

In October 2022, the Biden administration imposed unprecedented export restrictions on China. Under the new rules, U.S. companies are prohibited from supplying China with high-performance chips and computer goods containing such chips (e.g., the GPUs used to develop AI systems). In addition, exports of components used in the manufacture of supercomputers or in the development of semiconductor manufacturing have been banned, and supplies of certain equipment for chip production are prohibited. The Foreign Direct Product Rule now applies to 28 Chinese companies (a list that includes all of China’s leading technology companies as well as specialized research institutes). Furthermore, third-country companies operating in China require special licenses from the U.S. Department of Commerce to supply logic chips with FinFET (fin field-effect transistor) architecture at 14nm and below, DRAM at 18nm and below, and NAND flash with 128 layers or more, if such products are manufactured using U.S.-developed technology. Finally, professionals with U.S. citizenship or green cards are prohibited from performing certain work that directly or indirectly supports the development and production of semiconductors at certain facilities in China.

In August 2023, the U.S. released a draft of new measures aimed at restricting the flow of U.S. capital into China’s technology sector. If these measures take effect, U.S. private and venture capital investors will be prohibited from investing in Chinese companies that are involved in quantum computing, AI and advanced semiconductors. This being said, a complete ban on investments in the AI industry, as follows from the draft decree of the U.S. President, will apply only to Chinese companies that supply products to enterprises of the military-industrial complex. In other cases, U.S. investors will only need to notify the relevant U.S. regulatory authorities of their intention to invest in respective Chinese companies.

In parallel with the prohibitive measures against China, the U.S. authorities are introducing incentives to develop America’s own competencies in the semiconductor industry and to build up the national production base. In 2022, the CHIPS and Science Act was passed, which envisages the allocation of $52 billion in government subsidies for the development of production within the United States. These subsidies are available to all companies, including those under foreign jurisdiction, that decide to develop semiconductor production in the United States. An important condition for receiving support is that potential recipients must commit not to invest more than $100,000 in China over a 10-year period if those investments would expand existing production capacity in China by 5% or more. It is also prohibited to introduce new product lines or to expand existing production using mature technologies by more than 10%. Companies that fail to meet these terms will have to repay the subsidies provided to them within 10 years.

Living under sanctions

U.S. technology restrictions would have had a very limited impact on China had key semiconductor technology suppliers from other countries not joined them. Considerable U.S. diplomatic effort was therefore aimed at convincing its partners, mainly the Netherlands, South Korea and Japan, to join the technology restrictions, and to a certain extent the U.S. succeeded. On July 23, 2023, Japan introduced export restrictions on 23 types of equipment needed for semiconductor manufacturing. Moreover, unlike the U.S. sanctions, the Japanese barriers apply to equipment required for the production of chips using more mature technologies, starting with the 45nm process. Following Japan, the Netherlands also joined the export control measures: it had already banned supplies of equipment for extreme ultraviolet (EUV) lithography in 2019 and, starting in June 2023, also banned some machines for deep ultraviolet (DUV) lithography. Taken together, these restrictions are meant to deny China the opportunity to rapidly develop its own semiconductor industry.

Export bans imposed by the U.S. and its allies are seriously hampering China’s technological development, depriving China of the equipment needed to produce chips. Chinese companies have managed to establish mass production of chips in the 28nm process and are actively mastering the 14nm process. Of course, China cannot produce the most advanced chips, such as those used in the latest generation of smartphones; the bulk of consumer demand for semiconductors, however, falls on chips of previous generations. Importantly, China still produces even these chips using foreign equipment: for example, China bought lithography equipment from ASML even for the 28nm process. The development of such equipment is surely underway in China, but a domestic lithography machine for the 28nm process can only be expected by the end of this year at best. Moreover, Chinese companies do not have sufficient competencies to create design automation tools for the latest generation of electronics. Huawei said this year that it has developed its own EDA tools for creating chips in the 14nm process, but the experimental software and hardware still need to be scaled up for mass production and made interoperable and compatible in process setup.

Consequently, to produce its own chips, even using the mature technologies of previous generations, China needs to build the entire supply chain of raw materials, hardware and software support. No other country at the current stage of technological development has been able to accomplish this incredibly complex and costly feat. China is certainly ready to invest huge amounts of money in semiconductor technology development, but this does not guarantee success. China’s State Semiconductor Development Fund, the so-called Big Fund, has accumulated more than $30 billion, but it has not yet been able to grow a single technology startup into a competitive semiconductor company. Wuhan Hongxin Semiconductor Manufacturing Co, for example, which received almost $20 billion, including from the fund, went bankrupt before it could launch any production.

Restrictions on chip imports to China also affect the development of related technologies and related industries. In the first half of 2023, China’s semiconductor imports fell by 22%, while imports of chip-making equipment fell by 23%. Inspur, a leading Chinese manufacturer of server hardware used for AI development, has already warned investors about difficulties with chip supply. The company forecasts a 30% drop in revenue as a result of U.S. semiconductor restrictions. Leading U.S. chip makers have responded to the export restrictions by developing chips specifically for China that are not subject to the export ban. NVIDIA, for example, released the A800 and H800 GPUs for China instead of the banned A100 and H100. Chinese companies have purchased $4 billion worth of these processors to be delivered in 2024. However, the development of new AI products, including generative AI, requires more processing power. According to various estimates, a complex model with as many parameters as ChatGPT requires about 30,000 of the most powerful A100 GPUs. No Chinese company currently boasts such computing power. While American tech giants such as Microsoft, Google and Amazon are freely investing billions in artificial intelligence platforms, Chinese companies are bound by both technological and investment constraints.

Nevertheless, containing China does not guarantee the successful evolution of the U.S. semiconductor industry. First, $52 billion in subsidies for all companies in the semiconductor sector is a very modest amount: the construction of just the first phase of the TSMC plant in Arizona is estimated at $12 billion, while the entire project is expected to exceed $40 billion. The economic feasibility of building semiconductor plants in the U.S. is also questionable. The Arizona plant, according to the project plan, will be able to produce up to 600 thousand wafers per year by 2026; TSMC put out more than 14 million wafers last year. And by the time the Arizona plant is expected to set up the 3nm process in 2026, such chips will have already been produced in Taiwan for two years. It is not known whether massive government subsidies will ensure U.S. technological leadership and independence from Asian partners. In addition, China, as a key supplier of raw materials for the semiconductor industry, also has serious leverage: it has introduced export licenses for gallium and germanium. With China accounting for about 80% of the world’s gallium exports and 60% of its germanium exports, restrictions on the export of these metals could lead to a significant increase in chip production costs and subsequently reduce the growth potential of the entire industry.

Conclusion

The semiconductor industry is one of the most dispersed global industries: no single country currently possesses the full range of manufacturing chains required to produce finished semiconductor products. China, as the largest market for semiconductors and a key source of the raw materials essential for their production, plays an important role in global supply chains. The U.S.-China standoff, mounting export restrictions and incentives for the artificial relocation of production facilities will inevitably transform global production chains. Both the pace of development of Chinese capabilities in this area and the economic well-being of U.S. partners depend on the intensity of new export restrictions introduced by the United States. Given that, by various estimates, semiconductor companies around the world are losing from 15% to 40% of their revenue to the existing export restrictions, an increase in U.S. sanctions pressure may degrade innovation potential, including among the world’s industry leaders, through a sharp decline in their income. On the other hand, dependence on the Chinese market creates strong incentives for companies to seek ways to circumvent the existing sanctions, and such fragmentation may limit the effectiveness of U.S. technology policy toward the PRC. In the long run, China will increase investment in basic research and development to ensure technological independence. The U.S. faces the challenge of balancing its technology policy so as to preserve its existing lead over China in semiconductors for generations to come while not destroying key drivers of technological progress for itself and its allies. As Chinese technological capabilities evolve further, keeping that balance will become increasingly difficult.

From our partner RIAC

Artificial Intelligence and Advances in Physics in the Field of Gravitational Waves (I)


As an important branch of the natural sciences, physics studies matter, energy, forces and motion, and the fundamental laws and phenomena that govern them, thus providing an essential theoretical basis for human beings to understand and explore the natural world. To be precise, physics models nature mathematically.

With the advancement of science and technology and the rapid development of Artificial Intelligence, physics is facing new challenges and opportunities. The application of AI is changing the research methods and development trajectory of physics, offering new possibilities for progress and innovation.

Artificial Intelligence can help physicists build more accurate and complex models and analyse and interpret experimental and observational data. The key family of techniques here is machine learning, of which deep learning is a subset.

The difference is that deep learning goes a step further: a deep learning algorithm is not limited by the user’s prior experience. To give an example, in classical (non-deep) machine learning, to distinguish cats from dogs you must tell the algorithm which features to use (“look at the ears, the fur, etc.”), whereas in deep learning the distinguishing features are extracted by the model itself, and they are often patterns that we humans would never have identified.

Supervised learning works as follows: the algorithm is given a set of training data together with the expected results (the labels). It then iteratively adjusts its internal parameters, testing its predictions against the labels until it reaches an acceptable accuracy (the human hand lies, of course, in the construction of the algorithm itself). Once it has “adjusted”, it can be applied to unknown pictures of cats and dogs, ones not used for learning, and classify them without a human having to do so. In this way, Artificial Intelligence can discover hidden patterns and correlations in large amounts of data, helping physicists to understand and predict related phenomena.
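To make the workflow just described concrete, here is a minimal sketch in Python using the scikit-learn library. The “cat vs. dog” data is purely synthetic (random feature vectors standing in for images), so the numbers are illustrative, not a real experiment:

```python
# Minimal sketch of the supervised-learning loop described above.
# The "images" are synthetic 64-dimensional feature vectors: this
# illustrates the train / evaluate / predict cycle, nothing more.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each picture has been reduced to 64 numeric features.
X_cats = rng.normal(loc=0.0, scale=1.0, size=(500, 64))
X_dogs = rng.normal(loc=1.0, scale=1.0, size=(500, 64))
X = np.vstack([X_cats, X_dogs])
y = np.array([0] * 500 + [1] * 500)   # 0 = cat, 1 = dog

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The network iteratively "adjusts" its weights against the labels...
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# ...and is then used on examples it has never seen.
print("held-out accuracy:", model.score(X_test, y_test))
```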

Artificial Intelligence can be applied to theoretical and computational physics to improve the efficiency and accuracy of computational models and methods. For example, AI can help physicists develop numerical simulation methods: machine learning is used not only for classification but also for numerical prediction (regression), a capability already widely exploited in fields such as finance, and it can markedly speed up experiments and calculations.
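A correspondingly small sketch of numerical prediction, again with scikit-learn on synthetic data (the noisy sine curve stands in for any measured relationship; none of this comes from the original article):

```python
# Tiny sketch of ML used for numerical prediction (regression):
# fit a noisy curve and predict its value at unseen points.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, size=(1000, 1))             # input variable
y = np.sin(x).ravel() + rng.normal(scale=0.1, size=1000)  # noisy "measurement"

reg = RandomForestRegressor(n_estimators=100, random_state=1).fit(x, y)
print(reg.predict([[np.pi / 2]]))   # should be close to sin(pi/2) = 1.0
```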

Artificial Intelligence also has broad applications in quantum physics and quantum computing. Quantum physics studies the behaviour of microscopic particles and the laws of quantum mechanics, while quantum computing is an emerging field that exploits the characteristics of quantum mechanics for information processing and computation. Artificial Intelligence can help physicists design more complex quantum systems and algorithms and so promote the development and application of quantum computing.

The application of AI in high-energy physics and particle physics experiments is also very important. These fields study the structure and interactions of microscopic particles, which in turn bear on the origin and evolution of the universe. Artificial Intelligence can help physicists analyse and process large amounts of experimental data and discover potential new particles and physical phenomena.

AI technology can improve the efficiency of physics research and accelerate the scientific research process. Physics research often requires large amounts of experimental data and complex computational models, and Artificial Intelligence can streamline the work of physicists by discovering hidden patterns and correlations in this data. Artificial Intelligence can also provide more accurate and detailed physics models, helping physicists solve even more complex scientific problems.

Traditional physics research often relies on existing theories and experiments, while Artificial Intelligence can help physicists discover new phenomena and physics laws. By bringing to light patterns and correlations from large amounts of data, Artificial Intelligence stimulates physicists to propose new hypotheses and theories, thus promoting development and innovation.

AI applications can also explore unknown fields and phenomena: by analysing and extracting information from large amounts of data, Artificial Intelligence expands the scope and depth of physics research.

The development of Artificial Intelligence offers new opportunities for the integration of physics with other disciplines. For example, the combination of Artificial Intelligence and biological sciences can help physicists study complex biological systems and related phenomena. The combination of Artificial Intelligence and chemistry can help physicists study molecular structure and chemical reactions.

Although AI technology has broad application prospects in physics research, it also faces challenges: the acquisition and processing of data, the main problem, especially for new questions where datasets are scarce; the creation and verification of physical models; and the selection and optimisation of algorithms. In this regard, it must be said that the boom in deep learning has mainly been due to the increase in available data thanks to the Internet and to advances in hardware. The networks anyone uses today can run, albeit slowly, on a laptop, something unthinkable in the 1990s, when deep learning existed only in vague outline. It is not for nothing that we speak of the “democratisation of deep learning”.

Future development requires cooperation and exchanges between physicists and AI professionals to jointly resolve these challenges and better apply this new technology to physics research and applications.

As an emerging technology, Artificial Intelligence is revolutionising traditional physics. By applying it, physicists can build more accurate and complex models and analyse and explain physics experiments and observational data. Artificial Intelligence thereby accelerates the research process in physics and promotes the development and innovation of so-called traditional physics.

Artificial Intelligence, however, still has to face some challenges and problems in physics research, which require further study and exploration. In the future, AI technology will be further utilised in physics research and applications, thus providing more opportunities and challenges for development and innovation.

AI technology is also used in gravitational-wave research, for which the 2017 Nobel Prize in Physics was awarded to Rainer Weiss (German-born, USA), Barry C. Barish (USA) and Kip S. Thorne (USA).

On 14 September 2015, these scientists detected for the first time the gravitational-wave signal of two merging black holes. The discovery triggered a revolution in the astrophysics community, and from that moment the research group behind it was considered a candidate for the Nobel Prize in Physics.

The two black holes were located about 1.3 billion light years from Earth. Before the merger, their masses were equivalent to about 36 and 29 suns, respectively; after the merger, the remnant’s mass was equivalent to about 62 suns. Roughly three solar masses were converted into energy and released in the form of gravitational waves.
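For scale, a back-of-the-envelope check of that last statement (standard figures, not from the original article): by mass-energy equivalence,

$$
E = \Delta m \, c^{2} \approx 3 \times \left(1.99 \times 10^{30}\,\mathrm{kg}\right) \times \left(3.0 \times 10^{8}\,\mathrm{m/s}\right)^{2} \approx 5.4 \times 10^{47}\,\mathrm{J},
$$

released in a fraction of a second: at its peak, the merger briefly outshone, in gravitational waves, the combined light output of all the stars in the observable universe.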

For some time, gravitational waves have attracted the attention and curiosity not only of scientists but also of ordinary citizens. Although gravity is the weakest of the fundamental interactions (a child lifting a toy overcomes the gravitational pull of the entire Earth, which amply demonstrates this), gravitational interaction has always raised questions. But what are gravitational waves?

To put it simply and briefly, the concept of gravitational waves comes from Einstein’s theory of general relativity. Relativity is concerned with the dialectical relationship between space-time and matter: matter curves space-time, and accelerating masses send ripples through it. The curvature propagates outward from the source in the form of a wave, which carries energy away as gravitational radiation and travels at the speed of light. An extreme case is a black hole: its enormous mass distorts space-time so strongly that light cannot escape and slips into it.
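In compact form, this is the standard linearized-gravity result (a textbook formula added here for clarity, not taken from the original): writing the metric as flat space-time plus a small perturbation, Einstein’s field equations in the Lorenz gauge reduce to a wave equation,

$$
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \quad |h_{\mu\nu}| \ll 1, \qquad \Box \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^{4}} T_{\mu\nu},
$$

where $\bar{h}_{\mu\nu}$ is the trace-reversed perturbation and $T_{\mu\nu}$ the stress-energy tensor. In vacuum the right-hand side vanishes and the solutions are ripples travelling at the speed of light; the tiny coupling $16\pi G/c^{4}$ is why the waves are so extraordinarily weak.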

Our basic, traditional understanding of gravity rests on Newton’s theory of universal gravitation, which assumes that all objects attract one another with a force proportional to the product of their masses. Einstein regarded this description as superficial: what appears to be the force of gravity is in fact the effect of the distortion of space and time. Hence, if Newton’s law of universal gravitation is only an approximation, is our current knowledge based on traditional physics going astray? The question is an awkward one; let us leave it to scientists to study further where each theory holds.

Having said that, however, cosmic scientific research currently uses ever more AI techniques, as in the aforementioned detection and study of gravitational waves.

The biggest challenge in capturing gravitational waves is that the sampling rate of LIGO (Laser Interferometer Gravitational-Wave Observatory) data is extremely high: the strain signal is sampled 16,384 times per second, and tens of thousands of auxiliary channels are recorded alongside it, so the amount of data is enormous. It is easy to see, then, why machine learning and other state-of-the-art data processing methods can improve research efficiency. (1. continued)
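As an illustration of the idea (everything here is synthetic and simplified: real LIGO pipelines are far more elaborate), the following Python sketch buries a weak, rising “chirp”, the characteristic waveform of a merger, in noise sampled at LIGO’s rate, then recovers its arrival time by cross-correlating with a template, the core idea behind matched filtering:

```python
# Illustrative sketch: recover a weak synthetic "chirp" buried in noise
# by cross-correlating the data with a template (the essence of the
# matched filtering used on gravitational-wave data). All signals here
# are synthetic; only the sampling rate matches LIGO's.
import numpy as np
from scipy.signal import chirp, correlate

fs = 16_384                     # LIGO strain data is sampled at 16,384 Hz
t = np.arange(0, 4, 1 / fs)     # 4 seconds of data = 65,536 samples

template = chirp(t[:fs], f0=35, f1=250, t1=1.0)  # 1-second rising chirp
rng = np.random.default_rng(42)
data = rng.normal(scale=5.0, size=t.size)        # detector-like noise
start = 2 * fs
data[start:start + template.size] += template    # bury the weak signal at t = 2 s

# The cross-correlation peaks where the template best matches the data.
snr = correlate(data, template, mode="valid")
print("injected at t = 2.0 s, recovered at t =",
      np.argmax(np.abs(snr)) / fs, "s")
```

Even though the chirp is invisible to the eye against noise five times its amplitude, integrating over the full template length makes the correlation peak stand out clearly; this is why template-based and machine-learning methods can sift signals out of LIGO’s torrent of data.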
