Artificial intelligence (AI), of which machine learning is a subset, has the potential to drastically impact a nation’s national security in various ways. Coined the next space race, the race for AI dominance is both intense and necessary for nations that wish to remain preeminent in an evolving global environment. As technology develops, so do the volume of digital information and the ability of those who exploit this data to operate at optimal levels. Furthermore, the proper use and implementation of AI can help a nation achieve information, economic, and military superiority – all ingredients for maintaining a prominent place on the global stage. According to Paul Scharre, “AI today is a very powerful technology. Many people compare it to a new industrial revolution in its capacity to change things. It is poised to change not only the way we think about productivity but also elements of national power.” AI is not only the future of economic and commercial power; it also has numerous military applications bearing on the national security of every aspiring global power.
While the U.S. is the birthplace of AI, other states have taken research and development seriously, given the potential global gains. Three of the world’s biggest players – the U.S., Russia, and China – are entrenched in a non-kinetic battle to outpace one another in AI development and implementation. Because of the considerable advantages artificial intelligence can provide, it is now a race among these players to master AI and integrate the capability into military applications in order to assert power and influence globally. As AI becomes more ubiquitous, it is no longer a next-generation conceit of science fiction; its potential to provide strategic advantage is clear. Thus, to capitalize on that advantage, the U.S. is seeking to develop a deliberate strategy to position itself permanently at the top tier of AI implementation.
The current reality is that near-peer competitors are leading, or closing the gap with, the U.S. in AI. Of note, Allen and Husain argue the problem is exacerbated by the absence of AI from the national agenda, diminishing funding for science and technology, and the public availability of AI research. The U.S. has enjoyed a technological edge that, at times, enabled military superiority over near-peers. However, there is an argument that the U.S. is losing its grasp on that advantage. As Flournoy and Lyons indicate, China and Russia are investing massively in research and development efforts to produce technologies and capabilities “specifically designed to blunt U.S. strengths and exploit U.S. vulnerabilities.”
The technological capabilities once unique to the U.S. have now proliferated across both nation-states and non-state actors. As Allen and Chan indicate, “initially, technological progress will deliver the greatest advantages to large, well-funded, and technologically sophisticated militaries. As prices fall, states with budget-constrained and less technologically-advanced militaries will adopt the technology, as will non-state actors.” As an example, the American use of unmanned aerial vehicles in Iraq and Afghanistan provided a technological advantage in the battle space. But as prices for this technology drop, non-state actors like the Islamic State are making noteworthy use of remotely-controlled aerial drones in military operations. While the foregoing is part of the issue, more concerning is the fact that the Department of Defense (DoD) and the U.S. defense industry are no longer the epicenter of next-generation development. Rather, the most innovative development is occurring within private commercial companies. Unlike China and Russia, the U.S. government cannot completely direct the activities of industry for purely governmental or military purposes. This has certainly been a major factor in the closing of the gap in the AI race.
Furthermore, the U.S. is falling behind China in the quantity of studies produced on AI, deep learning, and big data. For example, among AI-related papers submitted to the International Joint Conferences on Artificial Intelligence (IJCAI) in 2017, China accounted for a leading 37 percent, whereas the U.S. took third position at only 18 percent. While quantity is not everything (U.S. researchers won the most awards at IJCAI 2017, for example), China’s industry innovations were formally described as “astonishing.” For these reasons, there are various strategic challenges the U.S. must overcome to maintain its lead in the AI race.
Each of the three nations has taken a divergent perspective on how to approach and define this problem. One common theme among them, however, is the understanding of AI’s importance as an instrument of international competitiveness as well as a matter of national security. Sadler writes, “failure to adapt and lead in this new reality risks the U.S. ability to effectively respond and control the future battlefield.” However, the U.S. can no longer “spend its way ahead of these challenges.” The U.S. has developed what is termed the third offset, which Louth and Taylor define as a policy shift – a radical strategy to reform the way the U.S. delivers defense capabilities to meet the perceived challenges of a fundamentally changed threat environment. The continuous development and improvement of AI requires a comprehensive plan and partnership with industry and academia. To frame this issue, two DoD-directed studies, the Defense Science Board Summer Study on Autonomy and the Long-Range Research and Development Planning Program, highlighted five critical areas for improvement: (1) autonomous deep-learning systems, (2) human-machine collaboration, (3) assisted human operations, (4) advanced human-machine combat teaming, and (5) network-enabled semi-autonomous weapons.
Similar to the U.S., Russian leadership has stated the importance of AI on the modern battlefield. Russian President Vladimir Putin commented, “Whoever becomes the leader in this sphere (AI) will become the ruler of the world.” Not mere rhetoric: Russia’s Chief of the General Staff, General Valery Gerasimov, has also predicted “a future battlefield populated with learning machines.” As a result of the Russian-Georgian war, Russia developed a comprehensive military modernization plan, and a staple of that 2008 plan was the development of autonomous military technology and weapon systems. According to Renz, “The achievements of the 2008 modernization program have been well-documented and were demonstrated during the conflicts in Ukraine and Syria.”
China, understanding the global impact of this issue, has dedicated research, money, and education to a comprehensive state-sponsored plan. In July 2017, China’s State Council published a document entitled “New Generation Artificial Intelligence Development Plan.” It takes a top-down approach to explicitly map out the nation’s development of AI, with goals reaching all the way to 2030. Chinese leadership underscores this priority in the plan’s statement of the necessity of AI development:
AI has become a new focus of international competition. AI is a strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security; intensifying the introduction of plans and strategies for this core technology, top talent, standards and regulations, etc.; and trying to seize the initiative in the new round of international science and technology competition. (China’s State Council 2017).
The plan addresses everything from building basic AI theory to partnerships with industry to fostering educational programs and building an AI-savvy society.
Recommendations to foster the U.S.’s AI advancement include focusing efforts on further proliferating Science, Technology, Engineering and Math (STEM) programs to develop the next generation of developers. This parallels China’s AI development plan, which calls to “accelerate the training and gathering of high-end AI talent.” That lofty goal entails sub-steps, one of which is to construct an AI academic discipline. While there are STEM programs in the U.S., according to the U.S. Department of Education, “The United States is falling behind internationally, ranking 29th in math and 22nd in science among industrialized nations.” To maintain the top position in AI, the U.S. must continue to develop and attract the top engineers and scientists. This requires both a deliberate plan for academic programs and the funding and incentives to develop and maintain these programs across U.S. institutions. Perhaps most importantly, the United States needs a strategy to entice more top American students to invest their time and attention in this proposed new discipline. Chinese and Russian students easily outpace American students in this area, especially in terms of pure numbers.
Additionally, the U.S. must research and capitalize on the dual-use capabilities of AI. Leading companies such as Google and IBM have made enormous headway in the development of algorithms and machine learning, and the Department of Defense should leverage these commercial advances to determine relevant defense applications. However, this partnership with industry must also consider the inherent national security risks that AI development can present, which introduces a regulatory role for commercial AI development. The role of the U.S. government with respect to the AI industry, then, cannot be merely that of a consumer; it must also act as a regulator. The risk, of course, is that this effort to honor principles of ethical and transparent development will not be mirrored by the competitor nations of Russia and China.
China’s large population and lax data protection laws pose a challenge the U.S. must find innovative ways to overcome in machine learning and artificial intelligence. That population creates a larger pool of prospective engineers and generates a massive volume of data gleaned from internet users. Part of the solution is investment. A White House report on AI indicated, “the entire U.S. government spent roughly $1.1 billion on unclassified AI research and development in 2015, while annual U.S. government spending on mathematics and computer science R&D is $3 billion.” If the U.S. government considers AI an instrument of national security, then it requires financial backing comparable to that of other fifth-generation weapon systems. Furthermore, innovative programs such as the DoD’s Project Maven must become a mainstay.
Project Maven, a pilot program implemented in April 2017, was mandated to produce algorithms to manage big data and apply machine learning to eliminate the manual human burden of watching full-motion video feeds. The project was expected to deliver algorithms to the battlefield by December 2018 and required partnership with four unnamed startup companies. The U.S. must implement more programs like this that incentivize partnership with industry to develop or re-design current technology for military applications. To maintain its technological advantage far into the future, the U.S. must facilitate expansive STEM programs, capitalize on the dual-use nature of some AI technologies, provide fiscal support for AI research and development, and implement expansive, innovative partnership programs between industry and the defense sector. Unfortunately, at the moment, all of these efforts are only partially engaged and funded. Meanwhile, countries like Russia and China seem to be more successful in developing their own versions, unencumbered by ‘obstacles’ like democracy, the rule of law, and unfettered free-market competition. The AI race is upon us. And the future seems to be a wild one indeed.
Allen, Greg, and Taniel Chan. “Artificial Intelligence and National Security.” Publication. Belfer Center for Science and International Affairs, Harvard University. July 2017. Accessed April 9, 2018. https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf
Allen, John R., and Amir Husain. “The Next Space Race is Artificial Intelligence.” Foreign Policy. November 03, 2017. Accessed April 09, 2018. http://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/.
China. State Council. Notice on the Issuance of the New Generation Artificial Intelligence Development Plan. July 20, 2017. Translated by Rogier Creemers, Graham Webster, Paul Triolo, and Elsa Kania.
Doubleday, Justin. 2017. “‘Project Maven’ Sending First FMV Algorithms to Warfighters in December.” Inside the Pentagon’s Inside the Army 29 (44). Accessed April 1, 2018. https://search-proquest-com.ezproxy2.apus.edu/docview/1960494552?accountid=8289.
Flournoy, Michèle A., and Robert P. Lyons. “Sustaining and Enhancing the US Military’s Technology Edge.” Strategic Studies Quarterly 10, no. 2 (2016): 3-13. Accessed April 12, 2018. http://www.jstor.org/stable/26271502.
Gams, Matjaz. 2017. “Editor-in-Chief’s Introduction to the Special Issue on ‘Superintelligence’, AI and an Overview of IJCAI 2017.” Informatica 41 (4): 383-386. Accessed April 14, 2018.
Louth, John, and Trevor Taylor. 2016. “The US Third Offset Strategy.” RUSI Journal 161 (3): 66-71. DOI: 10.1080/03071847.2016.1193360.
Sadler, Brent D. 2016. “Fast Followers, Learning Machines, and the Third Offset Strategy.” JFQ: Joint Force Quarterly no. 83: 13-18. Accessed April 13, 2018. Academic Search Premier, EBSCOhost.
Scharre, Paul, and SSQ. “Highlighting Artificial Intelligence: An Interview with Paul Scharre, Director, Technology and National Security Program, Center for a New American Security, Conducted 26 September 2017.” Strategic Studies Quarterly 11, no. 4 (2017): 15-22. Accessed April 10, 2018. http://www.jstor.org/stable/26271632.
“Science, Technology, Engineering and Math: Education for Global Leadership.” Science, Technology, Engineering and Math: Education for Global Leadership. U.S. Department of Education. Accessed April 15, 2018. https://www.ed.gov/stem.
Towards A Better World: Our Senses and How Artificial Intelligence is Replicating Them
Our five senses help us perceive the world around us. The sense of touch, for example, can bring loved ones closer but, on a darker note, can also frustrate amputees. What bothers them particularly about their prosthetic arms is the missing feedback. Is what they are touching hot or cold, liquid or solid, a rose or its thorn? It is an aspect so universal for the able-bodied that it is not given a second thought.
The problem has not escaped Artificial Intelligence (AI) researchers, however, who are trying to replicate these senses (Engineering and Technology, August 2023). They have been busy developing artificial hands with softer fingers and embedded sensors. How long will it be before the problem is solved?
Well, a US company, Atom Limb, expects to release a mind-controlled prosthetic limb in 2024. In it, movement sensors in the hand section of the prosthesis send electronic signals to the wearer’s stump, where the neurons once connected to the amputated hand are still in place and capable of transmitting to the brain.
Notice how we know at once when there is something crawling on our skin. In May 2022, researchers at Stanford University’s Bao Research Group announced the invention of artificial skin that is durable, paper thin and stretchable. It has the future potential of being wired into the wearer’s nervous system to give a real touch capability – namely, sensing temperature, pressure, vibration and location. Thus, when a finger moves from a cup’s handle to the cup itself, the wearer senses the change in temperature and position.
Another sense – hearing, or rather the lack thereof – is not infrequently a source of humor, possibly because sufferers are able to compensate through other means. Beethoven suffered from Paget’s disease, which caused skull bone enlargement that pressed on the eighth cranial nerve, the nerve associated with auditory function. The loss was gradual, from age 28 to 44, by which time he was quite deaf. While he could still hear a little, he would strap an ear trumpet to his head so he could conduct the orchestra with his hands. He also carried a notebook and pencil, to jot down musical brainstorms but also to converse with friends.
Hence the somewhat morbid joke of someone seeing Beethoven sitting on his grave furiously erasing some sheet of music. “Maestro! Maestro! What are you doing,” the person asks, to which he gets the reply, “I am decomposing.”
Hearing loss when it is congenital is no joke, however. It can inhibit language learning and speech. Thus the words ‘deaf and dumb’ are often placed together with ‘dumb’ of late being replaced by the kinder ‘mute’.
Here again technology comes to the rescue. Cochlear implants have been around for quite a while. The first implant procedure, in 1957, is credited to Stanford University. It used a single-channel electrode, which was found to be of limited utility for detecting speech, and it took a further 20 years to get to the modern multi-channel type.
Hearing aids now are small enough to be barely visible. They work for most people and only those with profound hearing loss consider the implant option.
Our sense of sight helps us navigate the world around us and enjoy its beauty. For some it may be taken away gradually through age-related macular degeneration (AMD), a form of retinal deterioration that affects the sight of some 200 million people worldwide. As the photoreceptors in the central retina degenerate, the ability to read or even recognize people is impaired.
The good news is that a prosthetic replacement is now being developed to replace the lost photoreceptors with photovoltaic pixels. These convert light into electricity, which stimulates the neurons in the retina. While the present version leaves the recipient somewhat shortsighted, a newer one currently being tested in rats is intended to restore 20/20 vision.
For the future, there is Science Eye, a device employing optogenetics. It uses gene therapy to restore optic nerve cells while an ultra-dense micro-LED display panel is inserted directly over the retina.
There are others in the field, including Cortigent, which is making headway with a system that does not require genetically modifying retinal cells because it stimulates the cortex (the brain’s outer layer) directly. Cortigent is in the process of designing a study to get its stimulator implant approved, having already spent five years studying the safety and reliability of its devices.
Then there are our senses of smell and taste, to some extent linked. There is a good reason food seems bland and tasteless when a person has a bad cold — the sense of smell is absent. Thus when chefs talk about flavor, they imply both taste and smell.
Taste receptors in the mouth sense sweet, sour, salty, bitter and savory – the last also known as umami. But try sucking a lemon-flavored candy while pinching your nose: you will taste the sweetness, but not the lemon flavor. The tongue is, of course, also sensitive to cold and heat.
A promising approach to treating loss of smell is to train the olfactory nerve by inhaling a set of odors (originally rose, lemon, clove and eucalyptus) twice daily for three months; this was found to help the nerve regenerate.
Taste has been with humans forever. Long before scientists and their experiments, humans knew to avoid plants that tasted bitter — it signified something harmful. Yet there are people unfortunate enough to be without this sense.
Having all the senses is so commonplace that we rarely ponder their absence. So let the next gustatory or olfactory experience, the music we hear, or a walk in a park where we can also smell the flowers be all the more meaningful for our valuing of the senses. If we harness them, adding that subconscious sense of perception to deepen our understanding of the world as it is, we need only imagination to observe the world as it could be – and to take the first step on the journey to a better one, a world at peace.
Development of Metaverse in China: Strategies, Potential, and Challenges Part I
In this era of rapid digitalization, the metaverse has become a hot topic among developed countries. While many nations focus on the entertainment aspect of the metaverse, China appears to have a different perspective. Adopting a more industry-oriented approach, can the Bamboo Curtain country lead the next metaverse revolution?
Why Does China Opt for an Industrial Approach?
There are several reasons why an industrial approach might be China’s key to success in the metaverse:
The metaverse, with its virtual simulation technology, has immense potential to revolutionize industries such as manufacturing, urban planning, and healthcare. In manufacturing, it can support product design optimization, in-depth employee training, and process optimization through virtual prototypes and real-time simulations; designers and engineers can collaborate instantly, reducing the time and costs associated with physical prototypes.
Regarding urban planning, the metaverse can be applied to visualize city layouts and infrastructure in a 3D virtual environment, allowing urban planners to make better-informed decisions about urban development. It also enables public participation in development projects, offering citizens the chance to explore and provide input on proposed designs, and it promotes sustainable development through environmental impact analysis of urban designs.
In the healthcare sector, the metaverse can be used for medical training, patient rehabilitation, and remote consultations. Medical students can practice medical procedures in a risk-free virtual environment, while patients can undergo intensive virtual medical consultations and rehabilitation therapy. This technology can enhance the skills and confidence of prospective doctors and expedite patient recovery processes.
Overall, the metaverse offers innovative and interactive solutions that can address the specific needs of various industrial sectors, allowing enhanced learning, better design, optimized processes, and innovative solutions, which will ultimately contribute to the progress of these industries.
Metaverse technology, with its capabilities in virtual and augmented reality, opens new doors in industrial efficiency and innovation. It enables industries to develop prototypes in virtual environments, accelerating product development cycles and reducing costs associated with physical models. For instance, in the automotive industry, designers and engineers can collaborate in a 3D environment to test and modify car designs in real-time, allowing a faster response to market needs.
Additionally, the metaverse plays a crucial role in employee training and development. In the manufacturing sector, virtual simulations can be utilized for operational machine and production process training, reducing risks and enhancing employee skills while saving training time and costs. This also contributes to increased safety and reduction of incidents in the workplace.
The metaverse also supports industrial process optimization through real-time simulations and data analysis. Companies can visualize and optimize workflows, plant layouts, and production schedules to enhance productivity and reduce operational costs. This enables quicker and more accurate identification and resolution of inefficiencies and obstacles.
In conclusion, the implementation of the metaverse in industries promises a revolution in product design, employee training, and daily operations. The ability to integrate and optimize various operational aspects in a virtual environment offers opportunities for sustainable innovation and heightened competitiveness in the modern industrial era.
History shows that the Chinese government often provides full support to strategic sectors, and with financial support and progressive regulations the metaverse industry in China has the potential to grow rapidly. China is actively exploring the metaverse: Beijing plans to create a ‘Digital Identity System’ for the metaverse and Web 3.0, following Shanghai’s footsteps. Even though it is known for being cautious in adopting advanced technologies, China sees significant potential in the metaverse and is working to establish a regulatory framework that develops virtual reality and the metaverse as part of the digital economy, as detailed in the Virtual Reality Development Action Plan released in November 2022. The proposed digital identity system aims to control user anonymity and identify individual characteristics in the metaverse, allowing regulated and controlled use of the technology. Discussions of the regulatory proposal are underway at the International Telecommunication Union (ITU), with the involvement of technology experts and Chinese telecom operators such as China Mobile. This demonstrates China’s commitment to developing the metaverse in a safe and orderly manner, aligning with the high interest of its citizens: 78% of Chinese citizens have expressed interest in the metaverse.
Differentiation from Competitors
In the midst of tight global competition in the metaverse, China’s focus on industrial applications could be a key differentiator. While many other countries focus more on entertainment, China can lead in industrial applications of the metaverse. It has chosen this focus as part of its strategy to become a global leader in technology and innovation, in line with its ambitions to build a competitive advantage in high technology and strengthen its trading position on the international stage. With a broad and diverse industrial sector, from manufacturing to healthcare, integrating the metaverse allows for significant innovation and economic growth across many fields, enabling the development of specific, value-added solutions for industrial needs. By focusing on industrial applications, China can also address the challenges and risks associated with the metaverse, such as privacy and data security. Developing regulatory frameworks and technical standards for the industrial metaverse will help ensure the technology is used safely, responsibly, and in line with national priorities and sustainable development goals.
Domestic Technology Development
China has been making substantial investments in cutting-edge technologies such as AI, 5G, and semiconductors, integrating them with the metaverse to build a strong and competitive ecosystem. The adoption of 5G is key to the implementation of the metaverse; I once read a book about the metaverse written by experts in China which stated that mass adoption will occur when 60% of the population has adopted 5G. China therefore seems to be focusing on developing 5G connectivity, hoping that the industrial sector, with its higher purchasing power and more specific, limited scope, will be the technology’s first adopter. In a B2B context, this is a realistic step before introducing the technology to end consumers. According to the GSMA report The Mobile Economy China 2023, 5G penetration in China in 2023 is 45% of the population and is expected to reach 70% by 2027. From this data, one can hypothesize that metaverse utilization in China will make significant progress around 2027. In the coming years, then, we can watch how the integration of AI, 5G, semiconductors, and the metaverse forms a more synergistic and competitive technology ecosystem, with the industrial sector playing the key role in early adoption before the metaverse is truly accepted by general consumers in China.
With an approach focused on the industry, China has the opportunity to lead the metaverse revolution in the future. However, as with any innovation, there will be challenges to face. With government support, investment in R&D, and a clear vision, China is on the right track to leverage the full potential of the metaverse for industrial interests and national economic growth.
Artificial Intelligence and Advances in Chemistry (II)
As previously seen, chemical representation types have developed several sub-types over recent years. Unfortunately, however, there is no clear answer as to which representation is the most efficient for a particular problem. For example, matrix representations are often the first choice for attribute prediction but, in recent years, graphs have also emerged as strong alternatives. It is also important to note that we can combine several types of representations depending on the problem.
So how (and which) representations can be used to explore chemical space? We have already said that string representations are suitable for generative modelling. Initially, graph representations were not easy to handle with generative models, but more recently their combination with the Variational Autoencoder (VAE) has made them a very attractive option.
In machine learning, a variational autoencoder is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It belongs to the families of probabilistic graphical models and variational Bayesian methods (i.e. a family of methods for the approximation of integrals).
VAEs have proved particularly useful since they give us a more machine-readable, continuous representation. One study used VAEs to show that both string and graph representations can be encoded into, and decoded from, a space where molecules are no longer discrete but are represented as continuous, real-valued vectors. The Euclidean distance between different vectors then corresponds to chemical similarity. An additional model can be inserted between the encoder and the decoder to predict the property to be attained at any point in the space.
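To make the encode–sample–decode pipeline concrete, here is a minimal sketch in Python/NumPy. It is a toy, untrained forward pass with made-up dimensions (an 8-dimensional “molecule” fingerprint, a 2-dimensional latent space) rather than the published model: the encoder outputs a Gaussian mean and log-variance, the reparameterisation step samples a latent point, and the decoder maps any continuous latent point back to a molecule-vector reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a molecule is a fixed-length fingerprint vector (hypothetical),
# and the latent space is 2-D so distances are easy to inspect.
INPUT_DIM, HIDDEN_DIM, LATENT_DIM = 8, 16, 2

# Randomly initialised weights stand in for a trained network.
W_enc = rng.normal(0, 0.1, (INPUT_DIM, HIDDEN_DIM))
W_mu = rng.normal(0, 0.1, (HIDDEN_DIM, LATENT_DIM))
W_logvar = rng.normal(0, 0.1, (HIDDEN_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.1, (LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map a molecule vector to the parameters of a Gaussian in latent space."""
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar  # mean, log-variance

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the trick that keeps sampling trainable in a real VAE)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a continuous latent point back to a molecule-vector reconstruction."""
    return np.tanh(z @ W_dec)

# Two toy 'molecules' mapped into the continuous latent space.
x1 = rng.random(INPUT_DIM)
x2 = rng.random(INPUT_DIM)
mu1, lv1 = encode(x1)
mu2, lv2 = encode(x2)

# Euclidean distance between latent means: the article's proxy for chemical similarity.
similarity_distance = np.linalg.norm(mu1 - mu2)

# Any point in latent space decodes to some candidate structure, which is what
# makes continuous, gradient-based property optimisation possible.
candidate = decode(reparameterize(mu1, lv1))
```

The property-prediction model mentioned above would be one more network taking a latent point `z` as input, so that the search for better molecules can move through the continuous space rather than through discrete structures.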
But while generating molecules per se is a simple task – we can take any generative model and apply it to the representation we desire – generating structures that are chemically valid and display the properties we desire is a much more challenging issue.
The initial approaches to this goal involve training models on existing data sets and then applying transfer learning: the model is fine-tuned on a calibration data set so that it generates structures oriented towards specific properties, and it can then be refined further with various algorithms. Many examples of this involve string representations or graphs. However, difficulties arise when chemical validity or the desired properties are not obtained. Furthermore, relying on data sets limits the search space and introduces potentially undesirable biases.
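The pretrain-then-fine-tune loop can be illustrated with a deliberately tiny stand-in: a character-level bigram model over toy SMILES-like strings. The cited studies use far richer models (e.g. recurrent networks), and the data sets and the 5x calibration weight below are invented purely for illustration.

```python
import random
from collections import defaultdict

START, END = "^", "$"

def train_bigram(strings, counts=None, weight=1.0):
    """Accumulate (optionally weighted) character-bigram counts from a set of strings."""
    if counts is None:
        counts = defaultdict(lambda: defaultdict(float))
    for s in strings:
        seq = START + s + END
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += weight
    return counts

def sample(counts, max_len=20, seed=0):
    """Walk the bigram table to generate one new string."""
    rng = random.Random(seed)
    out, cur = [], START
    for _ in range(max_len):
        options = list(counts[cur])
        cur = rng.choices(options, weights=[counts[cur][o] for o in options])[0]
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

# Invented 'existing data set' of simple chain-like SMILES strings.
base_set = ["CCO", "CCC", "CCN", "CCCC", "CCCO"]
# Invented 'calibration set' biased toward oxygen-containing molecules.
calibration_set = ["CCO", "CCCO", "CO"]

counts = train_bigram(base_set)                      # pre-training pass
counts = train_bigram(calibration_set, counts, 5.0)  # fine-tuning pass, weighted 5x

generated = sample(counts, seed=42)
```

The fine-tuning pass skews the transition statistics toward the calibration set, which is the essence of the approach; it also shows why the generated structures inherit the biases of whatever data sets were used.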
An attempt at improvement is to use a Markov Decision Process (MDP) to ensure the validity of chemical structures and to optimise the MDP itself toward the desired properties through deep Q-learning (a model-free reinforcement learning algorithm that estimates the value of an action in a particular state). In mathematics, an MDP is a discrete-time stochastic control process (one whose values are given at a chosen set of times in the integer domain). It provides a mathematical framework for modelling decision-making in situations where outcomes are partly random and partly under the control of a decision-maker. MDPs are useful for studying optimisation problems solved by means of dynamic programming and are used in many disciplines, including robotics, automatic control, economics and manufacturing. The MDP is named after the Russian mathematician Andrej Andreevič Markov (1856-1922).
A particular advantage of this model is that it lets users visualise the policy's preferences over actions: (a) the degree of preference for each action (1 being most preferred, 0 least preferred); and (b) the steps taken to maximise a quantitative estimate of drug-likeness while remaining similar to the starting molecule.
Although still in its infancy, the use of Artificial Intelligence to explore the chemical space is already showing great promise. It gives us a new paradigm for exploring the chemical space and a new way to test theories and hypotheses. Although such in-silico methods are not as accurate as experimental research, computationally-based approaches will remain an active research area for the foreseeable future and are already part of many research groups' toolkits.
So far we have seen how Artificial Intelligence can help discover new chemicals more quickly by exploiting generative algorithms to search the chemical space. Although this is one of the most noteworthy use cases, there are also others. Artificial Intelligence is being applied to many other problems in chemistry, including:
1. Automated laboratory work. Machine learning techniques can be used to speed up synthesis workflows. One approach uses self-driving laboratories to automate routine tasks, optimise resource expenditure and save time. A relatively new but noteworthy example is the Ada robotic platform, which automates the synthesis, processing and characterisation of materials. Ada's tools provide predictions and models that automate repetitive processes, using machine learning and AI technologies to collect, understand and process data, so that human resources can be dedicated to higher-value activities.
Ada is, in essence, a laboratory that discovers and develops new organic thin-film materials without any human supervision, and its productivity would make many a recent graduate uncomfortable. The entire thin-film fabrication cycle, from the mixing of chemical precursors through deposition and thermal annealing to the final electrical and optical characterisation, takes only twenty minutes. A further aid is a mobile chemical robot that can operate laboratory instruments and, in one demonstration, performed 688 experiments over eight days.
2. Chemical reaction prediction. Classification models can be used to predict the type of reaction that will occur or, simplifying the problem, to predict whether a given chemical reaction will occur at all.
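The simplified version of this task, predicting whether a reaction occurs at all, reduces to binary classification. A sketch with a logistic-regression classifier trained from scratch, where the numeric descriptors and labels are synthetic stand-ins for real reaction features:

```python
import numpy as np

# Hedged sketch: logistic regression predicting whether a reaction "occurs"
# from two invented numeric descriptors (think temperature, reactivity index).
# Real pipelines use learned fingerprints and far richer featurisations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic "reaction occurs" label

w, b = np.zeros(2), 0.0
for _ in range(500):                         # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of reaction
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = (preds == y.astype(bool)).mean()
```

Predicting the reaction *type* is the same pattern with a multi-class output layer in place of the single sigmoid.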
3. Chemical data mining. Chemistry, like many other disciplines, has an extensive scientific literature that can be mined for trends and correlations. A notable example is the mining of the vast amounts of information produced by the Human Genome Project to identify trends in genomic data.
4. The gap between computation and experiment. Although the new data-driven trend is developing rapidly and has had a great impact, it also brings new challenges, chief among them the gap between computation and experiment. Computational methods aim to support experimental goals, but their results are not always transferable to the laboratory. For example, when using machine learning to find candidate molecules, we must bear in mind that molecules rarely have unique synthetic pathways, and it is often difficult to know whether an unexplored chemical reaction will work in practice. Even when it does, problems remain with the yield, purity and isolation of the compound under study.
5. The need for better data. The gap between computational and experimental work widens further because computational methods rely on metrics that are not always experimentally verifiable, such as the quantitative estimate of drug-likeness (QED), a computed score of how drug-like a molecule is whose direct experimental validation may not be feasible. There is also the need for better databases, and with it the problem of missing benchmarks. Since the chemical space is effectively infinite, the hope is for a sample large enough to support generalisation; nevertheless, most of today's databases were designed for other purposes and often use different file formats, some lack validation procedures for submissions or were not designed for AI tasks, and most have a limited scope, containing only certain types of molecules. Furthermore, most tasks that apply Artificial Intelligence to chemical prediction have no reference platform, making comparisons between different studies impracticable.
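In drug-discovery practice, QED stands for the quantitative estimate of drug-likeness: a geometric mean of per-property desirability functions, each mapping a molecular property into (0, 1]. A sketch with invented target values and widths, not the published QED parameterisation:

```python
import math

# Sketch of the idea behind QED (quantitative estimate of drug-likeness):
# each molecular property is mapped to a desirability in (0, 1] and the
# final score is the geometric mean of those desirabilities. The ideal
# values and widths below are invented for illustration only.
def desirability(value, ideal, width):
    """Gaussian-shaped desirability: 1.0 at the ideal value, falling off smoothly."""
    return math.exp(-((value - ideal) / width) ** 2)

def qed_like(props, targets):
    """Geometric mean of the per-property desirabilities."""
    ds = [desirability(props[k], *targets[k]) for k in targets]
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical (ideal, width) pairs for three properties.
targets = {"mol_weight": (300.0, 150.0), "logP": (2.5, 2.0), "h_donors": (1.0, 2.0)}

drug_like = qed_like({"mol_weight": 310.0, "logP": 2.0, "h_donors": 1.0}, targets)
greasy    = qed_like({"mol_weight": 700.0, "logP": 8.0, "h_donors": 0.0}, targets)
```

The score is purely computational, which is the point being made above: a molecule with an excellent QED may still be unmakeable or unstable at the bench.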
One of the main reasons for the success of AlphaFold (which, as already seen, is an AI programme developed by DeepMind, part of Alphabet/Google, to predict the 3D structure of proteins) lies in the fact that all of the above was provided by the Critical Assessment of protein Structure Prediction (CASP), the community benchmark for inferring a protein's 3D structure from its amino acid sequence, i.e. predicting its secondary and tertiary structure from its primary structure. This evaluation demonstrates the need for organised efforts to streamline, simplify and improve other tasks involving chemical prediction.
In conclusion, as we continue to advance in the digital age, new algorithms and more powerful hardware will keep lifting the veil on previously intractable problems. The integration of Artificial Intelligence into chemical discovery is still in its infancy, but the term "data-driven discovery" is already commonplace.
Many companies, whether pharmaceutical giants or newly founded start-ups, have adopted many of the above technologies, bringing greater automation, efficiency and reproducibility to chemistry. Artificial Intelligence enables us to conduct science on an unprecedented scale, and in recent years this has generated many initiatives and attracted funding that will continue to lead us into an era of autonomous scientific discovery.