
Science & Technology

5G: A Geostrategic Sector for Algorithmic Finance


Recent days have seen increasing tensions between the two biggest economic superpowers, the USA and China. The geopolitical crescendo grows ever more intense, and the two giants are trying to build strong alignments against one another in a competition that Bloomberg defines as a “Cold War 2.0” or a “Tech War”. The implementation of 5G technologies plays a fundamental role in this rush to the infrastructure, due in part to its linkages with the world of high-frequency trading: the sector of contemporary finance based on ever-faster algorithms and huge data centres, which require powerful software and the analysis of enormous volumes of information to predict stock fluctuations, and hence “to do God’s work”, as Lloyd Blankfein (the current Senior Chairman of Goldman Sachs) said in 2009 [1].

In the article “Digital Cold War” Marc Champion describes a strongly polarized vision of the global scenario, in which the conflict splits the world into two technological ecospheres: one half where people are carried around by driverless cars built by Baidu on Huawei 5G networks, chat and pay with WeChat, and shop on Alibaba over an internet connection strictly controlled and limited by the Great Firewall; and another half where people, with a less controlled internet connection, buy on Amazon and use the other dominant companies, e.g. Google, Tesla, Ericsson, and Facebook. This scenario grows ever more tangible: it is enough to consider that in July 2020 the People’s Republic of China completed BeiDou, its alternative to GPS (the instrument whose disruption, according to the PRC, caused the loss of two Chinese missiles launched during the Taiwan (Formosa) Strait crisis).

The prominence of 5G in these scenarios makes it worth looking more closely at what 5G technologies technically are. 5G (fifth generation) stands for the next major phase of mobile telecommunications standards beyond the 4G/IMT-Advanced standards. Since the first generation was introduced in 1982, cellular communication has grown remarkably, by about 40% per year; this growth led mobile service providers to research new technologies and improved services as wireless communication networks became ever more pervasive. Aiming to meet these growing needs, 5G will be the network for millions of devices, not just smartphones: it grants connectivity between sensors, vehicles, robots and drones; it provides data speeds of 1 to 10 Gbps; and it supports far more connected users per km², enabling the creation of smart cities.

It is now more evident that the implementation of fifth-generation technologies offers strategic levers of power and control to the companies, and the linked geopolitical actors, that manage the infrastructure and the network band. 5G technologies therefore play a crucial geopolitical role, being among other things fundamental for strategic sectors, such as high-frequency trading (discussed below), that sustain and orient the world’s economy. This rush to the infrastructure, hence to technological supremacy, has led to a crescendo of reprisals among the world’s most influential countries. Looking more closely at relations between the USA and China, recent years have been characterized by increasing tensions: in commercial relations, in military matters linked to the Indo-Pacific area and Xinjiang, and lastly over the approach to the COVID-19 threat. The USA and China, as Kishore Mahbubani says, no longer seem to be partners, even in business; but to fully understand the current situation in terms of 5G, the concrete measures and imposed bans must be examined. Chinese companies, in particular Huawei and ZTE, began focusing on acquiring a lead in 5G intellectual property well before their global competitors (with expenditure, indicated in their annual reports, of about $600 million between 2009 and 2013, and a planned $800 million in 2019), and are now leaders in the implementation of the technology; this has left the US increasingly concerned about its national security and global influence.
Therefore, in the geopolitical logic of 5G, the US continues to frame China as a country that “exploits data”; indeed Mike Pompeo said in a 2019 interview, “We can’t forget these systems were designed by- with the express (desire to) work alongside the Chinese PLA, their military in China”. China, on the other side, has responded with a campaign that blends propaganda, persuasion, and incentives with threats and economic coercion, offering massive investment plans in pursuit of the now well-known Belt and Road Initiative. The Trump administration effectively banned executive agencies from using or procuring Huawei and ZTE telecommunication equipment with the National Defense Authorization Act signed in 2018, a ban that Huawei challenged in court; the ban was re-proposed in May 2019 with an executive order, followed by the US Commerce Department placing Huawei and 68 affiliates on its Entity List, which conditions the sale or transfer of American technology to those entities on a special license; the latter restrictions, however, were imposed just 90 days after the failure of the 11th round of trade talks between China and the US. Canada is another country whose relations with Beijing have deteriorated, after the arrest of Meng Wanzhou, whose extradition the US has requested. Furthermore, in recent days, as reported by The Wall Street Journal, the UK announced that it will ban Huawei 5G technologies from 2027, following the US lead, and Beijing responded by considering a possible ban on Chinese components for Finland’s Nokia and Sweden’s Ericsson. Meanwhile the European Union keeps struggling to face the situation as a Union, and political reprisals between the two states continue, e.g.
the closure of the Chinese consulate in Houston and, in response, of the US consulate in Chengdu. In this geopolitical context, the US is trying to build a strong anti-Chinese alignment in the Indo-Pacific area, with the support of countries like the Philippines, Singapore, Taiwan, South Korea, Japan, Australia, and New Zealand; this also follows the strategic logic by which geopolitical actors caught between two competing states tend to side with the more distant one. Another actor that could tip the balance in this global scenario is India, which, according to an August 2018 government study, could add about $1 trillion to national income by 2035 through the implementation of 5G technologies, improving governance capacity and enabling healthcare delivery, energy grid management and urban planning. However, high levels of automation and dependence on a communication network, whether India follows the investment plan proposed by Huawei or leans strongly towards the US, could bring security threats and a loss of supremacy, hence of “voice”, in the global scenario.

Having analysed the geopolitical patterns of 5G implementation, it is time to analyse a strategic sector linked to fifth-generation technologies, the “engine” of the world’s economy: finance. A few milestones turned national markets into global ones, within what has been called the “rebellion of the machines”, the process that led the financial world to be based entirely on algorithms, hence on speed. The first was the telegraph, introduced in 1848, which together with the new Galena and Chicago Union Railroad promoted the birth of the Chicago Board of Trade. The telegraph brought anthropological changes: it was fundamental to the separation of price from goods, and it carried great changes into the world of finance, just as 5G will in our scenario. Among the events that led to the second phase of the “rebellion” is what happened in 2000, when, after merging with other European markets thanks to SuperCAC, the Paris stock exchange took the name Euronext. The second phase of the rebellion then took place in 2007, in an increasingly globalized scenario where technology was already part of finance and there were many digitalized platforms to trade on. Following the development of the digital world, the Chicago Mercantile Exchange created its own platform, Globex, which in 2007 merged with the CBOT’s platform, Aurora, based on a low-bandwidth network of about 19.2 kbit/s. The banks created black boxes so dark that they could no longer remain in control; a very different situation from the conditions established by the Buttonwood Agreement of 1792, the act at the basis of the birth of the world’s second stock market, after Philadelphia’s, which provided for the sale of securities between traders without intermediaries.
Subsequently, multiple steps favoured the rise of trading platforms and the development of adaptive algorithms based on the laws of physics, mathematics and biology, leading to what is called “phynance”. In the 2000s the most influential banking groups, Goldman Sachs, Crédit Suisse, BNP Paribas, Barclays, Deutsche Bank, Morgan Stanley and Citigroup, through strong deregulation and lobbying activities, steered the markets towards their deepest turning point: an era in which the headquarters of the stock exchanges are no longer physical, and the core bodies of the exchange markets sit in the suburbs, where large spaces and the technological infrastructure for network and data transmission allow the creation of huge data centres in which powerful software, cooling systems and adaptive algorithms give life to the daily oscillations of global finance. Consider algorithms like Iceberg, which splits a large volume of orders into small portions so that the entirety of the initial volume escapes the “nose of the hounds”; or Shark, which identifies orders shipped in small quantities to glimpse the big order hiding behind them; or Dagger, a Citibank algorithm launched in 2012 which, like Stealth, Deutsche Bank’s algorithm, looks for the most liquid values; or Sumo, of Knight Capital, a high-frequency trading company that alone trades about $20 billion a day; and there are many others, such as Sonar, Aqua, Ninja and Guerrilla.
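The order-splitting idea behind an algorithm like Iceberg can be sketched in a few lines. This is a minimal illustration of the principle described above, not any bank's actual system; the quantities are made-up examples.

```python
def iceberg_slices(total_qty, visible_qty):
    """Split a large parent order into small child orders so that
    only `visible_qty` shares are exposed to the market at a time."""
    slices = []
    remaining = total_qty
    while remaining > 0:
        qty = min(visible_qty, remaining)  # last slice may be smaller
        slices.append(qty)
        remaining -= qty
    return slices

# A 100,000-share parent order exposed 500 shares at a time:
child_orders = iceberg_slices(100_000, 500)
print(len(child_orders))   # 200 child orders
print(sum(child_orders))   # 100000: the full hidden volume
```

Detection algorithms like Shark invert this logic: they watch for repeated small fills of identical size at one price level, the telltale signature of such a slicing scheme.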

It is clear that supporting such an articulated financial apparatus requires connecting and analysing data with microsecond accuracy. Another example of 5G geostrategy in finance is therefore Coriolis 2, an oceanographic ship built in 2010 for Seaforth Geosurveys, a company offering maritime engineering solutions. Notably, among its clients is Hibernia Atlantic, an underwater communication network connecting North America to Europe, created in 2000 at a cost of $1 billion. Its New Jersey office runs transatlantic cables that it leases to telecommunications companies like Google and Facebook, obviously not to improve the circulation of stupid comments on social networks. The ship is preparing the construction of “dark fiber” cables, whose technical management and end-use belong to Hibernia, which need not share the band with anyone. The peculiar thing is that the customer ordering the cable, Hibernia, was created specifically for financial market operators and is part of the Global Financial Network (GFN), which manages 24,000 km of optical fiber connecting more than 120 markets. The new fiber, at a cost of $300 million, will save 6 milliseconds, a gain that a US-EU investment fund can use to earn $100 million more per year. Transmission networks are fundamental to high-frequency trading, and the motto has changed from “time is money” to “speed is money”.
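The 6-millisecond gain is simple physics: light in glass travels at roughly two-thirds of its vacuum speed, so latency is dominated by route length, and a shorter cable wins. A back-of-the-envelope check (the route lengths below are illustrative assumptions, not Hibernia's actual figures):

```python
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47        # typical refractive index of optical fiber

def one_way_latency_ms(route_km):
    """One-way propagation delay over a fiber route, in milliseconds."""
    speed_in_fiber = C_VACUUM_KM_S / FIBER_INDEX   # ~204,000 km/s in glass
    return route_km / speed_in_fiber * 1000

# Hypothetical comparison: an older ~6,800 km path vs a ~6,000 km shortcut.
old_route = one_way_latency_ms(6_800)
new_route = one_way_latency_ms(6_000)
print(round(old_route - new_route, 1))   # ~3.9 ms saved each way
```

Shaving 800 km thus saves several milliseconds per direction, which is why routes are fought over metre by metre: at these timescales, geography is the trading edge.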

Bibliography

[1] Laumonier, A. (2018). 6/5. Not, Nero Editions, Rome.
[2] Kewalramani, M., Kanisetti, A. (2019). 5G, Huawei & Geopolitics: An Indian Roadmap. Takshashila Institution, Discussion Document.

From our partner International Affairs


Science & Technology

Space Exploration: The Unification of Past, Present and Future


An accreting SMBH in a fairly local galaxy with very large and extended radio jets. © R. Timmerman; LOFAR & Hubble Space Telescope

The enchanting realm of space exploration continues to unfold new wonders with every passing day, sparking a growing interest among individuals to embark on their own cosmic journeys. While exploring space with the aid of private companies that charge fortunes is a privilege usually reserved for billionaire adventurers, there are occasional exceptions that captivate our attention.

Just a few days ago, on 8 September, Virgin Galactic’s third commercial spaceflight set out on a brief mission that seized the spotlight due to some interesting details. Onboard ‘VSS Unity’ were three private explorers, Ken Baxter, Timothy Nash, and Adrian Reynard, along with two pilots and one instructor. However, the presence of two different and unique passengers added a twist to the journey: fossils of our ancient human ancestors. The fossil remains of two ancient species, the two-million-year-old Australopithecus sediba and the 250,000-year-old Homo naledi, held in carbon fiber emblazoned with the South African flag, were part of Virgin Galactic’s spacecraft ‘crew’ for a one-hour ride, making them the oldest human relatives to visit space. Australopithecus sediba’s clavicle (collarbone) and Homo naledi’s thumb bone were chosen for the voyage. Both fossils were discovered in the Cradle of Humankind, home to human ancestral remains in South Africa.

The episode undoubtedly prompts questions regarding the underlying reason behind sending these fossil remains into the vast expanse of space in the first place. It profoundly underscores the immense power of symbols, speaking to us in ways words cannot. This voyage was not just a journey through space, but a soulful homage to our ancestors. Their invaluable contributions have sown the seeds of innovation and growth, propelling us to unimaginable heights. Now, as we stretch our hands towards the heavens, we remember them – and in this gesture, we symbolise our eternal gratitude and awe for the path they paved, allowing humanity to quite literally aim for the skies. As Timothy Nash said, ‘It was a moment to contemplate the enterprising spirit of our earliest ancestors, who had embarked on a journey toward exploration and innovation years ago.’

Moreover, the clavicle of the Australopithecus sediba was deliberately chosen: it was discovered by nine-year-old Matthew Berger, son of Lee Berger, the National Geographic Society explorer who played a major role in discovering both species and who handed the remains over to Timothy Nash for the journey. This story serves as a touching testament to the boundless potential of youth, showing us that even the young can be torchbearers in the realm of science, lighting the path of discovery with their boundless curiosity. The unearthing of Homo naledi in 2013 wasn’t just about finding bones; it was a window into our past. This ancient ancestor, with its apelike shoulders and human-like feet, hands, and brain, wasn’t just a distant relative: these were artists and inventors, leaving behind symbols and tools in their cave homes as a silent testament to their legacy. The find led to the recovery of more than 1,500 specimens in one of the biggest excavations in Africa’s history. It wasn’t just about digging up the past; it was about piecing together the jigsaw of our very essence, deepening our understanding of the roots and journey of our kind, especially in the heartland of South Africa. Each discovery, each bone, whispered tales of our shared journey: of beginnings, growth, and the undying spirit of exploration.

For those involved in the venture, the occasion was awe-inspiring, connecting our ancient roots to space exploration. Not everyone is pleased, however. The event has drawn criticism from archaeologists and palaeoanthropologists, many of whom have called it a mere publicity stunt and raised serious concerns over the risks such a flight poses to the care of the precious fossils. Critics further argued that the act was ethically wrong and lacked any concrete scientific justification.

Setting this debate aside, the episode connects the chronicles of our past with the boundless potential of humankind’s future. It celebrates the age-old quest for exploration shared across millennia. This journey, captivating in its essence, elevates space exploration to a sacred place where fossils once cradled by the Earth’s soil now dance among the stars. Like other pivotal moments in space history, it is also a compelling cue for states currently lagging in this race to embrace the possibilities of this frontier in good time. Countries like Pakistan should draw inspiration from such milestones to fervently chart their own celestial courses.

Upon their return to South Africa, the relics will be displayed in museums and other institutions, offering the public a chance to view them and draw inspiration. As we witness the rise of commercial space travel, this unique journey offers glimpses of the multifaceted nature of space exploration: one that prompts us to reflect on our past, engage actively with the present and anticipate the future that awaits us. Pakistan’s national poet Allama Iqbal captured this eloquently in one of his verses, translated as: I see my tomorrow (future) in the mirror of my yesterday (past).


Science & Technology

Artificial Intelligence and Advances in Chemistry (I)


With the advent of Artificial Intelligence in the field of chemistry, traditional methods based on experiments and physical models are gradually being supplemented with data-driven machine-learning paradigms. Ever more data representations are being developed for computer processing, adapted to statistical models that are primarily generative.

Although engineering, finance and business will greatly benefit from the new algorithms, the advantages do not stem only from algorithms. Large-scale computing has been an integral part of physical science tools for decades, and some recent advances in Artificial Intelligence have begun to change the way scientific discoveries are made.

There is great enthusiasm for the outstanding achievements in physical sciences, such as the use of machine learning to reproduce images of black holes or the contribution of AlphaFold, an AI programme developed by DeepMind (Alphabet/Google) to predict the 3D structure of proteins.

One of the main goals of chemistry is to understand matter, its properties and the changes it can undergo. For example, when looking for new superconductors, vaccines or any other material with the properties we desire, we turn to chemistry.

We traditionally think of chemistry as practised in laboratories with test tubes, Erlenmeyer flasks (graduated containers with a flat bottom, a conical body and a cylindrical neck) and gas burners. In recent years, however, it has also benefited from developments in computer science and quantum mechanics, both of which became important in the mid-20th century. Early applications included using computers to solve physics-based calculations, or to simulate chemical systems (albeit far from perfectly) by combining theoretical chemistry with computer programming. That work eventually developed into the subfield now known as computational chemistry. The field began to develop in the 1970s, and Nobel Prizes in Chemistry were awarded in 1998 to Britain’s John A. Pople for his development of computational methods in quantum chemistry, and in 2013 to Austria’s Martin Karplus, South Africa’s Michael Levitt and Israel’s Arieh Warshel for the development of multiscale models for complex chemical systems.

Indeed, although computational chemistry has gained increasing recognition in recent decades, it is far less important than laboratory experiments, which are the cornerstone of discovery.

Nevertheless, considering the current advances in Artificial Intelligence, data-centred technologies and ever-increasing amounts of data, we may be witnessing a shift whereby computational methods are used not only to assist laboratory experiments, but also to guide and orient them.

How, then, does Artificial Intelligence achieve this transformation? A particularly important development is the application of machine learning to materials discovery and molecular design, two fundamental problems in chemistry.

In traditional methods, the design of molecules is divided roughly into several stages. Each stage can take years and many resources, and success is by no means guaranteed. The phases of chemical discovery are the following: discovery, synthesis, isolation and testing, validation, approval, and commercialisation and marketing.

The discovery phase rests on theoretical frameworks developed over centuries to guide molecular design. However, when looking for “useful” materials (e.g. petroleum jelly [Vaseline], polytetrafluoroethylene [Teflon], penicillin, etc.), we must remember that many of them come from compounds commonly found in nature, and that their usefulness is often discovered only at a later stage. Targeted research, in contrast, is a more time- and resource-intensive undertaking (and even then it may be necessary to start from known “useful” compounds). To give an idea of the scale, the pharmacologically active chemical space (i.e. the number of candidate molecules) has been estimated at 10^60! Even before the testing and scaling phases, manual search in such a space is prohibitively slow and expensive. How, then, can Artificial Intelligence step in and speed up chemical discovery?

First of all, machine learning improves existing methods of simulating chemical environments. We have already mentioned that computational chemistry makes it possible to partially avoid laboratory experiments. Nevertheless, computational chemistry calculations that simulate quantum-mechanical processes face a steep trade-off between computational cost and the accuracy of the chemical simulation.

A central problem in computational chemistry is solving the equation formulated in 1926 by the physicist Erwin Schrödinger (1887-1961). Schrödinger described the behaviour of an electron orbiting the nucleus as that of a standing wave, and proposed an equation, the wave equation, to represent the wave associated with the electron. For complex molecules the task is this: given the positions of a set of nuclei and the total number of electrons, calculate the properties of interest. Exact solutions are possible only for single-electron systems; for all other systems we must rely on “good enough” approximations. Furthermore, many common methods for approximating the Schrödinger equation scale exponentially, making brute-force solutions intractable. Over time, many methods have been developed to speed up the calculations without sacrificing too much precision, yet even some of the “cheaper” methods can cause computational bottlenecks.
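For reference, the time-independent form of the equation, whose exact solution is tractable only for one-electron systems, can be written as:

```latex
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) \;=\; E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
```

where \(\Psi\) is the wave function of the \(N\) electrons, \(E\) is the total energy, and \(\hat{H}\) is the Hamiltonian operator collecting the kinetic-energy terms and the Coulomb interactions among electrons and nuclei. The exponential scaling mentioned above arises because \(\Psi\) depends on the coordinates of every electron at once.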

One way Artificial Intelligence can accelerate these calculations is by combining them with machine learning. Another approach bypasses the modelling of physical processes entirely, directly mapping molecular representations onto the desired properties. Both methods enable chemists to screen databases more efficiently for various properties, such as nuclear charge, ionisation energy, etc.
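A minimal sketch of the second approach: mapping a molecular representation directly onto a property while skipping the physics entirely. The model here is a toy nearest-neighbour average over binary fingerprints; the fingerprints and property values are entirely made up for illustration.

```python
# Toy surrogate model: predict a molecular property from a binary
# fingerprint by averaging the property of the most similar known molecules.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    on_a = {i for i, x in enumerate(a) if x}
    on_b = {i for i, x in enumerate(b) if x}
    union = len(on_a | on_b)
    return len(on_a & on_b) / union if union else 0.0

def predict(query, database, k=2):
    """Average the property value of the k most similar known molecules."""
    ranked = sorted(database, key=lambda entry: tanimoto(query, entry[0]),
                    reverse=True)
    return sum(prop for _, prop in ranked[:k]) / k

# Illustrative "database": (fingerprint, property value, e.g. an energy in eV)
db = [
    ([1, 1, 0, 0, 1], 10.5),
    ([1, 1, 0, 1, 1], 10.1),
    ([0, 0, 1, 1, 0], 8.2),
]
print(predict([1, 1, 0, 0, 0], db))  # nearest to the first two entries -> 10.3
```

Real surrogate models use far richer representations and learned regressors, but the shape of the computation is the same: representation in, property out, with no quantum mechanics in between.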

While faster calculations are an improvement, they do not solve the issue that we are still confined to known compounds, which account for only a small part of the active chemical space. We still have to manually specify the molecules we want to analyse. How can we reverse this paradigm and design an algorithm to search the chemical space and find suitable candidate substances? The answer may lie in applying generative models to molecular discovery problems.

But before addressing this topic, it is worth talking about how to represent chemical structures numerically (and what can be used for generative modelling). Many representations have been developed in recent decades, most of which fall into one of the four following categories: strings, text files, matrices and graphs.

Chemical structures can obviously be represented as matrices. Matrix representations of molecules were initially used to facilitate searches in chemical databases. In the early 2000s, however, a new matrix representation called the Extended-Connectivity Fingerprint (ECFP) was introduced. In computer science, a file’s fingerprint is an alphanumeric sequence, a string of bits of fixed length, that identifies the file by its intrinsic characteristics. The ECFP was specifically designed to capture features related to molecular activity and is often one of the first representations tried when attempting to predict molecular properties.

Chemical structure information can also be written to a text file, a common output of quantum-chemistry calculations. These text files can contain very rich information, but are generally not very useful as input for machine-learning models. String representations, on the other hand, encode a great deal of information in their syntax, which makes them particularly suitable for generative modelling, much like text generation. Finally, the graph-based representation is the most natural one: it not only lets us encode specific properties of each atom in the node embeddings, but also captures chemical bonds in the edge embeddings. Furthermore, when combined with message passing, a graph-based representation lets us interpret (and configure) the influence of one node on another through its neighbours, mirroring the way atoms in a chemical structure interact with each other. These properties make graph-based representations the preferred input for deep-learning models. (1. to be continued)
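To make these categories concrete, here are three of them side by side for one small molecule, ethanol. The string uses SMILES syntax, a widely used line notation for molecules; the specific variable names and layout are just illustrative.

```python
# Three representations of ethanol (CH3-CH2-OH), heavy atoms only:
# atom 0 = C, atom 1 = C, atom 2 = O.

smiles = "CCO"   # string representation (SMILES line notation)

# Matrix representation: adjacency matrix over heavy atoms;
# entry [i][j] = 1 if atoms i and j share a bond.
adjacency = [
    [0, 1, 0],   # C0 is bonded to C1
    [1, 0, 1],   # C1 is bonded to C0 and O2
    [0, 1, 0],   # O2 is bonded to C1
]

# Graph representation: node labels plus an edge list, the form
# consumed by message-passing (graph) neural networks.
nodes = ["C", "C", "O"]
edges = [(0, 1), (1, 2)]

# Consistency check: every edge appears symmetrically in the matrix.
assert all(adjacency[i][j] == adjacency[j][i] == 1 for i, j in edges)
print(len(nodes), len(edges))  # 3 atoms, 2 bonds
```

The adjacency matrix and the edge list carry the same connectivity; the graph form simply makes per-atom and per-bond features explicit, which is why it feeds deep-learning models so naturally.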


Science & Technology

The Artificial Intelligence which looks back to the past: The development of contemporary archaeology


In recent years the advent of Artificial Intelligence has revolutionised the field of archaeology. This cutting-edge technology is reshaping the way we discover and interpret the secrets of the past, enabling the analysis of vast amounts of data in a fraction of the time it would once have taken human researchers, years or even decades. The rise of Artificial Intelligence in archaeology not only accelerates the discovery process, but also enables us to gain new insights into history.

Archaeological research has traditionally been an extremely time-consuming process, with archaeologists excavating sites in painstaking detail. With the introduction of Artificial Intelligence, however, researchers can now process and analyse data at unprecedented speed. Machine-learning algorithms can sift through thousands of artefacts, identifying patterns and connections that humans cannot detect. This significantly reduces not only the time needed for discovery, but also the cost of exploration, enabling us to unravel the secrets of the past much faster.

Furthermore, Artificial Intelligence not only speeds up the process of archaeological research, but also improves the accuracy of discoveries. Machine learning algorithms can analyse data with an accuracy that far exceeds human capabilities. They can detect minute patterns and anomalies that might go unnoticed by humans, thus enabling more accurate and detailed interpretations of archaeological data. This increased accuracy helps to gain a deeper understanding of our history, thus providing new information about our ancestors and related civilisations.

Besides accelerating research and improving accuracy, Artificial Intelligence is opening up new avenues of exploration in archaeology. A case in point is predictive modelling: this is a technique that uses Artificial Intelligence to predict the location of archaeological sites based on patterns in existing data. It is revolutionising the way in which new sites are discovered. This method has already led to the detection of many previously unknown sites, thus expanding our knowledge of the past.
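In spirit, predictive modelling of this kind scores candidate locations by features known to correlate with past settlement. The features, weights and cells below are purely hypothetical illustrations, not any published model:

```python
# Toy predictive model: rank candidate map cells by a weighted score of
# environmental features that often correlate with ancient settlement.

WEIGHTS = {"near_water": 0.5, "gentle_slope": 0.3, "known_finds_nearby": 0.2}

def site_score(cell):
    """Score in [0, 1]: higher means a likelier archaeological site."""
    return sum(WEIGHTS[f] * cell[f] for f in WEIGHTS)

candidates = {
    "cell_A": {"near_water": 1.0, "gentle_slope": 1.0, "known_finds_nearby": 0.0},
    "cell_B": {"near_water": 0.2, "gentle_slope": 0.5, "known_finds_nearby": 1.0},
    "cell_C": {"near_water": 0.0, "gentle_slope": 0.1, "known_finds_nearby": 0.0},
}
ranked = sorted(candidates, key=lambda c: site_score(candidates[c]), reverse=True)
print(ranked[0])  # cell_A scores highest, so it would be surveyed first
```

Real systems learn the weights from labelled data (known site and non-site locations) rather than fixing them by hand, but the output is the same: a ranked survey map that tells field teams where to dig first.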

Moreover, Artificial Intelligence has been used to reconstruct historical environments and events. Using data from archaeological sites, Artificial Intelligence can generate realistic 3D models of ancient cities or simulate historical events, thus giving us unprecedented glimpses into the past. These virtual reconstructions not only provide a fascinating window into history, but also serve as valuable educational tools, thus enabling scholars, students and the public to have a first-hand experience of life in the past.

The rise of Artificial Intelligence in archaeology is undoubtedly a revolution. However, it is important to remember that Artificial Intelligence does not replace human researchers, but is rather a tool that will enhance our abilities, thus enabling us to explore our history more deeply and gain a better understanding.

As we continue to explore the AI potential in archaeology, it is clear that this technology will play an important role in better understanding the evolution of history, thus giving new prestige and lustre to research and paving the way for future discoveries.

As we move ever deeper into the digital age, Artificial Intelligence is revealing the future of archaeological discoveries, thus revolutionising the way we understand and interpret the past.

As mentioned above in terms of costs, archaeological excavations have traditionally been labour-intensive, time-consuming and often prone to human error. Studying our ancestors’ occupation soil, fossilised excrement, petrified organic waste and droppings, then recording the results and interpreting the data, can take years. The advent of Artificial Intelligence has greatly accelerated this process as well.

The role of Artificial Intelligence in archaeology is manifold. It enables archaeologists to more accurately identify potential excavation sites. By analysing large amounts of data, including geographical information, historical documents and previous archaeological finds, Artificial Intelligence can predict where important archaeological artefacts are likely to be found. This not only saves time and resources, but also reduces the potential damage to artefacts to be discovered.

In addition to predictive modelling, Artificial Intelligence is changing the way archaeological finds are analysed and interpreted. Machine learning algorithms can identify patterns and connections in data that are barely perceptible to humans. For example, Artificial Intelligence can analyse ancient pottery or pictograms in search of stylistic elements, thus identifying small similarities and differences – that would escape the human eye – and providing insights into cultural exchanges, human migration and social change.

Furthermore, Artificial Intelligence is revolutionising the way archaeological finds are preserved and displayed. Artificial Intelligence-based digital preservation technology can create detailed 3D views of artefacts, buildings and even entire archaeological sites. These models can be studied and explored virtually for a more immersive and interactive experience. This not only increases our understanding of the past, but also makes archaeology more accessible to the public.

Despite these advances, the application of Artificial Intelligence in archaeology faces challenges. The accuracy of Artificial Intelligence predictions and analyses is highly dependent on the quality and quantity of the data acquired. Incomplete or distorted data can lead to misleading results. Furthermore, although Artificial Intelligence can speed up the process of archaeological discovery, it cannot replace the meticulous understanding and interpretation that human archaeologists bring to the field.

We have to take ethical considerations into account. The use of Artificial Intelligence in archaeology raises questions about who has access to and control over archaeological data and research results. As Artificial Intelligence becomes increasingly pervasive in archaeology, it will be crucial to ensure that it is used responsibly and that the benefits are shared.

As a whole, Artificial Intelligence is leading to a new era in archaeology, thus opening up exciting possibilities for discovering, analysing and preserving the past.

As we continue to advance ever further into the digital age, we must address the challenges and ethical considerations associated with it. In this way, we can harness the AI power to deepen our understanding of human history and enrich our cultural heritage. The future of archaeological discovery lies not only in the ground, but also in the digital realm, where Artificial Intelligence will play an increasingly important role.

The future development of archaeology will see more refined and standardised methodological systems in science and technology. Trends point towards the integration of field archaeology and archaeological science, with AI technology demonstrating its strengths as its core enabling components advance internationally.

It has to be said, however, that the probability of Artificial Intelligence replacing archaeologists over the next ten years has been put at only 0.7 per cent: the work of archaeologists requires the identification of highly complex patterns, and it is not very profitable to hand this work to AI systems not yet specialised for such a demanding task. It is unlikely that companies or governments will make the investment necessary to automate archaeological tasks, at least in the short term. (1. to be continued)

