Zero-emission cars fuelled by hydrogen and computer chips that mimic the human brain are among the technological breakthroughs recognized as the Top 10 Emerging Technologies of 2015.
The WEF’s Meta-Council on Emerging Technologies compiles the list each year to raise awareness of the technologies its members believe possess the greatest potential for addressing chronic global challenges. The purpose is also to initiate debate on any human, societal, economic or environmental risks the technologies pose, with the aim of addressing concerns before adoption becomes widespread.
This year’s list offers a glimpse of the power of innovation to improve lives, transform industries and safeguard the planet:
1. Fuel cell vehicles
Zero-emission cars that run on hydrogen
“Fuel cell” vehicles have long been promised, as they potentially offer several major advantages over electric and hydrocarbon-powered vehicles. However, the technology has only now begun to reach the stage where automotive companies are planning to launch them for consumers. Initial prices are likely to be in the range of $70,000, but should come down significantly as volumes increase over the next couple of years.
Unlike batteries, which must be charged from an external source, fuel cells generate electricity directly, using fuels such as hydrogen or natural gas. In practice, fuel cells and batteries are combined, with the fuel cell generating electricity and the batteries storing this energy until demanded by the motors that drive the vehicle. Fuel cell vehicles are therefore hybrids, and will likely also deploy regenerative braking – a key capability for maximizing efficiency and range.
Unlike battery-powered electric vehicles, fuel cell vehicles behave like any conventionally fuelled vehicle. They have a long cruising range – up to 650 km per tank (the fuel is usually compressed hydrogen gas) – and a hydrogen refill takes only about three minutes. Hydrogen is clean-burning, producing only water vapour as waste, so fuel cell vehicles running on hydrogen are zero-emission, an important factor given the need to reduce air pollution.
There are a number of ways to produce hydrogen without generating carbon emissions. Most obviously, renewable sources of electricity from wind and solar sources can be used to electrolyse water – though the overall energy efficiency of this process is likely to be quite low. Hydrogen can also be split from water in high-temperature nuclear reactors or generated from fossil fuels such as coal or natural gas, with the resulting CO2 captured and sequestered rather than released into the atmosphere.
As well as the production of cheap hydrogen on a large scale, a significant challenge is the lack of a hydrogen distribution infrastructure that would be needed to parallel and eventually replace petrol and diesel filling stations. Long distance transport of hydrogen, even in a compressed state, is not considered economically feasible today. However, innovative hydrogen storage techniques, such as organic liquid carriers that do not require high-pressure storage, will soon lower the cost of long-distance transport and ease the risks associated with gas storage and inadvertent release.
Mass-market fuel cell vehicles are an attractive prospect, because they will offer the range and fuelling convenience of today’s diesel and petrol-powered vehicles while providing the benefits of sustainability in personal transportation. Achieving these benefits will, however, require the reliable and economical production of hydrogen from entirely low-carbon sources, and its distribution to a growing fleet of vehicles (expected to number in the many millions within a decade).
2. Next-generation robotics
Rolling away from the production line
The popular imagination has long foreseen a world where robots take over all manner of everyday tasks.
This robotic future has stubbornly refused to materialize, however, with robots still limited to factory assembly lines and other controlled tasks. Although heavily used (in the automotive industry, for instance) these robots are large and dangerous to human co-workers; they have to be separated by safety cages.
Advances in robotics technology are making human-machine collaboration an everyday reality. Better and cheaper sensors make a robot more able to understand and respond to its environment. Robot bodies are becoming more adaptive and flexible, with designers taking inspiration from the extraordinary flexibility and dexterity of complex biological structures, such as the human hand. And robots are becoming more connected, benefiting from the cloud-computing revolution by being able to access instructions and information remotely, rather than having to be programmed as a fully autonomous unit.
The new age of robotics takes these machines away from the big manufacturing assembly lines, and into a wide variety of tasks. Using GPS technology, just like smartphones, robots are beginning to be used in precision agriculture for weed control and harvesting. In Japan, robots are being trialled in nursing roles: they help patients out of bed and support stroke victims in regaining control of their limbs. Smaller and more dextrous robots, such as Dexter Bot, Baxter and LBR iiwa, are designed to be easily programmable and to handle manufacturing tasks that are laborious or uncomfortable for human workers.
Indeed, robots are ideal for tasks that are too repetitive or dangerous for humans to undertake, and can work 24 hours a day at a lower cost than human workers. In reality, new-generation robotic machines are likely to collaborate with humans rather than replace them. Even considering advances in design and artificial intelligence, human involvement and oversight will remain essential.
There remains the risk that robots may displace human workers from jobs, although previous generations of automation have tended to lead to higher productivity and growth, with benefits throughout the economy. Decades-old fears of networked robots running out of control may become more salient as next-generation robotics is linked into the web – but, more likely, familiarisation as people employ domestic robots for household chores will reduce such fears rather than fan them. And new research into social robots – which know how to collaborate and build working alliances with humans – suggests that a future where robots and humans work together, each doing what it does best, is a strong likelihood. Nevertheless, the next generation of robotics poses novel questions for fields from philosophy to anthropology about the human relationship to machines.
3. Recyclable thermoset plastics
A new kind of plastic to cut landfill waste
Plastics are divided into thermoplastics and thermoset plastics. The former can be heated and shaped many times, and are ubiquitous in the modern world, found in everything from children’s toys to lavatory seats. Because they can be melted down and reshaped, thermoplastics are generally recyclable. Thermoset plastics, however, can only be heated and shaped once, after which molecular changes mean that they are “cured”, retaining their shape and strength even when subjected to intense heat and pressure.
Due to this durability, thermoset plastics are a vital part of our modern world, and are used in everything from mobile phones and circuit boards to the aerospace industry. But the same characteristics that have made them essential in modern manufacturing also make them impossible to recycle. As a result, most thermoset polymers end up as landfill. Given the ultimate objective of sustainability, there has long been a pressing need for recyclability in thermoset plastics.
In 2014 critical advances were made in this area, with the publication of a landmark paper in the journal Science announcing the discovery of new classes of thermosetting polymers that are recyclable. Called poly(hexahydrotriazine)s, or PHTs, these can be dissolved in strong acid, breaking apart the polymer chains into component monomers that can then be reassembled into new products. Like traditional unrecyclable thermosets, these new structures are rigid, resistant to heat and tough, with the same potential applications as their unrecyclable forerunners.
Although no recycling is 100% efficient, this innovation – if widely deployed – should speed up the move towards a circular economy with a big reduction in landfill waste from plastics. We expect recyclable thermoset polymers to replace unrecyclable thermosets within five years, and to be ubiquitous in newly manufactured goods by 2025.
4. Precise genetic-engineering techniques
A breakthrough offers better crops with less controversy
Conventional genetic engineering has long caused controversy. However, new techniques are emerging that allow us to directly “edit” the genetic code of plants to make them, for example, more nutritious or better able to cope with a changing climate.
Currently, the genetic engineering of crops relies on the bacterium Agrobacterium tumefaciens to transfer desired DNA into the target genome. The technique is proven and reliable, and despite widespread public fears, there is a consensus in the scientific community that genetically modifying organisms using this technique is no more risky than modifying them through conventional breeding. However, while Agrobacterium is useful, more precise and varied genome-editing techniques have been developed in recent years.
These include ZFNs, TALENs and, more recently, the CRISPR-Cas9 system, which evolved in bacteria as a defence mechanism against viruses. The CRISPR-Cas9 system uses an RNA molecule to target DNA, cutting the genome at a known, user-selected sequence. This can disable an unwanted gene or modify it in a way that is functionally indistinguishable from a natural mutation. Using “homologous recombination”, CRISPR can also be used to insert new DNA sequences, or even whole genes, into the genome in a precise way.
Another aspect of genetic engineering that appears poised for a major advance is the use of RNA interference (RNAi) in crops. RNAi is effective against viruses and fungal pathogens, and can also protect plants against insect pests, reducing the need for chemical pesticides. Viral genes have been used to protect papaya plants against the ringspot virus, for example, with no sign of resistance evolving in over a decade of use in Hawaii. RNAi may also benefit major staple-food crops, protecting wheat against stem rust, rice against blast, potato against blight and banana against fusarium wilt.
Many of these innovations will be particularly beneficial to smaller farmers in developing countries. As such, genetic engineering may become less controversial, as people recognize its effectiveness at boosting the incomes and improving the diets of millions of people. In addition, more precise genome editing may allay public fears, especially if the resulting plant or animal is not considered transgenic because no foreign genetic material is introduced.
Taken together, these techniques promise to advance agricultural sustainability by reducing input use in multiple areas, from water and land to fertilizer, while also helping crops to adapt to climate change.
5. Additive manufacturing
The future of making things, from printable organs to intelligent clothes
As the name suggests, additive manufacturing is the opposite of subtractive manufacturing. The latter is how manufacturing has traditionally been done: starting with a larger piece of material (wood, metal, stone, etc), layers are removed, or subtracted, to leave the desired shape. Additive manufacturing instead starts with loose material, either liquid or powder, and then builds it into a three-dimensional shape using a digital template.
3D products can be highly customized to the end user, unlike mass-produced manufactured goods. An example is the company Invisalign, which uses computer imaging of customers’ teeth to make near-invisible braces tailored to their mouths. Other medical applications are taking 3D printing in a more biological direction: by directly printing human cells, it is now possible to create living tissues that may find potential application in drug safety screening and, ultimately, tissue repair and regeneration. An early example of this bioprinting is Organovo’s printed liver-cell layers, which are aimed at drug testing, and may eventually be used to create transplant organs. Bioprinting has already been used to generate skin and bone, as well as heart and vascular tissue, which offer huge potential in future personalized medicine.
An important next stage in additive manufacturing would be the 3D printing of integrated electronic components, such as circuit boards. Nano-scale computer parts, like processors, are difficult to manufacture this way because of the challenges of combining electronic components with others made from multiple different materials. 4D printing now promises to bring in a new generation of products that can alter themselves in response to environmental changes, such as heat and humidity. This could be useful in clothes or footwear, for example, as well as in healthcare products, such as implants designed to change in the human body.
Like distributed manufacturing, additive manufacturing is potentially highly disruptive to conventional processes and supply chains. But it remains a nascent technology today, with applications mainly in the automotive, aerospace and medical sectors. Rapid growth is expected over the next decade as more opportunities emerge and innovation in this technology brings it closer to the mass market.
6. Emergent artificial intelligence
What happens when a computer can learn on the job?
Artificial intelligence (AI) is, in simple terms, the science of doing by computer the things that people can do. Over recent years, AI has advanced significantly: most of us now use smartphones that can recognize human speech, or have travelled through an airport immigration queue using image-recognition technology. Self-driving cars and automated flying drones are now in the testing stage before anticipated widespread use, while for certain learning and memory tasks, machines now outperform humans. Watson, IBM’s artificially intelligent computer system, beat the best human contestants at the quiz show Jeopardy!
Artificial intelligence, in contrast to normal hardware and software, enables a machine to perceive and respond to its changing environment. Emergent AI takes this a step further, with progress arising from machines that learn automatically by assimilating large volumes of information. An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future.
Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over – and even perform better – at certain tasks than humans. There is substantial evidence that self-driving cars will reduce collisions, and resulting deaths and injuries, from road transport, as machines avoid human errors, lapses in concentration and defects in sight, among other problems. Intelligent machines, having faster access to a much larger store of information, and able to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases. The Watson system is now being deployed in oncology to assist in diagnosis and personalized, evidence-based treatment options for cancer patients.
Long the stuff of dystopian sci-fi nightmares, AI clearly comes with risks – the most obvious being that super-intelligent machines might one day overcome and enslave humans. This risk, while still decades away, is taken increasingly seriously by experts, many of whom signed an open letter coordinated by the Future of Life Institute in January 2015 to direct the future of AI away from potential pitfalls. More prosaically, economic changes prompted by intelligent computers replacing human workers may exacerbate social inequalities and threaten existing jobs. For example, automated drones may replace most human delivery drivers, and self-driven short-hire vehicles could make taxis increasingly redundant.
On the other hand, emergent AI may make attributes that are still exclusively human – creativity, emotions, interpersonal relationships – more clearly valued. As machines grow in human intelligence, this technology will increasingly challenge our view of what it means to be human, as well as the risks and benefits posed by the rapidly closing gap between man and machine.
7. Distributed manufacturing
The factory of the future is online – and on your doorstep
Distributed manufacturing turns on its head the way we make and distribute products. In traditional manufacturing, raw materials are brought together, assembled and fabricated in large centralized factories into identical finished products that are then distributed to the customer. In distributed manufacturing, the raw materials and methods of fabrication are decentralized, and the final product is manufactured very close to the final customer.
In essence, the idea of distributed manufacturing is to replace as much of the material supply chain as possible with digital information. To manufacture a chair, for example, rather than sourcing wood and fabricating it into chairs in a central factory, digital plans for cutting the parts of a chair can be distributed to local manufacturing hubs using computerized cutting tools known as CNC routers. Parts can then be assembled by the consumer or by local fabrication workshops that can turn them into finished products. One company already using this model is the US furniture company AtFAB.
Current uses of distributed manufacturing rely heavily on the DIY “maker movement”, in which enthusiasts use their own local 3D printers and make products out of local materials. There are elements of open-source thinking here, in that consumers can customize products to their own needs and preferences. Instead of being centrally driven, the creative design element can be more crowdsourced; products may take on an evolutionary character as more people get involved in visualizing and producing them.
Distributed manufacturing is expected to enable a more efficient use of resources, with less wasted capacity in centralized factories. It also lowers the barriers to market entry by reducing the amount of capital required to build the first prototypes and products. Importantly, it should reduce the overall environmental impact of manufacturing: digital information is shipped over the web rather than physical products over roads or rails, or on ships; and raw materials are sourced locally, further reducing the amount of energy required for transportation.
If it becomes more widespread, distributed manufacturing will disrupt traditional labour markets and the economics of traditional manufacturing. It does pose risks: it may be more difficult to regulate and control remotely manufactured medical devices, for example, while products such as weapons may be illegal or dangerous. Not everything can be made via distributed manufacturing, and traditional manufacturing and supply chains will still have to be maintained for many of the most important and complex consumer goods.
Distributed manufacturing may encourage broader diversity in objects that are today standardized, such as smartphones and automobiles. Scale is no object: one UK company, Facit Homes, uses personalized designs and 3D printing to create customized houses to suit the consumer. Product features will evolve to serve different markets and geographies, and there will be a rapid proliferation of goods and services to regions of the world not currently well served by traditional manufacturing.
8. ‘Sense and avoid’ drones
Flying robots to check power lines or deliver emergency aid
Unmanned aerial vehicles, or drones, have become an important and controversial part of military capacity in recent years. They are also used in agriculture, for filming and multiple other applications that require cheap and extensive aerial surveillance. But so far all these drones have had human pilots; the difference is that their pilots are on the ground and fly the aircraft remotely.
The next step with drone technology is to develop machines that fly themselves, opening them up to a wider range of applications. For this to happen, drones must be able to sense and respond to their local environment, altering their height and flying trajectory in order to avoid colliding with other objects in their path. In nature, birds, fish and insects can all congregate in swarms, each animal responding to its neighbour almost instantaneously to allow the swarm to fly or swim as a single unit. Drones can emulate this.
With reliable autonomy and collision avoidance, drones can begin to take on tasks too dangerous or remote for humans to carry out: checking electric power lines, for example, or delivering medical supplies in an emergency. Drone delivery machines will be able to find the best route to their destination, and take into account other flying vehicles and obstacles. In agriculture, autonomous drones can collect and process vast amounts of visual data from the air, allowing precise and efficient use of inputs such as fertilizer and irrigation.
In January 2014, Intel and Ascending Technologies showcased prototype multi-copter drones that could navigate an on-stage obstacle course and automatically avoid people who walked into their path. The machines use Intel’s RealSense camera module, which weighs just 8g and is less than 4mm thick. This level of collision avoidance will usher in a future of shared airspace, with many drones flying in proximity to humans and operating in and near the built environment to perform a multitude of tasks. Drones are essentially robots operating in three, rather than two, dimensions; advances in next-generation robotics technology will accelerate this trend.
Flying vehicles will never be risk-free, whether operated by humans or as intelligent machines. For widespread adoption, sense and avoid drones must be able to operate reliably in the most difficult conditions: at night, in blizzards or dust storms. Unlike our current digital mobile devices (which are actually immobile, since we have to carry them around), drones will be transformational as they are self-mobile and have the capacity of flying in the three-dimensional world that is beyond our direct human reach. Once ubiquitous, they will vastly expand our presence, productivity and human experience.
9. Neuromorphic technology
Computer chips that mimic the human brain
Even today’s best supercomputers cannot rival the sophistication of the human brain. Computers are linear, moving data back and forth between memory chips and a central processor over a high-speed backbone. The brain, on the other hand, is fully interconnected, with logic and memory intimately cross-linked at billions of times the density and diversity of that found in a modern computer. Neuromorphic chips aim to process information in a fundamentally different way from traditional hardware, mimicking the brain’s architecture to deliver a huge increase in a computer’s thinking and responding power.
Miniaturization has delivered massive increases in conventional computing power over the years, but the bottleneck of shifting data constantly between stored memory and central processors uses large amounts of energy and creates unwanted heat, limiting further improvements. In contrast, neuromorphic chips can be more energy efficient and powerful, combining data-storage and data-processing components into the same interconnected modules. In this sense, the system copies the networked neurons that, in their billions, make up the human brain.
Neuromorphic technology will be the next stage in powerful computing, enabling vastly more rapid processing of data and a better capacity for machine learning. IBM’s million-neuron TrueNorth chip, revealed in prototype in August 2014, has a power efficiency for certain tasks that is hundreds of times superior to a conventional CPU (Central Processing Unit), and more comparable for the first time to the human cortex. With vastly more compute power available for far less energy and volume, neuromorphic chips should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence.
Potential applications include: drones better able to process and respond to visual cues, much more powerful and intelligent cameras and smartphones, and data-crunching on a scale that may help unlock the secrets of financial markets or climate forecasting. Computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways.
10. Digital genome
Healthcare for an age when your genetic code is on a USB stick
While the first sequencing of the 3.2 billion base pairs of DNA that make up the human genome took many years and cost tens of millions of dollars, today your genome can be sequenced and digitized in minutes and at the cost of only a few hundred dollars. The results can be delivered to your laptop on a USB stick and easily shared via the internet. This ability to rapidly and cheaply determine our individual unique genetic make-up promises a revolution in more personalized and effective healthcare.
Many of our most intractable health challenges, from heart disease to cancer, have a genetic component. Indeed, cancer is best described as a disease of the genome. With digitization, doctors will be able to make decisions about a patient’s cancer treatment informed by a tumour’s genetic make-up. This new knowledge is also making precision medicine a reality by enabling the development of highly targeted therapies that offer the potential for improved treatment outcomes, especially for patients battling cancer.
Like all personal information, a person’s digital genome will need to be safeguarded for privacy reasons. Personal genomic profiling has already raised challenges, with regard to how people respond to a clearer understanding of their risk of genetic disease, and how others – such as employers or insurance companies – might want to access and use the information. However, the benefits are likely to outweigh the risks, because individualized treatments and targeted therapies can be developed with the potential to be applied across all the many diseases that are driven or assisted by changes in DNA.
Interesting archaeological discovery in Israel
A scarab dating back three thousand years was unexpectedly discovered during a school trip to Azor, near Tel Aviv, Israel. The scene depicted on the scarab probably represents the conferral of legitimate power and authority on a local ruler.
“We were wandering around when I saw something that looked like a small toy on the ground,” said Gilad Stern of the Education Centre of the Israel Antiquities Authority, who was leading the school trip. “An inner voice told me: ‘Pick it up and turn it over.’ I was amazed: it was a scarab with a clearly engraved scene, the dream of every amateur archaeologist. The pupils were really enthusiastic!”
The visit by the Rabin Middle School eighth graders took place as part of a tour-guide course organised by the Education Centre of the Israel Antiquities Authority for the third consecutive year. The course trains students to teach the residents of Azor about their local archaeological heritage.
The scarab was designed in the shape of the common dung beetle. The ancient Egyptians saw in the behaviour of this tiny beetle, which rolls a ball of dung twice its own size in which it stores its future offspring, the embodiment of creation and regeneration, akin to that of the Creator God.
According to Dr. Amir Golani, an expert on the Bronze Age at the Israel Antiquities Authority, “the scarab was used as a seal and was a symbol of power and status. It could be set into a necklace or a ring. It is made of silicate earthenware covered with a bluish-green glaze. It could have fallen from the hands of an important and influential figure passing through the area, or it could have been deliberately buried in the ground with other objects, returning to the surface after thousands of years. It is difficult to determine the precise original context.”
The lower, flat part of the scarab seal depicts a figure seated on a chair in front of a standing figure, whose arm is raised above that of the seated person. The standing figure has an elongated head, which seems to represent the crown of an Egyptian pharaoh. It is possible that we are seeing here a snapshot of a scene in which the Egyptian pharaoh confers power and authority on a local Canaanite.
“This scene fundamentally reflects the geopolitical reality that prevailed in the Land of Canaan during the Late Bronze Age (approx. 1500-1000 BC), when local Canaanite rulers lived under Egypt’s political and cultural hegemony (and sometimes rebelled against it)” – said Dr. Golani. “It is therefore very likely that the seal dates back to the Late Bronze Age, when the local Canaanites were ruled by the Egyptian Empire”.
Scarab seals are indeed distinctly Egyptian, but their use extended well beyond the borders of ancient Egypt. Hundreds of scarabs have been discovered in the land of ancient Israel, mostly in tombs but also in settlement layers. Some were imported from Egypt; many others were made in ancient Israel by local craftsmen working under Egyptian influence. The workmanship of this particular scarab is not typical of Egypt, so it may be the product of local craftsmen.
Towards Efficient Matrix Multiplication
Algorithms have, over the years, helped mathematicians and scientists carry out numerous fundamental operations. From the early use of simple algorithms by Egyptian, Greek and Persian mathematicians to the shift towards more robust AI-enabled algorithms, their evolution reflects incredible progress in the technological realm. While Artificial Intelligence (AI) and Machine Learning (ML) are extending their reach into various military and civilian domains, it is interesting to see the technology applied to itself, i.e., using ML to improve the effectiveness of its own underlying algorithms.
Despite growing familiarity with algorithms over time, it remains difficult to find new ones that are reliable and accurate. ‘Discovering faster matrix multiplication algorithms with reinforcement learning,’ a recent study by DeepMind, the London-based AI subsidiary of Alphabet, published in Nature, demonstrated some interesting findings in this regard. It revealed new AI-discovered shortcuts for faster mathematical calculations in matrix multiplication.
DeepMind developed an AI system called ‘AlphaTensor’ to expedite matrix multiplication. Matrix multiplication – in which two grids of numbers are combined to produce a third – is a simple algebraic operation often taught in high school. Yet its ubiquitous use in the digital world gives it considerable influence on computing.
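By way of illustration, the textbook method for multiplying two n × n matrices performs n³ scalar multiplications – the cost that faster algorithms try to reduce. A minimal Python sketch (the function name and the multiplication counter are ours, for illustration):

```python
def matmul_naive(A, B):
    """Textbook matrix multiplication: C[i][j] = sum over k of A[i][k] * B[k][j].

    Returns the product together with the number of scalar multiplications
    performed, which is n * n * n for two n x n matrices.
    """
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

# Two 4 x 4 matrices cost 4 * 4 * 4 = 64 scalar multiplications this way.
```

Counting multiplications (rather than additions) is the standard yardstick here, because multiplications dominate the cost and determine how fast recursive schemes scale.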
‘AlphaTensor’ was tasked with creating novel, correct and efficient algorithms that carry out matrix multiplication in the fewest steps possible. The algorithm-discovery process was treated as a single-player game. The system builds on AlphaZero – the same AI agent that gained global attention when it displayed extraordinary skill at board games such as chess and Go.
AlphaTensor represents the game board as a 3D array of numbers and, through a limited number of moves, tries to find correct multiplication algorithms. It uses reinforcement learning, in which neural networks interact with the environment in pursuit of a specific goal and update their internal parameters when the results are favourable. It also uses tree search, in which the system explores the results of branching possibilities to choose the next action, seeking to identify the most promising move at each step. The outcomes are used to sharpen the neural networks, further improving the tree search and providing more successes to learn from.
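Concretely, in the paper each move of the game subtracts a rank-1 tensor from the matrix-multiplication tensor; when the remainder reaches zero, the moves taken form a provably correct algorithm, and the number of moves equals the number of scalar multiplications. A sketch under our own naming conventions, which verifies that Strassen’s seven classic “moves” exactly rebuild the 2 × 2 multiplication tensor:

```python
import itertools

def matmul_tensor(n):
    """The n^2 x n^2 x n^2 matrix-multiplication tensor:
    T[i*n + k][k*n + j][i*n + j] = 1, encoding C[i][j] += A[i][k] * B[k][j]."""
    n2 = n * n
    T = [[[0] * n2 for _ in range(n2)] for _ in range(n2)]
    for i, j, k in itertools.product(range(n), repeat=3):
        T[i * n + k][k * n + j][i * n + j] = 1
    return T

def rank_one_sum(factors, n):
    """Sum of rank-1 tensors u (x) v (x) w; each (u, v, w) triple is one
    'move' in the game and one scalar multiplication in the final algorithm."""
    n2 = n * n
    S = [[[0] * n2 for _ in range(n2)] for _ in range(n2)]
    for u, v, w in factors:
        for a, b, c in itertools.product(range(n2), repeat=3):
            S[a][b][c] += u[a] * v[b] * w[c]
    return S

# Strassen's seven moves for 2 x 2 matrices, as (u, v, w) triples over
# vec(A) = [a, b, c, d] and vec(B) = [e, f, g, h]:
# m_r = (u_r . vec(A)) * (v_r . vec(B)), then C += m_r * w_r.
STRASSEN = [
    ([1, 0, 0, 1], [1, 0, 0, 1], [1, 0, 0, 1]),   # m1 = (a + d)(e + h)
    ([0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 1, -1]),  # m2 = (c + d)e
    ([1, 0, 0, 0], [0, 1, 0, -1], [0, 1, 0, 1]),  # m3 = a(f - h)
    ([0, 0, 0, 1], [-1, 0, 1, 0], [1, 0, 1, 0]),  # m4 = d(g - e)
    ([1, 1, 0, 0], [0, 0, 0, 1], [-1, 1, 0, 0]),  # m5 = (a + b)h
    ([-1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]),  # m6 = (c - a)(e + f)
    ([0, 1, 0, -1], [0, 0, 1, 1], [1, 0, 0, 0]),  # m7 = (b - d)(g + h)
]

# Seven moves bring the residual tensor to zero: 7 multiplications suffice.
assert rank_one_sum(STRASSEN, 2) == matmul_tensor(2)
```

AlphaTensor plays exactly this game for larger tensors, where the space of possible moves is astronomically large and good move selection is what the neural network learns.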
As per the paper's findings, AlphaTensor discovered thousands of algorithms for multiplying matrices of various sizes, some of which broke decades-old efficiency records held by existing algorithms, including the best-known variants of Strassen's algorithm. For example, AlphaTensor found an algorithm that multiplies two 4 × 4 matrices in 47 multiplications, outperforming the two-level Strassen algorithm, which uses 49 multiplications for the same operation. Similarly, for a case that previously required 80 multiplication steps, AlphaTensor reduced the count to only 76. This development has caused quite a stir in the tech world, as a fifty-year-old record in computer science is claimed to have been broken.
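To see where the 49 in this comparison comes from, here is a small arithmetic sketch (the function name is ours): applying Strassen's trick recursively replaces 8 block products per level with 7, so two levels on a 4 × 4 matrix yield 7² = 49 scalar multiplications, the count AlphaTensor's 47-step algorithm beat.

```python
# Multiplying two (2**k x 2**k) matrices block-wise uses 8 block products
# per recursion level with the schoolbook method, but only 7 with
# Strassen's trick, so the totals are 8**k versus 7**k.
def scalar_mults(levels, mults_per_level):
    """Scalar multiplications after `levels` rounds of block recursion."""
    return mults_per_level ** levels

print(scalar_mults(2, 7))   # 49: two-level Strassen on 4 x 4 matrices
print(scalar_mults(2, 8))   # 64: the schoolbook count (4**3)
```

The gap widens with size – 7^k grows far more slowly than 8^k – which is why even a saving of two multiplications at the base case compounds into large gains on big matrices.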
The episode also carries important implications. Given that matrix multiplication is a core component of the digital world, companies around the world have invested considerable time and resources in computer hardware for it. Since the operation is used across a wide range of domains – computing, image processing, graphics generation, simulations, digital communication and neural networks, to name a few – even minor improvements in its efficiency could have a notable and widespread impact on the fields concerned.
The findings demonstrate the potential of ML to tackle even more complicated mathematical problems. The automatic discovery of algorithms via ML offers a way to surpass the best existing human-designed algorithms; the discovered techniques are reported to increase computing speed by as much as 20 per cent on certain hardware, leading to much more feasible timelines. It is pertinent to note that fewer operations mean not only less time but also less energy spent.
The finding presents a model for gamifying ML to solve mathematical problems. It shows that AlphaZero is a potent algorithm that can be applied beyond winning traditional games to solving complex mathematical tasks.
This DeepMind discovery can pave the way for future research into matrix multiplication algorithms, inspire the use of AI for algorithm discovery in other computing tasks, and set the stage for possible breakthroughs in the field.
The increased efficiency of matrix multiplication has once again brought to light the ever-expanding potential of AI. To be fair, such developments do not imply that human programmers will be out of a job soon; rather, at least for now, they should be seen as adding an optimisation tool to the coder's arsenal, one that could lead to more innovative discoveries with remarkable implications for the world.
Kissinger on the current situation, in light of the development of Artificial Intelligence and the Ukrainian crisis
Kissinger has recently published some reflections on the course of world politics in recent decades, with references to the return of twentieth-century conflicts brought about by the development of new weaponry and by strategic scenarios mediated by Artificial Intelligence. He has also addressed the situation in Ukraine and the balance between the United States, Russia and China.
Kissinger has stated that instant communication and the technological revolution have combined to provide new meaning and urgency to two crucial issues that leaders must address:
1) what is essential for national security?
2) what is necessary for peaceful international coexistence?
Although many empires have existed, aspirations to world order were confined by geography and technology to specific regions. This was true even of the Roman and Chinese empires, which encompassed a wide range of societies and cultures: they were regional orders that evolved in parallel, each perceiving itself as a world order.
From the 16th century onwards, developments in technology, medicine and economic and political organisation expanded Europe's ability to project its power and systems of government around the world. From the mid-17th century, the Westphalian system was based on respect for sovereignty and international law. That system later took root throughout the world and, after the end of traditional colonialism, led to the emergence of States which – though largely abandoned, at least formally, by their former motherlands – insisted on defining, and even defying, the rules of the established world order; at least those countries that truly freed themselves from imperialist domination, such as the People's Republic of China and the Democratic People's Republic of Korea.
Since the end of World War II, mankind has lived in a delicate balance between relative security and legitimacy. In no previous period of history would the consequences of an error in this balance have been more severe or catastrophic. The contemporary age has introduced a level of destructiveness that potentially enables mankind to destroy itself. Advanced systems of mutual destruction were aimed not at ultimate victory but at deterring an attack by others.
This is why, shortly after the Japanese nuclear tragedy of 1945, the consequences of deploying nuclear weapons came to be seen as incalculable, and security came to rest on the certainty of deterrence rather than on actual use.
For seventy-six years (1946-2022), while advanced weapons grew in power, complexity and accuracy, no country could bring itself to actually use them, even in conflicts with non-nuclear countries. Both the United States of America and the Soviet Union accepted defeat at the hands of non-nuclear countries without resorting to their own most lethal weapons: as in the cases of the Korean War, Vietnam and Afghanistan (for both the Soviets and the Americans).
To this day, such nuclear dilemmas have not disappeared, but have instead changed as more States have developed more refined weapons than the “nuclear bomb” and the essentially bipolar distribution of destructive capabilities of the former Cold War has been replaced by very high-tech options – a topic addressed in my various articles.
Cyber weapons and artificial intelligence applications (such as autonomous weapon systems) greatly complicate the current dangerous war prospects. Unlike nuclear weapons, cyber weapons and artificial intelligence are ubiquitous, relatively inexpensive to develop and easy to use.
Cyber weapons combine the capacity for massive impact with the ability to obscure the attribution of attacks, which is crucial when the attacker is no longer a clearly identifiable actor but becomes a riddle.
As we have often pointed out, artificial intelligence can also overcome the need for human operators, and enable weapons to launch themselves based on their own calculations and their ability to choose targets with almost absolute precision and accuracy.
Because the threshold for their use is so low and their destructive ability so great, the use of such weapons – or even their mere threat – can turn a crisis into a war or turn a limited war into a nuclear war through unintentional or uncontrollable escalation. To put it in simple terms, there will no longer be the need to drop the “bomb” first, as it would be downgraded to a weapon of retaliation against possible and not certain enemies. On the contrary, with the help of artificial intelligence, third parties could make sure that the first cyber-attack is attributed to those who have never attacked.
The cataclysmic potential of this technology means that its application cannot be calibrated, making any doctrine for its limited use unmanageable.
No diplomacy has yet been invented that can explicitly threaten the use of such weapons without risking a pre-emptive response – so much so that arms-control summits seem to have been overtaken by these uncontrollable novelties, ranging from unmarked drone attacks to cyberattacks launched from the depths of the Net.
Technological developments are currently accompanied by a political transformation. Today we are witnessing the resurgence of rivalry between the great powers, amplified by the spread and advancement of surprising technologies. When in the early 1970s the People’s Republic of China embarked on its re-entry into the international diplomatic system at the initiative of Zhou Enlai and, at the end of that decade, on its full re-entry into the international arena thanks to Deng Xiaoping, its human and economic potential was vast, but its technology and actual power were relatively limited.
Meanwhile, China's growing economic and strategic capabilities have forced the United States of America to confront – for the first time in its history – a geopolitical competitor whose resources are potentially comparable to its own.
Each side sees itself as a unicum, but in a different way. The United States of America acts on the assumption that its values are universally applicable and will eventually be adopted everywhere. The People’s Republic of China, instead, expects that the uniqueness of its ultra-millennial civilisation and the impressive economic leap forward will inspire other countries to emulate it to break free from imperialist domination and show respect for Chinese priorities.
Both the US "manifest destiny" missionary impulse and the Chinese sense of grandeur and cultural eminence – of China as such, including Taiwan – imply a mutual fear of subordination. Given the nature of their economies and high technology, each country is impinging on what the other has so far considered its core interests.
In the 21st century China seems to have embarked on playing an international role to which it considers itself entitled by its achievements over the millennia. The United States of America, on the other hand, is taking action to project power, purpose, and diplomacy around the world to maintain a global equilibrium established in its post-war experience, responding to tangible and imagined challenges to this world order.
For the leadership on both sides, these security requirements seem self-evident. They are supported by their respective citizens. Yet security is only part of the wide picture. The fundamental issue for the planet’s existence is whether the two giants can learn to combine the inevitable strategic rivalry with a concept and practice of coexistence.
Russia – unlike the United States of America and China – lacks comparable market power, demographic clout and a diversified industrial base.
Spanning eleven time zones and enjoying few natural defensive demarcations, Russia has acted according to its own geographical and historical imperatives. Its foreign policy expresses a mystical patriotism in the Third Rome imperial mould, with a lingering perception of insecurity stemming essentially from the country's long-standing vulnerability to invasion across the plains of Eastern Europe.
For centuries, its leaders, from Peter the Great to Stalin – who, incidentally, was not even Russian, but felt himself to be so in the internationalist spirit that led to the creation of the USSR on 30 December 1922 – have sought to insulate Russia's vast territory with a security belt imposed along its extended borders. Today, Kissinger tells us, the same priority manifests itself once again in the attack on Ukraine – and, we would add, few people understand this and many others pretend not to.
The mutual impact of these societies has been shaped by their strategic assessments, which stem from their history. The Ukrainian conflict is a case in point. After the dissolution of the Warsaw Pact, and the transformation of its Member States (Bulgaria, Czechoslovakia, the German Democratic Republic, Poland, Romania and Hungary) into "Western" countries, the whole territory – from the security line established in central Europe up to Russia's national border – opened up to a new strategic design. Stability had depended on the fact that the Warsaw Pact itself – especially after the Conference on Security and Cooperation in Europe held in Helsinki in 1975 – allayed Europe's traditional fears of Russian domination (indeed, Soviet domination, at the time), and assuaged Russia's traditional concerns about Western offensives, from the Swedes through Napoleon to Hitler. Hence the strategic geography of Ukraine embodies these concerns re-emerging in Russia. If Ukraine were to join NATO, the security line between Russia and the West would lie within just over 500 kilometres of Moscow, effectively eliminating the traditional buffer that saved Russia when Sweden, France and Germany tried to occupy it in previous centuries.
If the security border were to be established on the Western side of Ukraine, Russian forces would be within easy reach of Budapest and Warsaw. The February 2022 invasion of Ukraine is a flagrant violation of the international law mentioned above, and is thus largely a consequence of a failed or otherwise inadequately undertaken strategic dialogue. The experience of two nuclear entities confronting each other militarily – although not resorting to their destructive weapons – underlines the urgency of the fundamental problem, as Ukraine is only a tool of the West. Dario Fo once said that China was an invention of Albania to scare the Soviet Union. We can say that Ukraine is currently an invention of the West to scare Russia – and this is not a joke. An invention for which Ukrainians and Russians are paying with their blood.
Hence the triangular relationship between the United States of America, the People's Republic of China and the Russian Federation will eventually resume, even if Russia emerges weakened by the demonstration of its military limitations in Ukraine, the widespread rejection of its conduct, and the scope and impact of the sanctions against it. It will nonetheless retain nuclear and cyber capabilities for doomsday scenarios.
In the US-Chinese relationship, instead, the conundrum is whether two different concepts of national greatness can learn to peacefully coexist side by side and how. In the case of Russia, the challenge is whether the country can reconcile its vision of itself with the self-determination and security of the countries in what it has long called its “near abroad” (mainly Central Asia and Eastern Europe), and do so as part of an international system rather than through domination.
It now seems possible that an order based on universal rules, however worthy in its conception, will be replaced in practice, for an indefinite period of time, by an at least partially decoupled world. Such a division encourages a search at its margins for spheres of influence. In such a case, how will countries that do not agree on global rules of conduct be able to operate within an agreed equilibrium design? Will the quest for domination overwhelm the analysis of coexistence?
In a world of increasingly formidable technology that can either elevate or dismantle human civilisation, there is no definitive solution to the competition between great powers, let alone a military one. An unbridled technological race, justified by the foreign policy ideology in which each side is convinced of the other’s malicious intent, risks creating a catastrophic cycle of mutual suspicion like the one that triggered World War I, but with incomparably greater consequences.
All sides are therefore now obliged to re-examine their first principles of international behaviour and relate them to the possibilities of coexistence. For the leaders of high-tech companies, there is a moral and strategic imperative to pursue – both within their own countries and with potential adversary countries – an ongoing discussion on the implications of technology and how its military applications could be limited.
The topic is too important to be neglected until crises arise. The arms-control dialogues that helped instil caution and restraint during the nuclear age, as well as high-level research on the consequences of emerging technologies, could prompt reflection and promote habits of mutual strategic self-restraint.
An irony of the current world is that one of its glories – the revolutionary explosion of technology – has emerged so quickly, and with such optimism, that it has outpaced systematic efforts to understand its dangers and capabilities.
Technologists develop amazing devices, but have had few opportunities to explore and evaluate their comparative implications within a historical framework. As I pointed out in a previous article, political leaders too often lack adequate understanding of the strategic and philosophical implications of the machines and algorithms available to them. At the same time, the technological revolution is eroding human consciousness and perceptions of the nature of reality. The last great transformation – the Enlightenment – replaced the age of faith with repeatable experiments and logical deductions. Now it is supplanted by dependence on algorithms, which work in the opposite direction, offering results in search of an explanation. Exploring these new frontiers will require considerable efforts on the part of national leaders to reduce, and ideally bridge, the gaps between the worlds of technology, politics, history and philosophy.
The leaders of the current great powers need not immediately develop a detailed vision of how to solve the dilemmas described here but, Kissinger warns, they must be clear about what is to be avoided and what cannot be tolerated. The wise must anticipate challenges before they manifest themselves as crises. Lacking a moral and strategic vision, the current era is unmoored. Our future still defies understanding, not so much of what will happen as of what has already happened.