Zero emission cars fuelled by hydrogen and computer chips that mimic the human brain are among the technological breakthroughs recognized as the Top 10 Emerging Technologies of 2015.
The WEF’s Meta-Council on Emerging Technologies compiles the list each year to draw attention to the technologies its members believe possess the greatest potential for addressing chronic global challenges. The purpose is also to initiate a debate on any human, societal, economic or environmental risks the technologies pose, with the aim of addressing concerns before adoption becomes widespread.
This year’s list offers a glimpse of the power of innovation to improve lives, transform industries and safeguard the planet:
1. Fuel cell vehicles
Zero-emission cars that run on hydrogen
“Fuel cell” vehicles have long been promised, as they potentially offer several major advantages over electric and hydrocarbon-powered vehicles. However, the technology has only now begun to reach the stage where automotive companies are planning to launch them for consumers. Initial prices are likely to be in the range of $70,000, but should come down significantly as volumes increase within the next couple of years.
Unlike batteries, which must be charged from an external source, fuel cells generate electricity directly, using fuels such as hydrogen or natural gas. In practice, fuel cells and batteries are combined, with the fuel cell generating electricity and the batteries storing this energy until demanded by the motors that drive the vehicle. Fuel cell vehicles are therefore hybrids, and will likely also deploy regenerative braking – a key capability for maximizing efficiency and range.
Unlike battery-powered electric vehicles, fuel cell vehicles behave like conventionally fuelled vehicles. With a long cruising range – up to 650 km per tank (the fuel is usually compressed hydrogen gas) – a hydrogen refill takes only about three minutes. Hydrogen reacts cleanly in the fuel cell, producing only water vapour as waste, so hydrogen-powered fuel cell vehicles are zero-emission, an important factor given the need to reduce air pollution.
There are a number of ways to produce hydrogen without generating carbon emissions. Most obviously, renewable sources of electricity from wind and solar sources can be used to electrolyse water – though the overall energy efficiency of this process is likely to be quite low. Hydrogen can also be split from water in high-temperature nuclear reactors or generated from fossil fuels such as coal or natural gas, with the resulting CO2 captured and sequestered rather than released into the atmosphere.
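The efficiency concern raised above can be made concrete with a rough back-of-the-envelope chain calculation. The stage efficiencies below are illustrative assumptions for the electricity-to-hydrogen-to-wheels pathway, not measured figures:

```python
# Illustrative well-to-wheel efficiency of the renewable-electricity-to-
# hydrogen pathway described above. Stage efficiencies are assumed,
# round numbers chosen for illustration only.

STAGES = {
    "electrolysis":            0.70,  # electricity -> hydrogen
    "compression_and_storage": 0.90,  # compressing H2 into the tank
    "fuel_cell":               0.55,  # hydrogen -> electricity on board
    "motor_and_drivetrain":    0.90,  # electricity -> motion
}

def chain_efficiency(stages):
    """Multiply the stage efficiencies to get overall energy efficiency."""
    total = 1.0
    for eff in stages.values():
        total *= eff
    return total

overall = chain_efficiency(STAGES)
print(f"Overall efficiency: {overall:.1%}")  # roughly a third of the input energy
```

Even with optimistic stage figures, only around a third of the original renewable electricity reaches the wheels, which is why the overall efficiency of the electrolysis route is described as quite low.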
As well as the production of cheap hydrogen on a large scale, a significant challenge is the lack of a hydrogen distribution infrastructure that would be needed to parallel and eventually replace petrol and diesel filling stations. Long distance transport of hydrogen, even in a compressed state, is not considered economically feasible today. However, innovative hydrogen storage techniques, such as organic liquid carriers that do not require high-pressure storage, will soon lower the cost of long-distance transport and ease the risks associated with gas storage and inadvertent release.
Mass-market fuel cell vehicles are an attractive prospect, because they will offer the range and fuelling convenience of today’s diesel and petrol-powered vehicles while providing the benefits of sustainability in personal transportation. Achieving these benefits will, however, require the reliable and economical production of hydrogen from entirely low-carbon sources, and its distribution to a growing fleet of vehicles (expected to number in the many millions within a decade).
2. Next-generation robotics
Rolling away from the production line
The popular imagination has long foreseen a world where robots take over all manner of everyday tasks.
This robotic future has stubbornly refused to materialize, however, with robots still limited to factory assembly lines and other controlled tasks. Although heavily used (in the automotive industry, for instance), these robots are large and dangerous to human co-workers, and must be separated from them by safety cages.
Advances in robotics technology are making human-machine collaboration an everyday reality. Better and cheaper sensors make a robot more able to understand and respond to its environment. Robot bodies are becoming more adaptive and flexible, with designers taking inspiration from the extraordinary flexibility and dexterity of complex biological structures, such as the human hand. And robots are becoming more connected, benefiting from the cloud-computing revolution by being able to access instructions and information remotely, rather than having to be programmed as a fully autonomous unit.
The new age of robotics takes these machines away from the big manufacturing assembly lines, and into a wide variety of tasks. Using GPS technology, just like smartphones, robots are beginning to be used in precision agriculture for weed control and harvesting. In Japan, robots are being trialled in nursing roles: they help patients out of bed and support stroke victims in regaining control of their limbs. Smaller and more dextrous robots, such as Dexter Bot, Baxter and LBR iiwa, are designed to be easily programmable and to handle manufacturing tasks that are laborious or uncomfortable for human workers.
Indeed, robots are ideal for tasks that are too repetitive or dangerous for humans to undertake, and can work 24 hours a day at a lower cost than human workers. In reality, new-generation robotic machines are likely to collaborate with humans rather than replace them. Even considering advances in design and artificial intelligence, human involvement and oversight will remain essential.
There remains the risk that robots may displace human workers from jobs, although previous generations of automation have tended to lead to higher productivity and growth, with benefits throughout the economy. Decades-old fears of networked robots running out of control may become more salient as next-generation robotics is linked into the web, but familiarisation, as people employ domestic robots to do household chores, is more likely to reduce fears than fan them. And new research into social robots, which know how to collaborate and build working alliances with humans, means that a future where robots and humans work together, each doing what it does best, is a strong likelihood. Nevertheless, the next generation of robotics poses novel questions for fields from philosophy to anthropology about the human relationship to machines.
3. Recyclable thermoset plastics
A new kind of plastic to cut landfill waste
Plastics are divided into thermoplastics and thermoset plastics. The former can be heated and shaped many times, and are ubiquitous in the modern world, comprising everything from children’s toys to lavatory seats. Because they can be melted down and reshaped, thermoplastics are generally recyclable. Thermoset plastics, however, can only be heated and shaped once, after which molecular changes mean that they are “cured”, retaining their shape and strength even when subjected to intense heat and pressure.
Due to this durability, thermoset plastics are a vital part of our modern world, and are used in everything from mobile phones and circuit boards to the aerospace industry. But the same characteristics that have made them essential in modern manufacturing also make them impossible to recycle. As a result, most thermoset polymers end up as landfill. Given the ultimate objective of sustainability, there has long been a pressing need for recyclability in thermoset plastics.
In 2014 critical advances were made in this area, with the publication of a landmark paper in the journal Science announcing the discovery of new classes of thermosetting polymers that are recyclable. Called poly(hexahydrotriazine)s, or PHTs, these can be dissolved in strong acid, breaking apart the polymer chains into component monomers that can then be reassembled into new products. Like traditional unrecyclable thermosets, these new structures are rigid, resistant to heat and tough, with the same potential applications as their unrecyclable forerunners.
Although no recycling is 100% efficient, this innovation – if widely deployed – should speed up the move towards a circular economy with a big reduction in landfill waste from plastics. We expect recyclable thermoset polymers to replace unrecyclable thermosets within five years, and to be ubiquitous in newly manufactured goods by 2025.
4. Precise genetic-engineering techniques
A breakthrough offers better crops with less controversy
Conventional genetic engineering has long caused controversy. However, new techniques are emerging that allow us to directly “edit” the genetic code of plants to make them, for example, more nutritious or better able to cope with a changing climate.
Currently, the genetic engineering of crops relies on the bacterium Agrobacterium tumefaciens to transfer desired DNA into the target genome. The technique is proven and reliable, and despite widespread public fears, there is a consensus in the scientific community that genetically modifying organisms using this technique is no more risky than modifying them using conventional breeding. However, while Agrobacterium is useful, more precise and varied genome-editing techniques have been developed in recent years.
These include ZFNs, TALENs and, more recently, the CRISPR-Cas9 system, which evolved in bacteria as a defence mechanism against viruses. The CRISPR-Cas9 system uses a guide RNA molecule to target DNA, cutting at a known, user-selected sequence in the target genome. This can disable an unwanted gene or modify it in a way that is functionally indistinguishable from a natural mutation. Using “homologous recombination”, CRISPR can also be used to insert new DNA sequences, or even whole genes, into the genome in a precise way.
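The idea of a user-selected target sequence can be illustrated with a toy search: Cas9 cuts where the guide RNA's sequence appears immediately upstream of an "NGG" motif (the PAM). The sketch below is a deliberate simplification with made-up sequences; real guide-design tools also scan the reverse strand and tolerate mismatches:

```python
import re

def find_cas9_targets(genome, guide):
    """Return 0-based positions where the 20-nt guide sequence is
    immediately followed by an 'NGG' PAM, the motif Cas9 requires.
    Simplified illustration only: real tools also check the reverse
    strand and score off-target sites with mismatches."""
    pattern = re.compile(re.escape(guide) + r"[ACGT]GG")
    return [m.start() for m in pattern.finditer(genome)]

# Hypothetical 33-bp fragment and 20-nt guide, invented for this example.
genome = "TTACGATGCTAGCTAGGCTAGCTAGTGGAATTC"
guide  = "ATGCTAGCTAGGCTAGCTAG"
print(find_cas9_targets(genome, guide))
```

Here the guide matches at one position and is followed by the "TGG" PAM, so the function reports a single candidate cut site.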
Another aspect of genetic engineering that appears poised for a major advance is the use of RNA interference (RNAi) in crops. RNAi is effective against viruses and fungal pathogens, and can also protect plants against insect pests, reducing the need for chemical pesticides. Viral genes have been used to protect papaya plants against the ringspot virus, for example, with no sign of resistance evolving in over a decade of use in Hawaii. RNAi may also benefit major staple-food crops, protecting wheat against stem rust, rice against blast, potato against blight and banana against fusarium wilt.
Many of these innovations will be particularly beneficial to smaller farmers in developing countries. As such, genetic engineering may become less controversial, as people recognize its effectiveness at boosting the incomes and improving the diets of millions of people. In addition, more precise genome editing may allay public fears, especially if the resulting plant or animal is not considered transgenic because no foreign genetic material is introduced.
Taken together, these techniques promise to advance agricultural sustainability by reducing input use in multiple areas, from water and land to fertilizer, while also helping crops to adapt to climate change.
5. Additive manufacturing
The future of making things, from printable organs to intelligent clothes
As the name suggests, additive manufacturing is the opposite of subtractive manufacturing. The latter is how manufacturing has traditionally been done: starting with a larger piece of material (wood, metal, stone, etc), layers are removed, or subtracted, to leave the desired shape. Additive manufacturing instead starts with loose material, either liquid or powder, and then builds it into a three-dimensional shape using a digital template.
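The digital-template idea can be sketched in a few lines: a slicer turns a 3D model into a stack of layers, one per pass of the printer. The toy model below slices a sphere and reports the radius of each circular cross-section; real slicers emit toolpaths over arbitrary meshes, so this is only an illustration:

```python
import math

def slice_sphere(radius, layer_height):
    """Slice a sphere of the given radius into horizontal layers, as a
    3D printer's slicer does with a digital template. Each layer is
    described by the radius of its circular cross-section.
    Toy model: real slicers work on arbitrary triangle meshes."""
    layers = []
    z = -radius
    while z <= radius:
        # Cross-section radius at height z, from r^2 = R^2 - z^2.
        cross_section = math.sqrt(max(radius**2 - z**2, 0.0))
        layers.append(round(cross_section, 2))
        z += layer_height
    return layers

print(slice_sphere(radius=1.0, layer_height=0.5))
```

A finer layer height produces more, thinner layers and a smoother printed surface, at the cost of a longer build time.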
3D products can be highly customized to the end user, unlike mass-produced manufactured goods. An example is the company Invisalign, which uses computer imaging of customers’ teeth to make near-invisible braces tailored to their mouths. Other medical applications are taking 3D printing in a more biological direction: by directly printing human cells, it is now possible to create living tissues that may find potential application in drug safety screening and, ultimately, tissue repair and regeneration. An early example of this bioprinting is Organovo’s printed liver-cell layers, which are aimed at drug testing, and may eventually be used to create transplant organs. Bioprinting has already been used to generate skin and bone, as well as heart and vascular tissue, which offer huge potential in future personalized medicine.
An important next stage in additive manufacturing would be the 3D printing of integrated electronic components, such as circuit boards. Nano-scale computer parts, like processors, are difficult to manufacture this way because of the challenges of combining electronic components with others made from multiple different materials. 4D printing now promises to bring in a new generation of products that can alter themselves in response to environmental changes, such as heat and humidity. This could be useful in clothes or footwear, for example, as well as in healthcare products, such as implants designed to change in the human body.
Like distributed manufacturing, additive manufacturing is potentially highly disruptive to conventional processes and supply chains. But it remains a nascent technology today, with applications mainly in the automotive, aerospace and medical sectors. Rapid growth is expected over the next decade as more opportunities emerge and innovation in this technology brings it closer to the mass market.
6. Emergent artificial intelligence
What happens when a computer can learn on the job?
Artificial intelligence (AI) is, in simple terms, the science of doing by computer the things that people can do. Over recent years, AI has advanced significantly: most of us now use smartphones that can recognize human speech, or have travelled through an airport immigration queue using image-recognition technology. Self-driving cars and automated flying drones are now in the testing stage before anticipated widespread use, while for certain learning and memory tasks, machines now outperform humans. Watson, an artificially intelligent computer system, beat the best human contestants at the quiz show Jeopardy!
Artificial intelligence, in contrast to normal hardware and software, enables a machine to perceive and respond to its changing environment. Emergent AI takes this a step further, with progress arising from machines that learn automatically by assimilating large volumes of information. An example is NELL, the Never-Ending Language Learning project from Carnegie Mellon University, a computer system that not only reads facts by crawling through hundreds of millions of web pages, but attempts to improve its reading and understanding competence in the process in order to perform better in the future.
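The bootstrapped reading described above can be caricatured in a few lines. This is not NELL's actual algorithm, just a minimal sketch of the underlying idea: harvesting candidate category members from text with a simple extraction pattern, on an invented corpus:

```python
import re

def learn_instances(texts, category):
    """Toy sketch of pattern-based fact harvesting: collect candidate
    instances of a category using the textual pattern
    'CATEGORY such as X'. Real systems like NELL iterate, learning
    new extraction patterns from the instances they find."""
    pattern = re.compile(rf"{category}\s+such as\s+([A-Z][a-z]+)")
    instances = set()
    for text in texts:
        instances.update(pattern.findall(text))
    return sorted(instances)

# Invented two-sentence corpus for illustration.
corpus = [
    "The team visited cities such as Lisbon and Porto.",
    "Travellers enjoy cities such as Kyoto in the autumn.",
]
print(learn_instances(corpus, "cities"))
```

Each harvested instance could in turn seed new patterns ("flights to X", "the mayor of X"), which is the never-ending part of never-ending learning.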
Like next-generation robotics, improved AI will lead to significant productivity advances as machines take over certain tasks from humans, and even perform them better. There is substantial evidence that self-driving cars will reduce collisions, and the resulting deaths and injuries, from road transport, as machines avoid human errors, lapses in concentration and defects in sight, among other problems. Intelligent machines, with faster access to a much larger store of information and the ability to respond without human emotional biases, might also perform better than medical professionals in diagnosing diseases. The Watson system is now being deployed in oncology to assist in diagnosis and personalized, evidence-based treatment options for cancer patients.
Long the stuff of dystopian sci-fi nightmares, AI clearly comes with risks – the most obvious being that super-intelligent machines might one day overcome and enslave humans. This risk, while still decades away, is taken increasingly seriously by experts, many of whom signed an open letter coordinated by the Future of Life Institute in January 2015 to direct the future of AI away from potential pitfalls. More prosaically, economic changes prompted by intelligent computers replacing human workers may exacerbate social inequalities and threaten existing jobs. For example, automated drones may replace most human delivery drivers, and self-driven short-hire vehicles could make taxis increasingly redundant.
On the other hand, emergent AI may make attributes that are still exclusively human – creativity, emotions, interpersonal relationships – more clearly valued. As machines grow in human intelligence, this technology will increasingly challenge our view of what it means to be human, as well as the risks and benefits posed by the rapidly closing gap between man and machine.
7. Distributed manufacturing
The factory of the future is online – and on your doorstep
Distributed manufacturing turns on its head the way we make and distribute products. In traditional manufacturing, raw materials are brought together, assembled and fabricated in large centralized factories into identical finished products that are then distributed to the customer. In distributed manufacturing, the raw materials and methods of fabrication are decentralized, and the final product is manufactured very close to the final customer.
In essence, the idea of distributed manufacturing is to replace as much of the material supply chain as possible with digital information. To manufacture a chair, for example, rather than sourcing wood and fabricating it into chairs in a central factory, digital plans for cutting the parts of a chair can be distributed to local manufacturing hubs using computerized cutting tools known as CNC routers. Parts can then be assembled by the consumer or by local fabrication workshops that can turn them into finished products. One company already using this model is the US furniture company AtFAB.
Current uses of distributed manufacturing rely heavily on the DIY “maker movement”, in which enthusiasts use their own local 3D printers and make products out of local materials. There are elements of open-source thinking here, in that consumers can customize products to their own needs and preferences. Instead of being centrally driven, the creative design element can be more crowdsourced; products may take on an evolutionary character as more people get involved in visualizing and producing them.
Distributed manufacturing is expected to enable a more efficient use of resources, with less wasted capacity in centralized factories. It also lowers the barriers to market entry by reducing the amount of capital required to build the first prototypes and products. Importantly, it should reduce the overall environmental impact of manufacturing: digital information is shipped over the web rather than physical products over roads or rails, or on ships; and raw materials are sourced locally, further reducing the amount of energy required for transportation.
If it becomes more widespread, distributed manufacturing will disrupt traditional labour markets and the economics of traditional manufacturing. It does pose risks: it may be more difficult to regulate and control remotely manufactured medical devices, for example, while products such as weapons may be illegal or dangerous. Not everything can be made via distributed manufacturing, and traditional manufacturing and supply chains will still have to be maintained for many of the most important and complex consumer goods.
Distributed manufacturing may encourage broader diversity in objects that are today standardized, such as smartphones and automobiles. Scale is no object: one UK company, Facit Homes, uses personalized designs and 3D printing to create customized houses to suit the consumer. Product features will evolve to serve different markets and geographies, and there will be a rapid proliferation of goods and services to regions of the world not currently well served by traditional manufacturing.
8. ‘Sense and avoid’ drones
Flying robots to check power lines or deliver emergency aid
Unmanned aerial vehicles, or drones, have become an important and controversial part of military capacity in recent years. They are also used in agriculture, for filming and multiple other applications that require cheap and extensive aerial surveillance. But so far all these drones have had human pilots; the difference is that their pilots are on the ground and fly the aircraft remotely.
The next step with drone technology is to develop machines that fly themselves, opening them up to a wider range of applications. For this to happen, drones must be able to sense and respond to their local environment, altering their height and flying trajectory in order to avoid colliding with other objects in their path. In nature, birds, fish and insects can all congregate in swarms, each animal responding to its neighbour almost instantaneously to allow the swarm to fly or swim as a single unit. Drones can emulate this.
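The swarm behaviour described above is commonly modelled with simple local rules. The sketch below implements one of them, the "separation" rule from Reynolds' classic boids model, in 2D for brevity; real drones of course operate in three dimensions with noisy sensors:

```python
# Minimal sketch of swarm-style collision avoidance: each drone steers
# away from neighbours inside a safety radius (the 'separation' rule
# from Reynolds' boids model). 2D toy version; values are arbitrary.

def separation_steer(own_pos, neighbours, safety_radius=5.0):
    """Return a steering vector pointing away from nearby neighbours."""
    steer_x = steer_y = 0.0
    for nx, ny in neighbours:
        dx, dy = own_pos[0] - nx, own_pos[1] - ny
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < safety_radius:
            # Weight inversely by distance: closer threats push harder.
            steer_x += dx / dist / dist
            steer_y += dy / dist / dist
    return steer_x, steer_y

# A drone at the origin with one neighbour approaching from the right
# is pushed to the left (negative x).
print(separation_steer((0.0, 0.0), [(2.0, 0.0)]))
```

Combined with matching alignment and cohesion rules, each member reacting only to its neighbours is enough to make the whole swarm move as a single unit.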
With reliable autonomy and collision avoidance, drones can begin to take on tasks too dangerous or remote for humans to carry out: checking electric power lines, for example, or delivering medical supplies in an emergency. Drone delivery machines will be able to find the best route to their destination, and take into account other flying vehicles and obstacles. In agriculture, autonomous drones can collect and process vast amounts of visual data from the air, allowing precise and efficient use of inputs such as fertilizer and irrigation.
In January 2014, Intel and Ascending Technologies showcased prototype multi-copter drones that could navigate an on-stage obstacle course and automatically avoid people who walked into their path. The machines use Intel’s RealSense camera module, which weighs just 8g and is less than 4mm thick. This level of collision avoidance will usher in a future of shared airspace, with many drones flying in proximity to humans and operating in and near the built environment to perform a multitude of tasks. Drones are essentially robots operating in three, rather than two, dimensions; advances in next-generation robotics technology will accelerate this trend.
Flying vehicles will never be risk-free, whether operated by humans or as intelligent machines. For widespread adoption, sense and avoid drones must be able to operate reliably in the most difficult conditions: at night, in blizzards or dust storms. Unlike our current digital mobile devices (which are actually immobile, since we have to carry them around), drones will be transformational as they are self-mobile and have the capacity of flying in the three-dimensional world that is beyond our direct human reach. Once ubiquitous, they will vastly expand our presence, productivity and human experience.
9. Neuromorphic technology
Computer chips that mimic the human brain
Even today’s best supercomputers cannot rival the sophistication of the human brain. Computers are linear, moving data back and forth between memory chips and a central processor over a high-speed backbone. The brain, on the other hand, is fully interconnected, with logic and memory intimately cross-linked at billions of times the density and diversity of that found in a modern computer. Neuromorphic chips aim to process information in a fundamentally different way from traditional hardware, mimicking the brain’s architecture to deliver a huge increase in a computer’s thinking and responding power.
Miniaturization has delivered massive increases in conventional computing power over the years, but the bottleneck of shifting data constantly between stored memory and central processors uses large amounts of energy and creates unwanted heat, limiting further improvements. In contrast, neuromorphic chips can be more energy efficient and powerful, combining data-storage and data-processing components into the same interconnected modules. In this sense, the system copies the networked neurons that, in their billions, make up the human brain.
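The networked-neuron idea rests on a simple computational unit. The sketch below is a textbook leaky integrate-and-fire neuron, the kind of model many neuromorphic chips implement in silicon; the parameter values and input train are arbitrary illustrations:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron, the basic unit many neuromorphic
    chips realize in hardware. The membrane potential accumulates input,
    decays ('leaks') each step, and emits a spike when it crosses the
    threshold, after which it resets. Textbook sketch; parameters are
    arbitrary."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak inputs accumulate until the fourth one pushes the neuron over
# threshold; it then resets and stays silent.
print(lif_neuron([0.3, 0.3, 0.3, 0.6, 0.0, 0.9]))
```

Because such a neuron only does work when spikes arrive, large networks of them can sit at very low power, which is the source of the energy advantage described above.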
Neuromorphic technology will be the next stage in powerful computing, enabling vastly more rapid processing of data and a better capacity for machine learning. IBM’s million-neuron TrueNorth chip, revealed in prototype in August 2014, has a power efficiency for certain tasks that is hundreds of times superior to a conventional CPU (Central Processing Unit), and more comparable for the first time to the human cortex. With vastly more compute power available for far less energy and volume, neuromorphic chips should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence.
Potential applications include: drones better able to process and respond to visual cues, much more powerful and intelligent cameras and smartphones, and data-crunching on a scale that may help unlock the secrets of financial markets or climate forecasting. Computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways.
10. Digital genome
Healthcare for an age when your genetic code is on a USB stick
While the first sequencing of the 3.2 billion base pairs of DNA that make up the human genome took many years and cost tens of millions of dollars, today your genome can be sequenced and digitized in minutes and at the cost of only a few hundred dollars. The results can be delivered to your laptop on a USB stick and easily shared via the internet. This ability to rapidly and cheaply determine our individual unique genetic make-up promises a revolution in more personalized and effective healthcare.
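Once a genome is digitized, it is just text that software can analyse. As a minimal illustration (on a short made-up fragment, not real genomic data), here is one of the simplest statistics computed over a sequence, its GC content:

```python
from collections import Counter

def gc_content(sequence):
    """Fraction of G and C bases in a DNA sequence, one of the simplest
    statistics computed over a digitized genome. Toy example; a real
    human genome is roughly 3.2 billion base pairs."""
    counts = Counter(sequence.upper())
    gc = counts["G"] + counts["C"]
    return gc / len(sequence)

fragment = "ATGCGCATTAGC"  # invented 12-base fragment
print(f"GC content: {gc_content(fragment):.2%}")
```

The same pattern, scanning a digitized sequence for informative features, underlies far more sophisticated analyses, such as identifying the mutations that drive a tumour.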
Many of our most intractable health challenges, from heart disease to cancer, have a genetic component. Indeed, cancer is best described as a disease of the genome. With digitization, doctors will be able to make decisions about a patient’s cancer treatment informed by a tumour’s genetic make-up. This new knowledge is also making precision medicine a reality by enabling the development of highly targeted therapies that offer the potential for improved treatment outcomes, especially for patients battling cancer.
Like all personal information, a person’s digital genome will need to be safeguarded for privacy reasons. Personal genomic profiling has already raised challenges, with regard to how people respond to a clearer understanding of their risk of genetic disease, and how others – such as employers or insurance companies – might want to access and use the information. However, the benefits are likely to outweigh the risks, because individualized treatments and targeted therapies can be developed with the potential to be applied across all the many diseases that are driven or assisted by changes in DNA.
At Last A Malaria Vaccine and How It All Began
This week marked a signal achievement. A group from Oxford University announced the first malaria vaccine ever to reach an acceptable level of efficacy. One might be forgiven for wondering why it has taken so long when the Covid-19 vaccines took just over a year … or even whether it is a kind of economic apartheid, given that malaria victims reside in the poorest countries of the world.
It turns out that the difficulty of making a malaria vaccine lies in the complexity of the pathogen itself. The malarial parasite has thousands of genes; by way of comparison, the coronavirus has about a dozen. As a result, fighting off malaria requires a very high immune response.
A trial of the vaccine in Burkina Faso has yielded an efficacy of 77 percent for subjects given a high dose and 71 percent for the low-dose recipients. The World Health Organization (WHO) had specified a goal of 75 percent for effective deployment in the population. A previous vaccine demonstrated only 55 percent effectiveness. The seriousness of the disease can be ascertained from the statistics: in 2019, 229 million new malaria infections were recorded and 409,000 people died. Moreover, many who recover can be severely debilitated by recurring bouts of the disease.
Vaccination has an interesting history. The story begins with Edward Jenner. A country doctor with a keen and questioning mind, he had observed smallpox as a deadly and ravaging disease. He also noticed that milkmaids never seemed to get it. However, they had all had cowpox, a mild variant which at some time or another they would have caught from the cows they milked.
By 1796, Jenner, desperate for a smallpox cure and by now quite certain of his theory, followed it up with an experiment. On May 14, 1796, he inoculated James Phipps, the eight-year-old son of his gardener, using pus scraped from cowpox blisters on the hands of Sarah Nelmes, a milkmaid who had caught cowpox from a cow named Blossom. Blossom’s hide now hangs in the library of St. George’s Hospital, Jenner’s alma mater.
Phipps was inoculated on both arms with the cowpox material. The result was a mild fever but nothing serious. Next, Jenner inoculated Phipps with variolous material, a weakened form of the smallpox virus, often prepared from dried, powdered scabs. No disease followed, even on repetition. He followed this experiment with 23 additional subjects (for a round two dozen) with the same result: they were all immune to smallpox. Then he wrote about it.
Not new to science, Edward Jenner had earlier published a careful study of the cuckoo and its habit of laying its eggs in others’ nests, observing how the newly hatched cuckoo pushed hatchlings and other eggs out of the nest. Its publication resulted in his election as a Fellow of the Royal Society. He was therefore well placed to spread the word about immunization against smallpox through vaccination with cowpox.
Truth be told, inoculation was not new. People who had traveled to Constantinople reported on its use by Ottoman physicians. And around Jenner’s time, there was a certain Johnny Notions, a self-taught healer, who used it in the Shetland Isles then being devastated by a smallpox epidemic. Others had even used cowpox earlier. But Jenner was able to rationally formalize and explain the procedure and to continue his efforts even though The Royal Society did not accept his initial paper. Persistence pays and finally even Napoleon, with whom Britain was at war, awarded him a medal and had his own troops vaccinated.
The Dark Ghosts of Technology
For many decades, even as we missed the boat on understanding equality, diversity and tolerance, we obediently and intentionally worshipped technology, no matter how dark or destructive a shape it morphed into. Enslaved to ‘dark technology’, our faith remained untarnished, fortified by the belief that it would lead us to become a smarter and more successful nation.
How wrong can we get? How long will we remain under the spell? Will we ever find ourselves again?
We are left with a dumb-and-dumber state of affairs: extreme, out-of-control technology has taken human performance in real value creation hostage, crypto-corruption has overtaken economies, shiny chandeliers now cast only giant shadows, tribalism nurtures populism, and socio-economic gibberish in social media narratives passes for new intellectualism.
Critical thinking resides only in the mind, not in some app.
The most obvious missing link is the abandonment of our own deeper thinking. We ignore critical thinking and comfortably accept our own programming, labelled ‘artificial intelligence’, forgetting that in AI there is nothing artificial, just our own ignorance repackaged and branded. AI is not some runaway train; there is always a human driver in the engine room. Mechanized programming, sensationalized by Hollywood as celestially gifted artificial intelligence, is corrupting the global populace into assuming we are somehow in the safe hands of a bionic era of robotized smartness, all designed and suited to sell undefined, glittering crypto-economies under complex jargon and illusions of great progress. The shiny towers of glittering cities are already drowning in their own tent cities.
A century ago, knowing how to use a pencil sharpener, stapler or filing cabinet got us a job; today, even 100+ miscellaneous business or technology skills count for little or nothing as big value-added gainers. Nevertheless, the Covidians, survivors of the cruelties of Covid-19, are now lining up at the gates like regimented disciples. There has never been such a universal gateway to a common frontier, or such a massive assembly of the largest mindshare in human history.
Among the harsh lessons acquired while gasping through the pandemic was the need to separate techno-logy from brain-ology. Humankind needs humankind solutions, where progress is measured by common goods. Humans will never be bulldozers, but they will move mountains. Without the mind, we become just broken bodies, desperately searching for viagra-sunrises, cannabis-high afternoons and opioid-sunsets, dreaming of helicopter-monies.
What is needed is more mental-infrastructuring to cope with the platform economies of the global age, not necessarily more cemented-infrastructuring to manage railway crossings. The new world already left the station a while ago: chase the brain, not the train. How will all this new thinking affect the global populace and the 100 new national elections scheduled over the next 500 days? The world of Covidians is in one boat; the commonality of their problems brings them closer on key issues.
Newspapers across the world are dying; finally, world maps are becoming mandatory daily reading
Smart leadership must develop smart economies that create a real ‘need’ for the human mind, and not just jobs later rejected as obsolete in the face of robotization. Damaged economies are visible across the world. The lack of pragmatic support for small and medium businesses, micro-mega exports, mini-micro manufacturing, and the upskilling and reskilling of the national citizenry are all clear measurements pointing to national failures. Unlimited rainfall of money will not save us; respectable national occupationalism will. Study ‘population-rich-nations’ and the new entrapments of ‘knowledge-rich-nations’ on Google, and join Expothon Worldwide’s ‘global debate series’ on such topics.
Emergency meetings are required: before relief funding expires, get ready with the fastest methodologies to create national occupationalism, at any cost, or prepare for fast waves of populism surrounded by almost-broken systems. Bold nations need smart play: national debates and discussions on common-sense ideas to create local grassroots prosperity, and national mobilization of the citizenry’s hidden talents to meet the global standard of competitive productivity in national goods and services.
The rest is easy
China and AI needs in the security field
On the afternoon of December 11, 2020, the Political Bureau of the Central Committee of the Communist Party of China (CPC) held the 26th Collective Study Session devoted to national security. On that occasion, the General Secretary of the CPC Central Committee, Xi Jinping, stressed that the national security work was very important in the Party’s management of State affairs, as well as in ensuring that the country was prosperous and people lived in peace.
In view of strengthening national security, China needs to adhere to the general concept of national security; to seize and make good use of a strategically important and propitious period for the country’s development; and to integrate national security into all aspects of the CPC’s and State’s activity and into the planning of economic and social development. In other words, it needs to build a security model that promotes international security and world peace and offers strong guarantees for the construction of a modern socialist country.
In this regard, a new cycle of AI-driven technological revolution and industrial transformation is on the rise in the Middle Kingdom. Driven by new theories and technologies such as the Internet, mobile services, big data, supercomputing, sensor networks and brain science, AI offers new capabilities and functionalities such as cross-sectoral integration, human-machine collaboration, open intelligence and autonomous control, with a major and far-reaching impact on economic development, social progress, global governance and other fields.
In recent years, China has deepened its appreciation of AI’s significance and development prospects in many important fields. Accelerating the development of a new generation of AI is an important strategic starting point for rising to the challenge of global technological competition.
What is the current state of AI development in China? What are the current development trends? How will the safe, orderly and healthy development of the industry be oriented and led in the future?
The current gap between China’s AI development and the international state of the art is not very wide, but the quality of enterprises must be “matched” with their quantity. For this reason, efforts are being made to expand application scenarios by enhancing data and algorithm security.
The concept of third-generation AI is already advancing, and there are hopes of solving the security problem through technical means rather than through policies and regulations alone – i.e. rather than mere talk.
AI is a driving force for the new stages of technological revolution and industrial transformation. Accelerating the development of a new AI generation is a strategic issue for China to seize new opportunities in the organisation of industrial transformation.
It is commonly argued that AI has gone through two generations so far. AI1 is knowledge-based, also known as “symbolism”, while AI2 is data-driven, built on big data and “deep learning”.
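The contrast between the two generations can be sketched in a toy example – everything here (the rule, the data, the "training" procedure) is invented purely for illustration: a knowledge-based system encodes an expert's rule by hand, while a data-driven system induces its rule from labelled examples.

```python
from collections import Counter

# AI1, knowledge-based ("symbolism"): an expert writes the rule explicitly.
def is_spam_rule(msg):
    return "free money" in msg.lower()

# AI2, data-driven: the "rule" is induced from labelled examples,
# here by picking the word most strongly associated with the spam label.
def train_keyword(examples):
    spam = Counter(w for msg, label in examples if label for w in msg.lower().split())
    ham = Counter(w for msg, label in examples if not label for w in msg.lower().split())
    return max(spam, key=lambda w: spam[w] - ham[w])

examples = [("win free money now", True), ("free money inside", True),
            ("lunch at noon", False), ("free lunch today", False)]

print(is_spam_rule("FREE MONEY inside"))  # True
print(train_keyword(examples))            # money
```

The second approach scales to millions of examples where no expert could write the rules, which is exactly why AI2 overtook AI1, but it also inherits every bias and gap in its data.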
AI began to be developed in the 1950s with the famous test proposed by Alan Turing (1912-54), and in 1978 the first studies on AI started in China. Under AI1, however, progress was relatively small; the real progress has mainly been made over the last 20 years – hence AI2.
AI is best known through the traditional information industry, typically Internet companies. These have acquired and accumulated a large number of users, and have then established corresponding patterns or profiles based on that data, the so-called “knowledge graph of user preferences”. Taking product delivery as an example: tens or even hundreds of millions of data points, consisting of users’ and dealers’ positions as well as the locations of potential buyers, are incorporated into a database and then matched and optimised through AI algorithms, which obviously enhances the efficiency of trade and the speed of delivery.
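At its core, the matching-and-optimising step described above can be reduced to a toy sketch. The names and positions below are invented, and real platforms optimise over far richer signals (traffic, courier load, delivery windows) than straight-line distance:

```python
import math

def assign_orders(orders, couriers):
    """Greedily match each order to the nearest free courier.

    orders, couriers: dicts mapping an id to an (x, y) position.
    Returns a dict order_id -> courier_id.
    """
    free = dict(couriers)
    assignment = {}
    for oid, opos in orders.items():
        if not free:
            break  # more orders than couriers
        nearest = min(free, key=lambda cid: math.dist(opos, free[cid]))
        assignment[oid] = nearest
        del free[nearest]  # each courier takes one order at a time
    return assignment

orders = {"o1": (0.0, 0.0), "o2": (5.0, 5.0)}
couriers = {"c1": (1.0, 0.0), "c2": (6.0, 5.0)}
print(assign_orders(orders, couriers))  # {'o1': 'c1', 'o2': 'c2'}
```

At production scale the same step becomes a large assignment problem solved with dedicated optimisers and learned travel-time models rather than a greedy loop, but the match-and-optimise structure is the same.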
By upgrading traditional industries in this way, great benefits have been achieved, and China is at the forefront in this respect: facial recognition, smart speakers, intelligent customer service, and so on. In recent years, not only has an increasing number of companies started to apply AI, but AI itself has become one of the professional directions that university entrance exam candidates now compete for.
According to statistics, there are 40 AI companies in the world with a turnover of over one billion dollars, 20 of them in the United States and as many as 15 in China. In quantitative terms, China firmly ranks second. It should be noted, however, that although these companies have high valuations, their profitability is still limited and most of them may even be loss-making.
The core AI sector should not remain confined to the information industry, but should increasingly open up to transport, medicine, the urban fabric and industries led independently by AI technology. These sectors are already being developed in China.
China accounts for over a third of the world’s AI start-ups. Although the quantity is high, the quality still needs to be improved. First of all, the application scenarios are limited: beyond facial recognition, security and the like, other fields are not easy to enter and are exposed to risks such as 1) data insecurity and 2) algorithm insecurity. These two aspects are currently the main factors limiting the development of the AI industry, which is in danger of falling prey to hackers of known origin.
With regard to data insecurity, we know that the effect of AI applications depends to a large extent on data quality, which entails security problems such as the loss of privacy (and hence of State security). If the problem of privacy protection is not solved, the AI industry cannot develop in a healthy way, as it would be working for ‘unknown’ third parties.
When we log into a webpage and are told that nothing matters more to it than the surfers’ privacy, this is a lie, since even teenage hackers know programs to violate it: China, at least, is candid about the laughableness of such politically correct statements.
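One standard technical answer to the privacy side of data insecurity – not named in the text, but widely used – is differential privacy: publish only aggregate statistics with calibrated noise, so that no individual record can be recovered from the output. A minimal Laplace-mechanism sketch, where the query, the epsilon value and the data are all invented:

```python
import math
import random

def laplace(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, threshold, epsilon, rng):
    """Noisy count of values above a threshold.

    A counting query changes by at most 1 when one record changes
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(v > threshold for v in values)
    return true_count + laplace(1.0 / epsilon, rng)

salaries = [30, 55, 72, 41, 90]
print(dp_count(salaries, 50, epsilon=1.0, rng=random.Random(42)))
# a noisy value close to the true count of 3
```

Smaller epsilon means stronger privacy but noisier answers; the technique protects individuals in aggregates, though it does not by itself secure the raw data store.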
The second important issue is algorithm insecurity. An insecure algorithm is a model that works under specific conditions but fails when the conditions change. This is also called unrobustness, i.e. the algorithm’s vulnerability to its test environment.
Taking autonomous driving as an example, it is impossible to consider all scenarios during AI training, or to deal with new emergencies when unexpected events occur. At the same time, this vulnerability also makes AI systems permeable to attacks, deception and fraud.
The problem of security in AI does not lie in politicians’ empty speeches and words, but needs to be solved from a technical viewpoint. This distinction is at the basis of AI3.
AI3 follows a development path that combines first-generation knowledge-based AI with second-generation data-driven AI. It uses the four elements – knowledge, data, algorithms and computing power – to establish a new theory, with interpretable and robust methods, for safe, credible and reliable technology.
At the moment, AI2, characterised by deep learning, is still in a growth phase, and hence the question arises whether the industry can accept the concept of AI3 development.
As seen above, AI has been developing for over 70 years, and yet it still seems to be a “prologue”.
Currently most people are not yet able to accept the concept of AI3, because everybody was hoping for further advances within AI2; everybody felt that AI could continue to develop by relying on learning rather than on processing. The first steps of AI3 in China took place in early 2015 and in 2018.
AI3 has to solve security problems from a technical viewpoint. Specifically, the approach consists in combining knowledge and data. Some related research has been carried out in China over the past four or five years, and the results have also been applied at industrial level. The RealSecure data security platform and the RealSafe algorithm security platform are direct evidence of these successes.
What needs to be emphasised is that these activities can only solve particular security problems in specific circumstances. In other words, the problem of AI security has not yet found a fundamental solution, and it is likely to remain a long-lasting topic without a definitive answer since – to use a metaphor – for every new lock there is always an expert burglar. In the future, the field of AI security will be in a state of ongoing confrontation between external offence and internal defence, and algorithms must therefore be updated constantly.
The progression of AI3 will be a natural long-term process. Fortunately, however, AI has an important characteristic: every result put on the table has great application value. This is also one of the main reasons why all countries attach great importance to AI development, as their national interest and real independence are at stake.
With changes taking place around the world and a global economy in deep recession due to Covid-19, the upcoming 14th Five-Year Plan (2021-25) of the People’s Republic of China will be the roadmap for achieving the country’s development goals in the midst of global turmoil.
As AI is included in the aforementioned plan, its development shall also tackle many “security bottlenecks”. Firstly, there is a wide gap in the innovation and application of AI in the field of network security, and many scenarios are still at the stage of academic exploration and research.
Secondly, AI itself lacks a systematic security assessment, and there are severe risks in all software and hardware aspects. Furthermore, the research and innovation environment for AI security is not yet mature, and the relevant Chinese domestic industry is not yet in a leading position and is still gaining experience.
Since 2017, in response to the AI3 Development Plan issued by the State Council, 15 ministries and commissions, including the Ministry of Science and Technology and the Development and Reform Commission, have jointly established an innovation platform. The platform is made up of leading companies in the industry and focuses on open innovation in the AI segment.
At present, thanks to this platform, many achievements have been made in the field of security. As the first team in the world to conduct research on AI infrastructure from a system-implementation perspective, Chinese researchers have found over 100 vulnerabilities in the main machine-learning frameworks and their dependent components.
This number of vulnerabilities makes Chinese researchers rank first in the world. At the same time, a future innovation plan – developed and released to open up tens of billions of items of security big data – is being studied to promote solutions to problems that require continuous updates.
The government’s working report promotes academic cooperation and pushes industry and universities to conduct innovative research into three aspects: 1) AI algorithm security comparison; 2) AI infrastructure security detection; 3) AI applications in key cyberspace security scenarios.
By means of state-of-the-art theoretical and basic research, we also need to provide technical reserves for the construction of basic AI hardware and open-source software platforms (i.e. programs whose source code users can freely inspect and modify) and AI security detection platforms, so as to reduce the risks inherent in AI security technology and ensure the healthy development of AI itself.
With specific reference to security, on March 23 it was announced that the Chinese and Russian Foreign Ministers had signed a joint statement on various current global governance issues.
The statement stresses that the continued spread of the Covid-19 pandemic has accelerated the evolution of the international scene, has caused a further imbalance in the global governance system and has affected the process of economic development while new global threats and challenges have emerged one after another and the world has entered a period of turbulent changes. The statement appeals to the international community to put aside differences, build consensus, strengthen coordination, preserve world peace and geostrategic stability, as well as promote the building of a more equitable, democratic and rational multipolar international order.
In view of ensuring all this, the independence enshrined in international law is obviously not enough, nor is the possession of a nuclear deterrent. What is needed, instead, is the country’s absolute control of information security, which in turn orients and directs weapon systems, whose remote control is the greedy prey of the usual suspects.