
The Top 10 Emerging Technologies of 2016


A diverse range of breakthrough technologies, including batteries capable of providing power to whole villages, “socially aware” artificial intelligence and new generation solar panels, could soon be playing a role in tackling the world’s most pressing challenges.

“Technology has a critical role to play in addressing each of the major challenges the world faces, yet it also poses significant economic and social risks. As we enter the Fourth Industrial Revolution, it is vital that we develop shared norms and protocols to ensure that technology serves humanity and contributes to a prosperous and sustainable future,” said Jeremy Jurgens, Chief Information and Interaction Officer, Member of the Executive Committee, World Economic Forum.

The Top 10 Emerging Technologies 2016 list, compiled by the Forum’s Meta-Council on Emerging Technologies and published in collaboration with Scientific American, highlights technological advances its members believe have the power to improve lives, transform industries and safeguard the planet. It also provides an opportunity to debate any human, societal, economic or environmental risks and concerns that the technologies may pose prior to widespread adoption.

“Horizon scanning for emerging technologies is crucial to staying abreast of developments that can radically transform our world, enabling timely expert analysis in preparation for these disruptors. The global community needs to come together and agree on common principles if our society is to reap the benefits and hedge the risks of these technologies,” said Dr Bernard Meyerson, Chief Innovation Officer of IBM and Chair of the Meta-Council on Emerging Technologies.

One of the criteria used by council members during their deliberations was the likelihood that 2016 represents a tipping point in the deployment of each technology. Thus, the list includes some technologies that have been known for a number of years, but are only now reaching a level of maturity where their impact can be meaningfully felt.

The top 10 technologies to make this year’s list:

1. Nanosensors and the Internet of Nanothings – With the Internet of Things expected to comprise 30 billion connected devices by 2020, one of the most exciting areas of focus today is nanosensors capable of circulating in the human body or being embedded in construction materials. Once connected, this Internet of Nanothings could have a huge impact on the future of medicine, architecture, agriculture and drug manufacture.

2. Next-Generation Batteries – One of the greatest obstacles holding renewable energy back is matching supply with demand, but recent advances in energy storage using sodium-, aluminium- and zinc-based batteries make feasible mini-grids that can provide clean, reliable, round-the-clock power to entire villages.

3. The Blockchain – Much has already been made of the distributed electronic ledger behind the online currency Bitcoin. With related venture investment exceeding $1 billion in 2015 alone, the economic and social impact of blockchain’s potential to fundamentally change the way markets and governments work is only now emerging.

4. 2D Materials – Graphene may be the best-known single-atom-layer material, but it is by no means the only one. Plummeting production costs mean that such 2D materials are emerging in a wide range of applications, from air and water filters to new generations of wearables and batteries.

5. Autonomous Vehicles – Self-driving cars may not yet be fully legal in most geographies, but their potential for saving lives, cutting pollution, boosting economies, and improving quality of life for the elderly and other segments of society has led to rapid deployment of key technology forerunners along the way to full autonomy.

6. Organs-on-chips – Miniature models of human organs – the size of a memory stick – could revolutionize medical research and drug discovery by allowing researchers to observe biological mechanisms and behaviours in ways never before possible.

7. Perovskite Solar Cells – This new photovoltaic material offers three improvements over the classic silicon solar cell: it is easier to make, it can be used virtually anywhere and, to date, it keeps generating power ever more efficiently.

8. Open AI Ecosystem – Shared advances in natural language processing and social-awareness algorithms, coupled with an unprecedented availability of data, will soon allow smart digital assistants to help with a vast range of tasks, from keeping track of one’s finances and health to advising on wardrobe choices.

9. Optogenetics – The use of light and colour to record the activity of neurons in the brain has been around for some time, but recent developments mean light can now be delivered deeper into brain tissue, something that could lead to better treatment for people with brain disorders.

10. Systems Metabolic Engineering – Advances in synthetic biology, systems biology and evolutionary engineering mean that the list of building block chemicals that can be manufactured better and more cheaply by using plants rather than fossil fuels is growing every year.

To compile this list, the World Economic Forum’s Meta-Council on Emerging Technologies, a panel of global experts, drew on the collective expertise of the Forum’s communities to identify the most important recent technological trends. By doing so, the Meta-Council aims to raise awareness of their potential and contribute to closing gaps in investment, regulation and public understanding that so often thwart progress.


Stopping COVID-19 in Its Tracks: Science Gets the Upper Hand

Dr. James M. Dorsey


Science has knocked religion and traditional healing methods out of the ring in the battle between rival approaches towards getting the coronavirus pandemic under control.

Men like Indian Prime Minister Narendra Modi, Iranian Supreme Leader Ayatollah Ali Khamenei, Pakistani Prime Minister Imran Khan, and Israeli Health Minister Yaakov Litzman have finally joined much of the world in imposing science-driven degrees of lockdowns, social distancing, and the search for medical cures and protections after initially opting for political expediency or advocacy of traditional healing methods and/or religious precepts.

Remarkably, Saudi Crown Prince Mohammed bin Salman, the de facto ruler of a kingdom that was founded and shaped by an ultra-conservative strand of Islam, was one leader who was not held back by religion when he suspended the Umrah (the smaller pilgrimage to Mecca), announced that this year’s Hajj could be cancelled, and locked down the holy city as well as its counterpart, Medina.

“What if this year’s Hajj was under Imran Khan rather than Mohammed bin Salman? Would he have waffled there as indeed he has in Pakistan?” asked Pakistani nuclear scientist, political analyst, and human rights activist Pervez Hoodbhoy.

Mr. Hoodbhoy noted that Pakistan has yet to import the Saudi dates touted as a cure for all diseases by Maulana Tariq Jameel, Pakistan’s most popular preacher and a staunch ally of Mr. Khan.

Mr. Hoodbhoy also took note of the fact that Mr. Modi had not fallen back on Hindutva, or Hindu nationalism’s, advocacy of the therapeutic powers of cow urine, of Ayurveda, a medical system rooted in Indian history, and of yoga.

Mr. Khamenei has similarly dropped his resistance to the closure of shrines in the holy cities of Qom and Mashhad. His government has closed schools and universities and urged the public to stay at home while announcing that “low-risk” economic activity would be allowed to resume next week.

The consequences of science-based approaches for civilizationalists who advocate policies inspired by religion or the supremacy of one religious group over another could go far beyond what should shape public health policies.

They could threaten the foundations of their religious support base as well as their discriminatory policies towards religious or ethnic minorities. Israel is a case in point in terms of both Prime Minister Benjamin Netanyahu’s religious support base as well as his policies towards Israeli nationals of Palestinian descent.

With ultra-orthodox Jewish neighborhoods and cities emerging as the communities most affected by the coronavirus, some Israeli commentators argue that the pandemic could undermine rabbinical authority on a scale not seen since the Holocaust when large numbers left ultra-orthodoxy after rabbinical advice to remain in Europe proved devastating.

Ultra-orthodox rabbis, including Mr. Litzman, the health minister, who has tested positive along with his wife and an ultra-orthodox adviser to Mr. Netanyahu, have had to reverse themselves in recent days as the virus ate its way through their communities in Jerusalem and other Israeli cities.

“Torah no longer saves from death. The coronavirus has dealt an unimaginable blow to the rabbinical authority – and worldview – that ultra-Orthodox Jews previously regarded as infallible and eternal,” said prominent Israeli journalist Anshel Pfeffer, who authored an acclaimed biography of Mr. Netanyahu.

The non-discriminatory nature of the coronavirus forced the Israeli government last week to ramp up testing in communities of Israeli Palestinians which had been described by public health experts as a ticking time bomb.

The experts warned that Israeli Palestinians were an at-risk group, many of whom suffer from chronic diseases, live in crowded conditions, and are socially and economically disadvantaged.

“In terms of public health, due to the present situation, the Arab communities are likely to become epicenters of the coronavirus outbreak, which will threaten the health of the entire population,” said Dr. Nihaya Daoud, a public health lecturer at Ben Gurion University of the Negev.

Increased testing of Israeli Palestinians tackles Israel’s immediate problem of attempting to stymie the spread of the virus. It does not, however, address the longer-term structural threat to public health posed by the imbalance in health infrastructure between Israeli Jewish and Israeli Palestinian communities, a lesson many Israelis could draw from the coronavirus crisis.

Drawing that lesson would challenge a pillar of Israeli policy with far-reaching consequences.

By the same token, the return home of some 45,000 Palestinian workers to the West Bank for this week’s Passover holiday is likely to create bottlenecks in both Israel and the Palestinian territory after the Israeli government decided that they would not be allowed to return to their jobs in Israel because of health concerns.

The decision threatens to create a labor shortage in Israel, increase economic pressure on an already weakened Palestinian Authority, and facilitate the spread of the virus in the West Bank, given the authority’s inability to test all returnees.

“Because the two populations are so intertwined, curbing the virus only in one society is impossible,” said Ofer Zalzberg of the International Crisis Group.

It’s a lesson that applies universally, not just to Israelis and Palestinians in the West Bank and Israel. Nowhere is it truer than in the Syrian and Palestinian refugee camps that dot the eastern Mediterranean.

The global pandemic also casts a glaring spotlight on the risks of looking the other way when hospitals and health infrastructure are deliberately destroyed in war-torn countries such as Syria, where President Bashar al-Assad’s forces targeted hospitals, and Yemen, where the Saudi-UAE-led coalition did the same.

No doubt, it is a lesson that anti-globalists and civilizationalists prefer not to hear.

Yet, whether anti-globalists and civilizationalists like it or not, the coronavirus is global and universal. So is the science that will ultimately help get control of the pandemic and eventually stop it in its tracks.

Author’s note: This story was first published in Inside Arabia


The World After COVID-19: Does Transparent Mean Healthy?

Maria Gurova


The insanity of despair and primaeval fear for one’s health (and today, no matter how ironic and paradoxical it sounds, this may be the state of mind that brings many of us together) will most likely give rise to a new global formation that will then become a global reality. It is still hard to say what it will be like exactly, but it is clear that the world will become more transparent. And I do not mean in the usual sense of anti-corruption measures, but rather in the original sense of the word – the world will become more “see-through.” Our temperature will be monitored. Smartphones with built-in sensors will collect precise data not only about our clicks and likes, but about our physical and possibly emotional state. The world and the people that live in it will undergo a number of changes once the current coronavirus pandemic is over, and many of those changes will be accompanied by a leap in technological development.

For many experts and scientists, the events unfolding today are reminiscent of 2003, when the SARS virus presented the first large-scale threat to human health of the new millennium. Unlike today’s unbidden crowned guest, SARS was not as virulent, yet it caused major concern in a number of countries, particularly in East and Southeast Asia. Hence the hard lessons learned in Singapore and, in part, Taiwan, where governments have for nearly two decades been successfully using systems of mass surveillance of their citizens’ everyday lives – systems that have received the approval of the people. In Singapore, this surveillance is part of the national cybersecurity strategy and allows the physical condition of large numbers of people to be monitored, thereby preventing diseases from spreading and escalating into epidemics. This, combined with the ability to enforce extremely strict quarantine measures and to carry out mass testing instead of the selective testing currently practised in Europe and Russia, has allowed Singapore and Taiwan to contain the spread of the disease. Of course, their compact territories have certainly played a part here. Other countries, such as Israel and Russia, have already followed this example and approved monitoring systems that use mobile data and geolocation to trace the movements of people with confirmed infections. We have to assume that one of the first steps after the COVID-19 pandemic will be to embed such surveillance even deeper into public life. Most likely, this step will be met with approval rather than protests and street rallies.

I would not wish to speak for everyone, but it seems to me that the choice between health and privacy is a no-brainer. The pandemic will end, and what the world that emerges from it will look like is a question worth discussing. As the Deputy Minister of Health of Iran, who himself contracted COVID-19, made clear, the coronavirus came to us from a relatively safe country and, contrary to recent rumours, does not only affect those of Asian heritage: quite the opposite, it is very democratic in its choice of victims, which is to say, it affects everyone.

Hence the question: by self-isolating, we are buying doctors and scientists time to find a cure for the virus and test vaccines, but what are we going to do in the event of a new pandemic? Here, humanity faces a choice between two paths. The first is to give free rein to nationalists, who are already jubilant and triumphant over the failures of globalization and the inability of liberal democratic countries to shut their borders to viruses and undesirable immigrants. The second is to move to a radically new formation in which we become even more mutually dependent and more open to our societies and governments, because this will be a mandatory condition for moving about and doing business, and perhaps even for starting a family. Personal secrets will become a thing of the past, a fairy tale we tell our grandchildren. In fact, the issue is far more serious still, with many ramifications and ensuing consequences.

Following the COVID-19 pandemic, consensus and mutual understanding between states will be relevant like never before, especially since the problems of disarmament, nuclear warheads, defence budgets propped up by taxpayer money, international sanctions and the like, which emerged and developed between the eras of Nikita Khrushchev and Ronald Reagan, may finally recede into the background. Instead, world leaders, especially given that most of them are at an age that makes them particularly vulnerable to the coronavirus, should start thinking about new plans for investing in healthcare, in the socioeconomic aspects of life and in technological development, because these will be intrinsically linked with the other aspects of improving the state mentioned above. Will this represent a new social contract between the government, the public and the citizen? Probably. Will it represent a new pact between governments? One would hope so. Perhaps the coronavirus pandemic will break down the old world and give rise to the new one that so many expected to appear in the 1990s. But what awaited us back then was proxy wars and confrontation through sanctions that split societies from within and raised barriers between states. Maybe this new world will be one where surveillance cameras and sensors first prompt a feeling of relief and then become an integral part of the picture. Perhaps it will be a world where life without external surveillance and control appears unsafe and unnatural.

From our partner RIAC


Future Goals in the AI Race: Explainable AI and Transfer Learning


Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to “weak AI,” which has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches have been the most popular. The first, the “symbolic” approach, assumes that the roots of AI lie in philosophy, logic and mathematics, and operates according to logical rules and sign and symbol systems interpreted in terms of the conscious human cognitive process. The second approach, biological in nature and referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily becoming closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic and probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. On the other hand, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There “Holes” in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. It was this combination of factors that made it possible to train and configure neural network algorithms to make a quantitative leap, as well as to provide a cost-effective solution to a broad range of applied problems relating to recognition, classification and prediction. The biggest successes here have come from systems based on “deep learning” networks that build on the idea of the “perceptron” suggested 60 years ago by Frank Rosenblatt. However, achievements in the use of neural networks have also uncovered a range of problems that cannot be solved using existing neural network methods.
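
Since the text appeals to Rosenblatt’s perceptron, a minimal sketch of the classic perceptron learning rule may help make the idea concrete. The toy data, epoch count and learning scheme below are illustrative assumptions, not a reconstruction of any system mentioned in the article:

```python
import numpy as np

# A minimal sketch of Rosenblatt's perceptron learning rule on toy,
# linearly separable data (data and epoch count are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)    # separable toy labels

w, b = np.zeros(2), 0.0
for _ in range(20):                            # epochs
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:             # misclassified point
            w += yi * xi                       # nudge boundary toward it
            b += yi

print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```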

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
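
The article does not name a specific attack, but the best-known illustration of this fragility is the fast gradient sign method (FGSM). Below is a minimal sketch assuming PyTorch; the model, the epsilon value and the random input are stand-ins for illustration, not a real trained classifier:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """Fast Gradient Sign Method: nudge each pixel in the direction
    that most increases the loss, bounded by eps."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    x_adv = x + eps * x.grad.sign()            # one signed-gradient step
    return x_adv.clamp(0, 1).detach()          # keep pixels in valid range

# Illustrative stand-in for a trained image classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)                   # stand-in for a real image
label = torch.tensor([3])
x_adv = fgsm_attack(model, x, label)
print(model(x).argmax(1), model(x_adv).argmax(1))  # labels may now differ
```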

Fourth, a mismatch between the information capacity and parameters of a neural network and the picture of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. This occurs when a system that was first trained to identify situations in one set of contexts and then fine-tuned to recognize them in a new set loses the ability to recognize them in the old one. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
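
The effect is easy to reproduce in miniature. In the sketch below, an entirely synthetic pair of tasks stands in for the urban and rural settings; one small network is trained sequentially, and its first-task accuracy typically collapses. Architecture, data and training schedule are all illustrative assumptions:

```python
import torch
from torch import nn

# Toy demonstration of catastrophic forgetting: train on task A,
# then on task B, and watch task-A accuracy drop.
torch.manual_seed(0)

def make_task(shift):
    x = torch.randn(500, 10) + shift
    y = (x.sum(dim=1) > shift * 10).long()     # simple threshold rule
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
xa, ya = make_task(0.0)        # "urban" task A
xb, yb = make_task(3.0)        # "rural" task B

train(model, xa, ya)
print("task A after training on A:", accuracy(model, xa, ya))
train(model, xb, yb)           # fine-tuning on B only
print("task A after training on B:", accuracy(model, xa, ya))  # drops
```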

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.

Explainable AI allows human beings (the users of an AI system) to understand the reasons why the system makes decisions and to approve them if they are correct, or to rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it to be explainable only post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
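
One widely used family of knowledge-extraction methods, not named in the article but consistent with its description, is the global surrogate: a small, human-readable model is trained to mimic a black box’s predictions. A minimal sketch assuming scikit-learn and synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box stand-in: a random forest trained on synthetic data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow, readable tree is trained to mimic the
# black box's *predictions*, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```

The surrogate’s printed rules are an explicit, verifiable approximation of what the black box has learned; its “fidelity” score indicates how faithfully the explanation tracks the original model.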

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between humans and machines, so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is the prerequisite for making AI applications that do not learn by trial and error or through the use of a “training set,” but can instead be initialized with a base of expert-derived knowledge and rules – when the cost of an error is too high or when the training sample is too small.
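
In today’s deep learning practice, the most common concrete form of transfer learning is feature reuse: a network pretrained on a large dataset is frozen and only a new task-specific head is retrained. A minimal sketch, assuming a recent PyTorch/torchvision and a hypothetical 5-class target task:

```python
import torch
from torchvision import models

# Reuse an ImageNet-pretrained network as a fixed feature extractor
# (the 5-class target task is a hypothetical example).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                    # freeze transferred knowledge

# Replace the final layer with a new head for the target task.
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 5)

# During fine-tuning, only the new head's parameters are updated.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```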

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, and source and target data are represented not by fixed values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in the state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
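
To make the “distributions instead of values” idea tangible, here is a minimal hand-rolled sketch of Bayesian inference over a coin’s unknown bias; real probabilistic programming languages automate this kind of inference for arbitrary programs, and the observation counts below are invented for illustration:

```python
import numpy as np

# The unknown (a coin's bias) is carried as a distribution over all
# possible values, not as a single number, and is updated by data.
grid = np.linspace(0, 1, 1001)        # candidate values of the bias
posterior = np.ones_like(grid)        # start from a uniform prior
posterior /= posterior.sum()

heads, tails = 7, 3                   # illustrative observations
posterior *= grid**heads * (1 - grid)**tails
posterior /= posterior.sum()          # renormalize after the update

print("posterior mean bias:", (grid * posterior).sum())  # about 0.67
```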

Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
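
A toy sketch of the System 1 / System 2 division described above: a stand-in neural module proposes labels with confidences, and an explicit symbolic rule, which a human can read and audit, can veto it. All functions, scores and rules here are invented for illustration, deliberately reusing the elephant-and-whale confusion from the earlier example:

```python
# System 1: stand-in for a trained network's class probabilities.
def neural_perception(image):
    return {"elephant": 0.55, "whale": 0.45}

# System 2: an explicit, human-readable rule that can veto System 1.
def symbolic_check(scores, context):
    if context == "open sea" and scores["elephant"] > scores["whale"]:
        return "whale", "rule fired: elephants do not occur in the open sea"
    best = max(scores, key=scores.get)
    return best, "no rule fired; neural output accepted"

label, why = symbolic_check(neural_perception(None), "open sea")
print(label, "|", why)
```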

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.

From our partner RIAC
