
Science & Technology

Future Goals in the AI Race: Explainable AI and Transfer Learning


Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world?”

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to “weak AI,” which has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications – knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes – are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and that AI systems should operate according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach, biological in nature and referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily converging. For instance, logical inference systems based on Boolean algebra have evolved into fuzzy logic and probabilistic programming, reproducing network architectures akin to the neural networks that emerged within the neuromorphic approach. Conversely, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There “Holes” in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was driven not so much by new ideas as by the accumulation of a critical mass of labelled datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the availability of specialized, relatively cheap hardware for neural network modelling. The combination of these factors made it possible to train and configure neural network algorithms to make a quantitative leap, and to provide cost-effective solutions to a broad range of applied problems in recognition, classification and prediction. The biggest successes have come from systems based on “deep learning” networks that build on the idea of the “perceptron” proposed 60 years ago by Frank Rosenblatt. However, these achievements also uncovered a range of problems that cannot be solved using existing neural network methods.

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
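To make the mechanism concrete, below is a minimal sketch of the simplest attack of this kind, the fast gradient sign method (FGSM), in PyTorch. The names model, image (a batched tensor with values in [0, 1]) and label (a batched class-index tensor) are assumed to exist; FGSM is a standard textbook method, not necessarily the one used in the incidents described above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge every pixel in the direction that most increases the loss.

    A perturbation this small is usually invisible to a human, yet it can
    flip the model's prediction -- the fragility described above.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range
```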

Fourth, a mismatch between the information capacity and parameters of a neural network and the picture of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting: a system first trained to identify situations in one set of contexts and then fine-tuned to recognize them in a new set may lose the ability to recognize them in the old one. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, while additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees. A runnable toy demonstration follows.
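The sketch below reproduces the effect with scikit-learn's incremental learner on the bundled digits dataset, treating digits 0-4 as the “old” context and 5-9 as the “new” one; the split into contexts is, of course, an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a, task_b = y < 5, y >= 5          # "old" and "new" contexts

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.unique(y)

# Train on the old context only, then measure accuracy on it.
for _ in range(30):
    clf.partial_fit(X[task_a], y[task_a], classes=classes)
print("old-context accuracy after old training:", clf.score(X[task_a], y[task_a]))

# Fine-tune on the new context only, then re-measure on the old one.
for _ in range(30):
    clf.partial_fit(X[task_b], y[task_b])
print("old-context accuracy after new training:", clf.score(X[task_a], y[task_a]))
# The second figure collapses: training on the new context overwrote
# the weights that encoded the old one.
```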

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, “explainable AI” and “transfer learning” are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.

Explainable AI allows human beings (the users of an AI system) to understand the reasons why the system makes decisions, approving them if they are correct, or reworking or fine-tuning the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
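As one concrete illustration of extracting knowledge “with regard to specific precedents or the subject area as a whole”, here is a minimal sketch of a widely used post-hoc technique, permutation importance, on a synthetic dataset. It is offered as an example of the genre, not as the specific method the author has in mind.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, say, a loan-approval dataset.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does accuracy drop when each feature is shuffled?
# Features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Such a ranking shows which inputs drove a decision, but not yet why, which is where the broader sense of explainability discussed above goes further.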

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is the prerequisite for making AI applications that will not learn by trial and error or through the use of a “training set,” but can be initialized with a base of expert-derived knowledge and rules – when the cost of an error is too high or when the training sample is too small.
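In today's deep learning practice, the most common concrete form of this idea is fine-tuning a pretrained network. A minimal PyTorch/torchvision sketch, assuming a two-class target task and a recent torchvision version:

```python
import torch.nn as nn
from torchvision import models

# Knowledge accumulated on ImageNet, reused as a starting point.
model = models.resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():
    param.requires_grad = False      # freeze the transferred knowledge

# Replace the head with one for the new task; only it will be trained.
model.fc = nn.Linear(model.fc.in_features, 2)
# Useful precisely when the training sample is too small to learn
# everything from scratch.
```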

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, while source and target data are represented not by single values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now where deep learning technology was about ten years ago, so we can expect breakthroughs in the coming years.
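A toy illustration of the core idea, with inference hand-rolled on a grid so that no particular probabilistic programming language is implied: the unknown quantity is carried through the program as a whole distribution, updated by data, rather than stored as a single value.

```python
import numpy as np

grid = np.linspace(0, 1, 1001)      # all possible values of an unknown rate
posterior = np.ones_like(grid)      # start uniform: no prior knowledge

observations = [1, 0, 1, 1, 0, 1, 1, 1]     # e.g., successes and failures
for x in observations:
    likelihood = grid if x == 1 else (1 - grid)
    posterior *= likelihood         # Bayes' rule, unnormalized
posterior /= posterior.sum()

print("posterior mean:", (grid * posterior).sum())
# The program's "variable" is the whole posterior array, not one number.
```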

Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks with probabilistic inference based on Pyotr Anokhin’s theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.

From our partner RIAC


Science & Technology

From nanotechnology to solar power: Solutions to drought


As the drought intensifies in Iran and the country faces water stress, experts and officials are offering various solutions, from solar power plants to expanded watershed management and nanotechnology.

Iran is located in an arid and semi-arid region, and Iranians have long sought to make the most of water.

In recent years, the drought has intensified, leaving water resources fragile; it can be said that Iran has reached water bankruptcy.

Water stress will continue this fall (September 23 to December 21), with the season expected to be relatively hot and short of rain, according to Ahad Vazifeh, head of the national center for drought and crisis management.

In such a situation, officials and experts propose various solutions for optimal water management.

Alireza Qazizadeh, a water and environment expert, noting that some 80 percent of the country is arid, said that “Iran has one percent of the earth’s area and receives only 36 percent of renewable resources.

The country receives 250 mm of rainfall annually, about 400 billion cubic meters; allowing for 70 percent evaporation, there are only 130 billion cubic meters of renewable water, plus 13 billion cubic meters of inflow from border waters.”
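The rainfall-to-volume conversion roughly checks out, taking Iran’s land area as about 1.65 million square kilometres (a figure not stated in the article):

```latex
250\,\mathrm{mm}\times 1.65\times 10^{6}\,\mathrm{km^{2}}
  = 0.25\,\mathrm{m}\times 1.65\times 10^{12}\,\mathrm{m^{2}}
  \approx 4.1\times 10^{11}\,\mathrm{m^{3}}
  \approx 410\ \text{billion}\ \mathrm{m^{3}},
\qquad
410\times(1-0.7)\approx 123\ \text{billion}\ \mathrm{m^{3}}.
```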

Referring to the global averages of 800 mm of rainfall and 700 mm of evaporation, he noted that 70 percent of Iran’s rainfall occurs in only 25 percent of the country, and only 25 percent of it falls during irrigation seasons.

Pointing to the country’s need for 113 billion cubic meters of water in the current year (which began on March 21), he stated that “of this amount, 102 billion is projected for agricultural use, 7 percent for drinking and 2 percent for industry, and at this point water stress occurs.

Since 2001, about 5.5 billion cubic meters have been withdrawn from underground resources annually; over the roughly 20 years since, that adds up to about 110 billion cubic meters, the equivalent of a full year of the country’s water consumption drawn from non-renewable resources, which is alarming.”

Unconventional water sources, such as rainwater and river runoff, desalinated water, and municipal wastewater reused after treatment, can be effective in controlling drought, he concluded.

Rasoul Sarraf, of the Faculty of Materials at Shahid Modarres University, suggests a different solution, stating that “to ease water stress, we have no choice but to use nanotechnology and solar power plants.”

Pointing to sunshine as the main requirement for a solar power plant, and to the country’s roughly 300 sunny days a year, he said that under the Paris climate agreement Iran was required to reduce emissions by 4 percent unconditionally and 8 percent conditionally, targets that can only be achieved by using solar power plants.

Hamidreza Zakizadeh, deputy director of watershed management at Tehran’s Department of Natural Resources and Watershed Management, believes that watershed management can at least reduce the effects of drought by managing floods and extracting water for farmers.

Amir Abbas Ahmadi, head of habitats and regional affairs at the Tehran Department of Environment, also referred to the severe drought in Tehran and said that controlling the situation requires a comprehensive water management plan developed in cooperation with the several responsible bodies.

He also emphasized the need to control migration to the capital and construction, and to implement the Comprehensive Plan of Tehran city.

While officials and experts propose various solutions for managing water and dealing with drought, the related organizations must work together to handle the current situation.

Mohammad Reza Espahbod, an expert in groundwater resources, also suggested that while the country is dealing with severe drought due to improper withdrawal of groundwater and low rainfall, karst water resources could supply all the water the country needs, if properly managed.

Iran is the fifth country in the world in terms of karst water resources, he stated.

Qanats can also help contain water scarcity efficiently, given their relatively low cost, low evaporation rates and minimal need for technical knowledge; moreover, they have proven sustainable, remaining in use indefinitely without damaging the environment.

According to the Ministry of Energy, about 36,300 qanats have been identified in Iran, some of which have been supplying water for over 2,000 years.

In recent years, 3,800 qanats have been rehabilitated through watershed and aquifer management, and people who had migrated due to water scarcity have returned to their homes.

Water resources shrinking

Renewable water resources have decreased by 30 percent over the last four decades, while Iran’s population has increased by about 2.5 times, Qasem Taqizadeh, deputy minister of energy, said in June.

The current water year (which started on September 23, 2020) has seen the lowest rainfall in the past 52 years; climate change and Iran’s arid location, he lamented, must become common knowledge at all levels.

A recent report on Iran’s water crisis, published in Scientific Reports (a Nature journal), indicates that from 2002 to 2015 over 74 billion cubic meters were extracted from the country’s aquifers, an unprecedented depletion whose reversal would take thousands of years even with urgent action.

In the study, three Iranian scientists examined 30 basins across the country and found that aquifers were depleted by about 74 billion cubic meters over the 14-year period.

Over-extraction across 77 percent of Iran’s territory has also led to increased land subsidence and soil salinity. Research and statistics show that the average annual overdraft from the country’s aquifers was about 5.2 billion cubic meters.

Mohammad Darvish, head of the environment group in the UNESCO Chair on Social Health, has said that the situation of groundwater resources is worrisome.

From our partner Tehran Times


Science & Technology

Technology and crime: A never-ending cat-and-mouse game


Is technology a good or a bad thing? It depends on whom you ask, as it is really about the way technology is used. After all, technology can be used by criminals, but it can also be used to catch criminals, creating a fascinating cat-and-mouse game.

Countless ways technology can be used for evil

The first spear improved hunting and defence against attacking beasts; however, it was soon also turned against other humans. Nuclear power is used to produce energy, but it was also used to annihilate whole cities. Looking at today’s news, we have learned that cryptocurrencies can be (and are) used as the preferred form of payment for ransomware, since they provide an anonymous, reliable and fast payment method for cybercriminals.

Similarly, secure phones are providing criminal rings with a fast and easy way to coordinate their rogue activities. The list could go on. Ultimately, all technological advancements can be used for good or evil. Indeed, technology is not inherently bad or good; it is its usage that makes the difference. After all, spears served well in preventing the extinction of humankind, nuclear power is used to generate energy, cryptocurrency promises to democratize finance, and mobile phones are the device of choice of billions of people daily (you too are probably reading this piece on a mobile).

However, what is new is that technology is nowadays much more widespread, pervasive and easier to manipulate than in the past, recent or distant. Indeed, not all of us are experts in nuclear material, or willing and capable of effectively throwing a spear at someone else. But each of us is surrounded by, and uses, technology, with a sizeable share of users also capable of modifying that technology to better serve their purposes (think of computer scientists, programmers, coding kids – technology democratization).

This huge reservoir of people capable of using technology in ways different from what it was devised for is not made up of ethical hackers alone: there can be black hats as well (that is, technology experts supporting evil uses of such technology). In technical terms, the attack vector and the security perimeter have dramatically expanded, leading to a scenario where technology can easily be exploited for rogue purposes by large cohorts of people attacking any of the many assets that are nowadays vulnerable. The cybersecurity domain provides the best example of this scenario.

Fast-paced innovation and unprecedented threats

What is more, technological development will not stop. On the contrary, we are experiencing an exponentially fast pace of innovation, with ever less time between innovation cycles, which improves our way of living but also paves the way for novel, unprecedented threats to materialize. For instance, the advent of quantum computers will render the majority of current encryption and digital signature methods useless, leaving what was encrypted and signed in the past exposed.

The tension between legitimate and illegitimate uses of technology is also heating up. For instance, there are discussions in the US and the EU about the need for providers of ICT services to hand the decryption keys of future secure applications to law enforcement agencies should the need arise – a debatable measure.

However, technology is the very weapon we need to fight crime. Think of the use of terahertz technology to discover the smuggling of drugs and explosives – the very same technology Qatar has successfully employed. Or the infiltration of mobile phone crime rings by law enforcement operators via high-tech, ethical hacking (as was the case in the EncroChat operation). And even if crime has shown the capability to infiltrate any sector of society, such as sports, where money can be laundered over digital networks and matches rigged and coordinated via chats, technology can help spot anomalies in money transfers, and data science can spot anomalies in matches, thereby thwarting such crime. A recent United Nations-sponsored event, attended by the International Centre for Sport Security (ICSS) Qatar and the College of Science and Engineering (CSE) at Hamad Bin Khalifa University (HBKU), discussed these topics. In the end, the very same technology that is used by criminals is also used to fight crime itself.

Don’t get left behind

In the above-depicted cybersecurity cat-and-mouse game, the loser is the party that does not update its tools, does not plan, and does not evolve.

In particular, cybersecurity can help a country such as Qatar along two strategic dimensions: better preventing, detecting and reacting to the criminal use of technology, and advancing robustly toward a knowledge-based economy, reinforcing the country’s presence in the segment of high value-added services and products to fight crime.

In this context, a safe bet is to invest in education, for both governments and private citizens. On the one hand, only an educated workforce would be able to conceptualize/design/implement advanced cybersecurity tools and frameworks, as well as strategically frame the fight against crime. On the other hand, the same well-educated workforce will be able to spur innovation, create start-ups, produce novel high-skill products, and diversify the economy. 

In this context, Qatar enjoys a head start thanks to its huge investment in education over the last 20 years, particularly at HBKU – part of Qatar Foundation – where we have been educating future generations.

CSE engages and leads in research disciplines of national and global importance. The college’s speciality divisions are firmly committed to excellence in graduate teaching and the training of highly qualified students with entrepreneurial capacity.

For instance, the MS in Cybersecurity offered by CSE touches on the foundations of cryptocurrencies, while the PhD in Computer Science and Engineering, offering several majors (including cybersecurity), prepares future high-level decision-makers, researchers, and entrepreneurs in the ICT domain – the leaders who will be driving the digitalization of the economy and leading the techno-fight against crime.


Science & Technology

Enhancing poverty measurement through big data


Authors: Jasmina Ernst and Ruhimat Soerakoesoemah*

Ending poverty in all its forms is the first of the 17 Sustainable Development Goals (SDGs). While significant progress to reduce poverty had been made at the global and regional levels by 2019, the Covid-19 pandemic has partly reversed this trend. A significant share of the population in South-East Asia still lacks access to basic needs such as health services, proper nutrition and housing, causing many children to suffer from malnutrition and treatable illnesses. 

Delivering on the commitments of the 2030 Agenda for Sustainable Development and leaving no one behind requires monitoring of the SDG implementation trends. At the country level, national statistics offices (NSOs) are generally responsible for SDG data collection and reporting, using traditional data sources such as surveys, census and administrative data. However, as the availability of data for almost half of the SDG indicators (105 of 231) in South-East Asia is insufficient, NSOs are exploring alternative sources and methods, such as big data and machine learning, to address the data gaps. Currently, earth observation and mobile phone data receive most attention in the domain of poverty reporting. Both data sources can significantly reduce the cost of reporting, as the data collection is less time and resource intensive than for conventional data.

The NSOs of Thailand and the Philippines, with support from the Asian Development Bank, conducted a feasibility study on the use of earth observation data to predict poverty levels. In the study, a convolutional neural network was pretrained on the ImageNet database to detect simple low-level features in images, such as lines or curves. Following a transfer learning technique, the network was then trained to predict the intensity of night lights from features in corresponding daytime satellite images. Afterwards, income-based poverty levels were estimated using the same features that were found to predict night-light intensity, combined with nationwide survey data, register-based data and geospatial information. The resulting machine learning models yielded an accuracy of up to 94 per cent in predicting the poverty categories of satellite images. Despite promising study results, scaling up the models and integrating big data and machine learning for poverty statistics and SDG reporting still face many challenges. Thus, NSOs need support to train their staff, gain continuous access to new datasets and expand their digital infrastructure.
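In outline, the pipeline resembles the PyTorch sketch below. The three-way night-light binning and the data pairings are illustrative assumptions; the article does not specify the study’s exact architecture or labels.

```python
import torch.nn as nn
from torchvision import models

# Step 1: start from ImageNet-pretrained features (edges, curves, textures).
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Step 2: retrain the head to predict night-light intensity (here binned
# into low/medium/high) from the corresponding daytime satellite tiles.
backbone.fc = nn.Linear(backbone.fc.in_features, 3)
# ... training loop over (daytime_tile, nightlight_bin) pairs goes here ...

# Step 3: reuse the learned representations (activations before the head)
# alongside survey, register and geospatial data in a downstream model
# that estimates income-based poverty categories.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
```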

Some support is available to NSOs for big data integration. The UN Committee of Experts on Big Data and Data Science for Official Statistics (UN-CEBD) oversees several task teams, including the UN Global Platform which has launched a cloud-service ecosystem to facilitate international collaboration with respect to big data. Two additional task teams focus on Big Data for the SDGs and Earth Observation data, providing technical guidance and trainings to NSOs. At the regional level, the weekly ESCAP Stats Café series provides a knowledge sharing platform for experiences related to the impact of COVID-19 on national statistical systems. The Stats Café includes multiple sessions dedicated to the use of alternative data sources for official statistics and the SDGs. Additionally, ESCAP has published policy briefs on the region’s practices in using non-traditional data sources for official statistics.

Mobile phone data can also be used to understand socioeconomic conditions in the absence of traditional statistics and to provide greater granularity and frequency for existing estimates. Call detail records coupled with airtime credit purchases, for instance, could be used to infer economic density, wealth or poverty levels, and to measure food consumption. An example can be found in poverty estimates for Vanuatu based on education, household characteristics and expenditure. These were generated by Pulse Lab Jakarta – a joint innovation facility associated with UN Global Pulse and the government of Indonesia.

Access to mobile phone data, however, remains a challenge. It requires long negotiations with mobile network operators, finding the most suitable data access model, ensuring data privacy and security, training the NSO staff and securing dedicated resources. The UN-CEBD – through the Task Team on Mobile Phone Data and ESCAP – supports NSOs in accessing and using mobile phone data through workshops, guides and the sharing of country experiences. BPS Statistics Indonesia, the Indonesian NSO, is exploring this data source for reporting on four SDG indicators and has been leading the regional efforts in South-East Asia. While several other NSOs in Asia and the Pacific can access mobile phone data or are negotiating access with mobile network operators, none of them have integrated it into poverty reporting.

As interest and experience in the use of mobile phone data, satellite imagery and other alternative data sources for the SDGs grow among South-East Asian NSOs, so does the need for training and capacity-building. Continuous knowledge exchange and collaboration is the best long-term strategy for NSOs and government agencies to track and alleviate poverty, and to measure the other 16 SDGs.

*Ruhimat Soerakoesoemah, Head, Sub-Regional Office for South-East Asia

UNESCAP

