Science & Technology

Artificial Intelligence: A Blessing or a Threat for Humanity?

In August 2018, Czech Technical University in Prague simultaneously hosted several conferences on AI-related topics: human-level AI, artificial general intelligence, biologically inspired cognitive architectures, and neural-symbolic integration technology. Reports were presented by prominent experts representing global leaders in artificial intelligence: Microsoft, Facebook, DARPA, MIT and Good AI. The reports described the current status of AI developments, identified the problems facing society that have yet to be resolved, and highlighted the threats arising from the further development of this technology. In this review, we will attempt to briefly identify the main problems and threats, as well as the possible ways to counter these threats.

To begin with, let us provide definitions for some of the terms that are commonly used in conjunction with AI in various contexts: weak, or specialized, AI; autonomous AI; adaptive AI; artificial general intelligence (AGI); strong AI; human-level AI; and super-human AI.

Weak, or specialized, AI is represented by all existing solutions without exception and implies the automated solution of one specific task, be it a game of Go or face recognition with CCTV footage. Such systems are incapable of independent learning for the purpose of solving other problems: they can only be reprogrammed by humans to do so.

Autonomous AI implies a system’s ability to function for protracted periods of time without the intervention of a human operator. This could be a solar-powered UAV performing a multi-day flight from Champs-Elysees in Paris to Moscow’s Red Square or back, independently selecting its route and recharging stops while avoiding all sorts of obstacles.

Adaptive AI implies the system’s ability to adapt to new situations and obtain knowledge that it did not possess at the time of its creation. For example, a system originally tasked with conducting conversations in Russian could independently learn new languages and apply this knowledge in conversation if it found itself in a new language environment or if it deliberately studied educational materials on these new languages.

Artificial general intelligence implies adaptability of such a high level that the corresponding system could, given the appropriate training, be used in a wide variety of activities. New knowledge could either be self-taught or learned with the help of an instructor. It is in this same sense that the notion of strong AI is often used in opposition to weak or specialized AI.

Human-level AI implies a level of adaptability comparable to that of a human being, meaning that the system is capable of mastering the same skills as a human and within comparable periods of time.

Super-human AI implies even greater adaptability and learning speeds, allowing the system to master knowledge and skills that humans would never be able to acquire.

Fundamental Problems Associated with Creating a Strong AI

Despite the multitude of advances in neuroscience, we still do not know exactly how natural intelligence works. For the same reason, we do not know for certain how to create artificial intelligence. There are a number of known problems that need to be resolved, as well as differing opinions as to how these problems should be prioritized. For example, Ben Goertzel, who heads OpenCog and SingularityNET, open-source international projects to create artificial intelligence, believes that all the requisite technologies for creating an artificial general intelligence have already been developed, and that the only remaining task is to combine them in a way that ensures the necessary synergy. Other experts are more sceptical, pointing out that many of the problems discussed below need to be resolved first. Expert estimates of when a strong AI may be created also vary greatly, from ten or so years to several decades from now.

On the other hand, the emergence of a strong AI is as logical within the general process of evolution as the emergence of molecules from atoms and of cells from molecules, the development of the central nervous system from specialized cells, the emergence of social structures, and the development of speech, writing and, ultimately, information technology. Valentin Turchin demonstrated the logic behind the increasing complexity of information structures and organizational mechanisms in the course of evolution. Unless humanity perishes first, this evolution is inevitable and will, in the long run, rescue humankind, as only non-biological lifeforms will be able to survive the inevitable end of the Solar System and preserve our civilization’s information code in the Universe.

It is important to realize that creating a strong AI does not necessarily require an understanding of how natural intelligence works, just as developing a rocket does not require understanding how a bird flies. Such an AI will certainly be created, sooner or later, in one way or another, and perhaps even in several different ways.

Most experts identify the following fundamental problems that need to be solved before a general or strong AI can be created:

Few-shot learning: developing systems that can learn from a small amount of material, in contrast to current deep-learning systems, which require massive amounts of specifically prepared learning materials.

Strong generalization: creating recognition technologies capable of identifying objects in situations that differ from those in which the objects were encountered in the learning materials.

Generative learning models: developing learning technologies that memorize not the features of the object to be recognized but the principles of its formation. This would help capture the more profound characteristics of objects, providing for faster learning and stronger generalization.

Structured prediction and learning: developing learning technologies based on representing learning objects as multi-layered hierarchical structures, with lower-level elements defining higher-level ones. This could prove an alternative solution to the problems of fast learning and strong generalization.

Solving the problem of catastrophic forgetting, which affects the majority of existing systems: a system originally trained on one class of objects and then additionally trained to recognize a new class of objects loses the ability to recognize objects of the original class (a toy illustration of this effect follows this list).

Achieving an incremental learning ability, particularly for systems intended for interaction in natural languages: the ability to gradually accumulate knowledge and perfect skills, acquiring new knowledge without losing what was previously learned. Ideally, such a system should pass the so-called Baby Turing Test by demonstrating its ability to gradually master a language from the baby level to the adult level.

Solving the consciousness problem, i.e. coming up with a proven working model for conscious behaviour that ensures effective prediction and deliberate behaviour through the formation of an “internal worldview,” which could be used for seeking optimum behavioural strategies to achieve goals without actually interacting with the real world. This would significantly improve security and the testing of hypotheses while increasing the speed and energy efficiency of such checks, thus enabling a live or artificial system to learn independently within the “virtual reality” of its own consciousness. There are two applied sides to solving the consciousness problem. On the one hand, creating conscious AI systems would increase their efficiency dramatically. On the other hand, such systems would come with both additional risks and ethical problems, seeing as they could, at some point, be equated to the level of self-awareness of human beings, with the ensuing legal consequences.
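As a concrete illustration of the catastrophic forgetting problem listed above, the following is a minimal sketch in Python, assuming PyTorch and scikit-learn are available; the toy two-task setup, the tiny network and the synthetic data are illustrative assumptions, not part of any system described in this article.

```python
# Minimal sketch of catastrophic forgetting on synthetic data (illustrative only).
# A small classifier is trained on classes 0-4, then fine-tuned only on classes 5-9;
# accuracy on the original classes collapses because nothing preserves the old knowledge.
import torch
import torch.nn as nn
from sklearn.datasets import make_blobs

def make_task(classes, seed=0):
    # Well-separated synthetic clusters standing in for a real dataset.
    X, y = make_blobs(n_samples=2000, centers=10, cluster_std=1.0, random_state=seed)
    mask = torch.tensor([int(c) in classes for c in y])
    return torch.tensor(X, dtype=torch.float32)[mask], torch.tensor(y, dtype=torch.long)[mask]

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(X, y, epochs=200):
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

def accuracy(X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

X_old, y_old = make_task({0, 1, 2, 3, 4})
X_new, y_new = make_task({5, 6, 7, 8, 9})

train(X_old, y_old)
print("accuracy on old classes after task 1:", accuracy(X_old, y_old))  # high

train(X_new, y_new)  # sequential training on the new classes only
print("accuracy on old classes after task 2:", accuracy(X_old, y_old))  # drops sharply
print("accuracy on new classes after task 2:", accuracy(X_new, y_new))  # high
```

Incremental learning, in this toy framing, would mean finding a training procedure for the second task that keeps the first accuracy figure from collapsing.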

Potential AI-Related Threats

Even the emergence of autonomous or adaptive AI systems, let alone general or strong AI, is associated with several threats of varying degrees of severity that are relevant today.

The first threat to humans may not necessarily be presented by a strong, general, human-level or super-human AI, as it would be enough to have an autonomous system capable of processing massive amounts of data at high speeds. Such a system could be used as the basis for so-called lethal autonomous weapons systems (LAWS), the simplest example being drone assassins (3D-printed in large batches or in small numbers).

Second, a threat could be posed by a state (a potential adversary) gaining access to weapons systems based on more adaptive, autonomous and general AI, with improved reaction times and better predictive ability.

Third, a threat to the entire world would arise from the previous one: several states entering a new round of the arms race, perfecting the intelligence of autonomous weapons systems, as Stanislaw Lem predicted several decades ago.

Fourth, a threat to any party would be presented by any intelligent system (not necessarily a combat system; it could have industrial or domestic applications too) with enough autonomy and adaptivity to be capable not only of deliberate activity but also of autonomous, conscious target-setting, which could run counter to the individual and collective goals of humans. Such a system would have far more opportunities to achieve its goals thanks to its higher operating speeds, greater information-processing capacity and better predictive ability. Unfortunately, humanity has not yet fully researched or even grasped the scale of this particular threat.

Fifth, society faces a threat in the transition to a new level in the development of production relations in capitalist (or totalitarian) societies, in which a minority controls material production and, thanks to ever-growing automation, excludes an overwhelming majority of the population from this sector. This may result in greater social stratification, the reduced effectiveness of “social elevators” and an increase in the number of people made redundant, with adverse social consequences.

Finally, another potential threat to humanity in general is the growing autonomy of global data-processing, information-distribution and decision-making systems, since the speed of information distribution within such systems, and the scale of their interactions, could produce social phenomena that cannot be predicted on the basis of prior experience or existing models. For example, the social credit system currently being introduced in China is a unique experiment of truly civilizational scale that could have unpredictable consequences.

The problem of controlling artificial intelligence systems is currently compounded by, among other things, the closed nature of existing applications based on “deep neural networks.” Such applications make it impossible to validate the correctness of decisions prior to implementation and do not allow the solution provided by the machine to be analysed after the fact. This phenomenon is being addressed by the emerging field of explainable artificial intelligence (XAI). The effort is aided by renewed interest in integrating the associative (neural) and symbolic (logic-based) approaches to the problem.

Ways to Counter the Threats

It appears absolutely necessary to take the following measures in order to prevent catastrophic scenarios associated with the further development and application of AI technologies.

An international ban on LAWS, as well as the development and introduction of international measures to enforce such a ban.

Governmental backing for research into the aforementioned problems (into “explainable AI” in particular), for the integration of different approaches, and for studying the principles of creating target-setting mechanisms, with the aim of developing effective programming and control tools for intelligent systems. Such programming should be based on values rather than rules, and it is targets that need to be controlled, not actions.

Democratizing access to AI technologies and methods, including by re-investing the profits from the introduction of intelligent systems into the mass teaching of computing and cognitive technologies, creating open-source AI solutions, and devising measures to encourage existing “closed” AI systems to open their source code. For example, the Aigents project aims to create AI personal agents for mass users that would operate autonomously and be immune to centralized manipulation.

Intergovernmental regulation of the openness of AI algorithms and of the operating protocols of data-processing and decision-making systems, including the possibility of independent audits by international structures, national agencies and individuals. One initiative in this direction is the SingularityNET open-source platform and ecosystem for AI applications.

First published in our partner RIAC

Science & Technology

From nanotechnology to solar power: Solutions to drought

As the drought intensifies in Iran and the country faces water stress, experts and officials are proposing various solutions, from the use of solar power plants to the expansion of watershed management and nanotechnology.

Iran is located in an arid and semi-arid region, and Iranians have long sought to make the most of water.

In recent years the drought has intensified, making water resources fragile; it can be said that Iran has reached water bankruptcy.

Water stress will continue this fall (September 23 to December 21), a season expected to be relatively hot and short of rain, according to Ahad Vazifeh, head of the national center for drought and crisis management.

In such a situation, officials and experts propose various solutions for optimal water management.

Alireza Qazizadeh, a water and environment expert, noting that 80 percent of the country consists of arid regions, said that “Iran has one percent of the earth’s area and receives only 36 percent of renewable resources.

The country receives 250 mm of rainfall annually, which amounts to about 400 billion cubic meters; considering 70 percent evaporation, only 130 billion cubic meters of renewable water remain, plus 13 billion cubic meters of inflow from border waters.”
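The rainfall-to-volume arithmetic quoted above can be checked with a quick back-of-the-envelope calculation; a minimal sketch in Python follows, where the land area figure (about 1.65 million square kilometres) is an assumption introduced for the check, not a number from the article.

```python
# Back-of-the-envelope check of the water-budget figures quoted above.
AREA_KM2 = 1.648e6       # assumed land area of Iran, not a figure from the article
RAINFALL_MM = 250        # average annual rainfall quoted in the article
EVAPORATION_SHARE = 0.70

# 1 mm of rain over 1 km^2 equals 1,000 cubic meters of water.
total_m3 = RAINFALL_MM * 1_000 * AREA_KM2
renewable_m3 = total_m3 * (1 - EVAPORATION_SHARE)

print(f"total precipitation: {total_m3 / 1e9:.0f} billion m^3")    # ~412, close to the quoted 400
print(f"renewable water:     {renewable_m3 / 1e9:.0f} billion m^3")  # ~124, in line with the quoted 130
```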

Referring to the global averages of 800 mm of rainfall and 700 mm of evaporation, he noted that 70 percent of Iran’s rainfall occurs in only 25 percent of the country, and only 25 percent of it falls during the irrigation seasons.

Pointing to the need for 113 billion cubic meters of water in the current year (which began on March 21), he stated that “of this amount, 102 billion is projected for agricultural use, 7 percent for drinking and 2 percent for industry, and at this point water stress occurs.

In 2001, 5.5 billion cubic meters of underground resources were being withdrawn annually; sustained over the 20 years since then, that comes to roughly 110 billion cubic meters, the equivalent of a full year of the country’s water consumption drawn from non-renewable resources, which is alarming.”

The use of unconventional water sources, such as rainwater and river runoff, desalinated water, and municipal wastewater that can be reused after treatment, can be effective in controlling drought, he concluded.

Rasoul Sarraf, of the Faculty of Materials at Shahid Modarres University, suggests a different solution, stating that “to ease water stress, we have no choice but to use nanotechnology and solar power plants.”

Noting that sunshine is the main requirement for solar power plants and that the country enjoys some 300 sunny days a year, he said that under the Paris Convention Iran was required to reduce emissions by 4 percent unconditionally and 8 percent conditionally, a target that can only be achieved by using solar power plants.

Hamidreza Zakizadeh, deputy director of watershed management at Tehran’s Department of Natural Resources and Watershed Management, believes that watershed management can at least reduce the effects of drought by managing floods and extracting water for farmers.

Amir Abbas Ahmadi, head of habitats and regional affairs at the Tehran Department of Environment, also referred to the severe drought in Tehran and pointed to the need for a comprehensive water-management plan, saying that controlling the situation requires cooperation among the several responsible bodies.

He also emphasized the need to control migration to the capital and construction, and to implement the Comprehensive Plan of Tehran city.

While various solutions are proposed by officials and experts to manage water and deal with drought, it is necessary for the related organizations to work together to manage the current situation.

Mohammad Reza Espahbod, an expert in groundwater resources, suggested that while the country is dealing with severe drought due to improper withdrawal of groundwater and low rainfall, karst water resources could supply all the water the country needs, if properly managed.

Iran ranks fifth in the world in terms of karst water resources, he stated.

Qanats can also be effective in containing water scarcity thanks to their relatively low cost, low evaporation rates and minimal need for technical knowledge; moreover, they have proved sustainable, having been used in perpetuity without damaging the environment.

According to the Ministry of Energy, about 36,300 qanats have been identified in Iran, some of which have been supplying water for over 2,000 years.

In recent years, 3,800 qanats have been rehabilitated through watershed and aquifer management, and people who had migrated due to water scarcity have returned to their homes.

Water resources shrinking

Renewable water resources have decreased by 30 percent over the last four decades, while Iran’s population has increased by about 2.5 times, Qasem Taqizadeh, deputy minister of energy, said in June.

The current water year (which started on September 23, 2020) has seen the lowest rainfall in the past 52 years, so an understanding of climate change and of Iran’s arid climate should take hold at all levels, he lamented.

A recent report in Nature Scientific Journal on Iran’s water crisis indicates that from 2002 to 2015 over 74 billion cubic meters were extracted from aquifers, an unprecedented amount whose replenishment would take thousands of years, making urgent action necessary.

Three Iranian scientists studied 30 basins in the country and found that aquifer depletion over the 14-year period amounted to about 74 billion cubic meters; their findings were recently published in Nature Scientific Journal.

Over-extraction across 77 percent of Iran has also led to more land subsidence and soil salinity. Research and statistics show that the average overdraft from the country’s aquifers was about 5.2 billion cubic meters per year.

Mohammad Darvish, head of the environment group in the UNESCO Chair on Social Health, has said that the situation of groundwater resources is worrisome.

From our partner Tehran Times

Science & Technology

Technology and crime: A never-ending cat-and-mouse game

Is technology a good or a bad thing? It depends on who you ask, as it is more about the way technology is used. After all, technology can be used by criminals but can also be used to catch criminals, creating a fascinating cat-and-mouse game.

Countless ways technology can be used for evil

The first spear was used to improve hunting and to defend against attacking beasts. However, it was also soon used against other humans; nuclear power is used to produce energy, but it was also used to annihilate whole cities. Looking at today’s news, we have learned that cryptocurrencies can be (and are) used as the preferred form of payment for ransomware, since they provide an anonymous, reliable, and fast payment method for cybercriminals.

Similarly, secure phones provide criminal rings with a fast and easy way to coordinate their rogue activities. The list could go on. Ultimately, all technological advancements can be used for good or evil. Indeed, technology is not inherently bad or good; it is its usage that makes the difference. After all, spears served well in preventing the extinction of humankind, nuclear power is used to generate energy, cryptocurrency promises to democratize finance, and mobile phones are the device of choice of billions of people daily (you too are probably reading this piece on a mobile).

However, what is new with respect to the past (recent and distant) is that technology is nowadays much more widespread, pervasive, and easier to manipulate than it was some time ago. Indeed, not all of us are experts in nuclear material, or willing and capable of effectively throwing a spear at someone else. But each of us is surrounded by, and uses, technology, with a sizeable part of users also capable of modifying that technology to better serve their purposes (think of computer scientists, programmers, coding kids – technology democratization).

This huge reservoir of people capable of using technology in ways other than those it was devised for is not made up of ethical hackers alone: there are black hats as well (that is, technology experts who support evil uses of technology). In technical terms, the attack surface and the security perimeter have dramatically expanded, leading to a scenario in which technology can easily be exploited for rogue purposes by large cohorts of people attacking any of the many assets that are nowadays vulnerable – the cybersecurity domain provides the best example of this scenario.

Fast-paced innovation and unprecedented threats

What is more, technological development will not stop. On the contrary, we are experiencing an exponentially fast pace of innovation, with ever less time between innovation cycles that, while improving our way of living, also pave the way for novel, unprecedented threats. For instance, the advent of quantum computers will render the majority of current encryption and digital-signature methods useless, leaving what was encrypted and signed in the past exposed.

The tension between legitimate and illegitimate uses of technology is also heating up. For instance, there are discussions in the US and the EU about requiring providers of ICT services to hand over the decryption keys of future secure applications to law enforcement agencies should the need arise – a debatable measure.

However, technology is the very weapon we need to fight crime. Think of the use of Terahertz technology to discover the smuggling of drugs and explosives – the very same technology Qatar has successfully employed. Or the infiltration of mobile-phone crime rings by law enforcement operators via high-tech, ethical hacking (as was the case in the EncroChat operation). And even though crime has shown the capability to infiltrate any sector of society, such as sports, where money can be laundered over digital networks and matches can be rigged and coordinated via chats, technology can help spot anomalies in money transfers, data science can spot anomalies in matches, and both can therefore thwart such crime – a recent United Nations-sponsored event, attended by the International Centre for Sport Security (ICSS) Qatar and the College of Science and Engineering (CSE) at Hamad Bin Khalifa University (HBKU), discussed this very topic. In the end, the very same technology that is used by criminals is also used to fight crime itself.
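To make the anomaly-spotting idea above concrete, here is a minimal sketch in Python using scikit-learn's IsolationForest on synthetic transfer data; the feature set, the numbers and the contamination rate are illustrative assumptions, not details of any real operation mentioned in the article.

```python
# Illustrative sketch: flagging anomalous money transfers with an Isolation Forest.
# All data below is synthetic and invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per transfer: log-amount, hour of day, transfers per day.
normal = np.column_stack([
    rng.normal(3.0, 0.5, 5000),   # typical amounts
    rng.normal(14, 3, 5000),      # mostly daytime activity
    rng.normal(2, 1, 5000),       # a couple of transfers per day
])
suspicious = np.column_stack([
    rng.normal(6.0, 0.3, 20),     # unusually large amounts
    rng.normal(3, 1, 20),         # middle of the night
    rng.normal(15, 3, 20),        # bursts of transfers
])
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.005, random_state=0).fit(X)
flags = detector.predict(X)       # -1 marks a suspected outlier
print("flagged transfers:", int((flags == -1).sum()), "out of", len(X))
```

In practice, the same pattern – engineer features per transaction or per match, then score them with an unsupervised detector – is what lets analysts surface candidates for human investigation.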

Don’t get left behind

In the above-depicted cybersecurity cat-and-mouse game, the loser is the party that does not update its tools, does not plan, and does not evolve.

In particular, cybersecurity can help a country such as Qatar along two strategic dimensions: better preventing, detecting and reacting to the criminal use of technology, and advancing robustly toward a knowledge-based economy while reinforcing the country’s presence in the segment of high value-added services and products to fight crime.

In this context, a safe bet is to invest in education, for both governments and private citizens. On the one hand, only an educated workforce would be able to conceptualize/design/implement advanced cybersecurity tools and frameworks, as well as strategically frame the fight against crime. On the other hand, the same well-educated workforce will be able to spur innovation, create start-ups, produce novel high-skill products, and diversify the economy. 

In this context, Qatar enjoys a head start, thanks to its huge investment in education over the last 20 years. This is particularly true at HBKU – part of Qatar Foundation – where we have been educating future generations.

CSE engages and leads in research disciplines of national and global importance. The college’s speciality divisions are firmly committed to excellence in graduate teaching and to training highly qualified students with entrepreneurial capacity.

For instance, the MS in Cybersecurity offered by CSE touches on the foundations of cryptocurrencies, while the PhD in Computer Science and Engineering, which offers several majors (including cybersecurity), prepares future high-level decision-makers, researchers, and entrepreneurs in the ICT domain – the leaders who will be driving the digitalization of the economy and leading the techno-fight against crime.

Science & Technology

Enhancing poverty measurement through big data

Authors: Jasmina Ernst and Ruhimat Soerakoesoemah*

Ending poverty in all its forms is the first of the 17 Sustainable Development Goals (SDGs). While significant progress in reducing poverty had been made at the global and regional levels by 2019, the COVID-19 pandemic has partly reversed this trend. A significant share of the population in South-East Asia still lacks access to basic needs such as health services, proper nutrition and housing, causing many children to suffer from malnutrition and treatable illnesses.

Delivering on the commitments of the 2030 Agenda for Sustainable Development and leaving no one behind requires monitoring of the SDG implementation trends. At the country level, national statistics offices (NSOs) are generally responsible for SDG data collection and reporting, using traditional data sources such as surveys, census and administrative data. However, as the availability of data for almost half of the SDG indicators (105 of 231) in South-East Asia is insufficient, NSOs are exploring alternative sources and methods, such as big data and machine learning, to address the data gaps. Currently, earth observation and mobile phone data receive most attention in the domain of poverty reporting. Both data sources can significantly reduce the cost of reporting, as the data collection is less time and resource intensive than for conventional data.

The NSOs of Thailand and the Philippines, with support from the Asian Development Bank, conducted a feasibility study on the use of earth observation data to predict poverty levels. In the study, a convolutional neural network was pretrained on the ImageNet database to detect simple low-level features in images, such as lines or curves. Following a transfer-learning technique, the network was then trained to predict the intensity of night lights from features in corresponding daytime satellite images. Afterwards, income-based poverty levels were estimated using the same features that were found to predict night-light intensity, combined with nationwide survey data, register-based data and geospatial information. The resulting machine learning models yielded an accuracy of up to 94 per cent in predicting the poverty categories of satellite images. Despite the promising study results, scaling up the models and integrating big data and machine learning into poverty statistics and SDG reporting still face many challenges. Thus, NSOs need support to train their staff, gain continuous access to new datasets and expand their digital infrastructure.
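The pipeline described above can be sketched in a few lines of Python; this is a simplified, assumed setup using torchvision's ResNet-18 as the pretrained backbone, a hypothetical data loader of (daytime image, night-light class) pairs, and a ridge regression for the final poverty step – not the exact models or data used in the ADB study.

```python
# Simplified sketch of the transfer-learning approach described above:
# 1) start from an ImageNet-pretrained CNN, 2) retrain its head to predict
# night-light intensity from daytime satellite images, 3) reuse the learned
# features, together with survey data, to estimate poverty levels.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import Ridge

N_LIGHT_CLASSES = 3  # e.g. low / medium / high night-light intensity (an assumption)

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, N_LIGHT_CLASSES)

def train_nightlight_head(model, loader, epochs=5):
    """Fine-tune the new head so daytime-image features predict night-light class."""
    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, light_class in loader:  # hypothetical DataLoader of labelled tiles
            opt.zero_grad()
            loss_fn(model(images), light_class).backward()
            opt.step()

def extract_features(model, images):
    """Use everything except the final layer as a feature extractor."""
    trunk = nn.Sequential(*list(model.children())[:-1])
    with torch.no_grad():
        return trunk(images).flatten(1)  # (n_images, 512) feature vectors

def fit_poverty_model(area_features, survey_poverty_rates):
    """Relate per-area image features to survey-based poverty rates."""
    return Ridge(alpha=1.0).fit(area_features, survey_poverty_rates)
```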

Some support is available to NSOs for big data integration. The UN Committee of Experts on Big Data and Data Science for Official Statistics (UN-CEBD) oversees several task teams, including the UN Global Platform which has launched a cloud-service ecosystem to facilitate international collaboration with respect to big data. Two additional task teams focus on Big Data for the SDGs and Earth Observation data, providing technical guidance and trainings to NSOs. At the regional level, the weekly ESCAP Stats Café series provides a knowledge sharing platform for experiences related to the impact of COVID-19 on national statistical systems. The Stats Café includes multiple sessions dedicated to the use of alternative data sources for official statistics and the SDGs. Additionally, ESCAP has published policy briefs on the region’s practices in using non-traditional data sources for official statistics.

Mobile phone data can also be used to understand socioeconomic conditions in the absence of traditional statistics and to provide greater granularity and frequency for existing estimates. Call detail records coupled with airtime credit purchases, for instance, could be used to infer economic density, wealth or poverty levels, and to measure food consumption. An example can be found in poverty estimates for Vanuatu based on education, household characteristics and expenditure. These were generated by Pulse Lab Jakarta – a joint innovation facility associated with UN Global Pulse and the government of Indonesia.
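As a rough illustration of how call detail records might be turned into poverty-related features, here is a minimal sketch in Python using pandas and scikit-learn; the column names, regions and poverty rates are entirely hypothetical and are not drawn from the Vanuatu work mentioned above.

```python
# Illustrative sketch: aggregating call detail records (CDRs) into per-region
# features and relating them to survey-based poverty rates. All data is invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical CDR table: one row per call or airtime top-up event.
cdr = pd.DataFrame({
    "region":       ["A", "A", "B", "B", "B", "C"],
    "call_seconds": [120, 45, 300, 60, 15, 600],
    "topup_amount": [2.0, 0.5, 5.0, 1.0, 0.0, 10.0],
})

# Per-region features: activity volume and average airtime purchase.
features = cdr.groupby("region").agg(
    total_call_seconds=("call_seconds", "sum"),
    mean_topup=("topup_amount", "mean"),
    n_events=("call_seconds", "size"),
)

# Hypothetical survey-based poverty rates for the same regions.
survey = pd.Series({"A": 0.35, "B": 0.20, "C": 0.05}, name="poverty_rate")

# With real data there would be many regions; three rows are only for illustration.
model = LinearRegression().fit(features, survey.loc[features.index])
print(dict(zip(features.columns, model.coef_.round(3))))
```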

Access to mobile phone data, however, remains a challenge. It requires long negotiations with mobile network operators, finding the most suitable data access model, ensuring data privacy and security, training the NSO staff and securing dedicated resources. The UN-CEBD – through the Task Team on Mobile Phone Data and ESCAP – supports NSOs in accessing and using mobile phone data through workshops, guides and the sharing of country experiences. BPS Statistics Indonesia, the Indonesian NSO, is exploring this data source for reporting on four SDG indicators and has been leading the regional efforts in South-East Asia. While several other NSOs in Asia and the Pacific can access mobile phone data or are negotiating access with mobile network operators, none of them have integrated it into poverty reporting.

As interest and experience in the use of mobile phone data, satellite imagery and other alternative data sources for the SDGs grow among many South-East Asian NSOs, so does the need for training and capacity-building. Continuous knowledge exchange and collaboration is the best long-term strategy for NSOs and government agencies to track and alleviate poverty, and to measure the other 16 SDGs.

*Ruhimat Soerakoesoemah, Head, Sub-Regional Office for South-East Asia

UNESCAP
