
Future Goals in the AI Race: Explainable AI and Transfer Learning

Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at Go, arguably the most complex game humankind has invented, and can imitate human voices and faces (both real and non-existent) with deceptive realism. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the “kings of the world”?

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research, in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term “general intelligence,” meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning and minimizing risks and losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human but in a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as “strong AI,” as opposed to the “weak AI” that has become commonplace in recent years.

As applied AI technology has developed over the last 60 years, many practical applications, such as knowledge bases, expert systems, image recognition systems, prediction systems, and tracking and control systems for various technological processes, are no longer viewed as examples of AI and have become part of “ordinary technology.” The bar for what constitutes AI rises accordingly, and today it is the hypothetical “general intelligence,” human-level intelligence or “strong AI,” that is assumed to be the “real thing” in most discussions. Technologies already in use are instead classed as knowledge engineering, data science or specific areas of “narrow AI” that combine elements of different AI approaches with specialized mathematical or humanities disciplines, such as stock market or weather forecasting, speech and text recognition, and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches have been the most popular. The first, “symbolic” approach assumes that the roots of AI lie in philosophy, logic and mathematics, and that intelligence operates according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily drawing closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic and probabilistic programming, reproducing network architectures akin to the neural networks that evolved within the neuromorphic approach. Conversely, methods based on “artificial neural networks” are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There “Holes” in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistic regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of labelled datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline in computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. It was the combination of these factors that made it possible to train and configure neural network algorithms to make a quantitative leap, and to provide cost-effective solutions to a broad range of applied problems in recognition, classification and prediction. The biggest successes have come from systems based on “deep learning” networks that build on the idea of the “perceptron” suggested 60 years ago by Frank Rosenblatt. However, these achievements have also uncovered a range of problems that cannot be solved using existing neural network methods.
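Before turning to those problems, it is worth recalling how simple the building block at the root of this lineage is. The sketch below is a minimal NumPy illustration of Rosenblatt’s perceptron learning rule; the toy data and parameter values are invented for the example.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt's rule: nudge the weights whenever an example
    is misclassified. X is (n, d); y holds -1/+1 labels."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi        # move the boundary toward/away from xi
                b += lr * yi
    return w, b

# Toy linearly separable data, invented for the example.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # -> [-1. -1.  1.  1.]
```

Deep learning stacks many layers of units of essentially this kind and trains them jointly by gradient descent, but the perceptron captures the core idea of learning a decision boundary from examples.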

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
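Such attacks can be mounted with remarkably little machinery. The sketch below illustrates the well-known fast gradient sign method (FGSM) against a stand-in PyTorch classifier; the tiny model, random input and epsilon value are assumptions made for the example, not any production system.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, eps=0.05):
    """Fast gradient sign method: take one small step per input feature,
    each in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# A stand-in classifier and one "image" flattened to 16 features.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
x, label = torch.rand(1, 16), torch.tensor([0])
x_adv = fgsm_attack(model, x, label)
print(model(x).argmax(1), model(x_adv).argmax(1))  # the prediction may flip
```

The perturbation is small in every pixel, yet because it is aligned with the loss gradient everywhere at once, it can change the model’s decision while remaining invisible to a human observer.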

Fourth, a mismatch between a neural network’s information capacity and parameters and the picture of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. A system first trained to identify situations in one set of contexts and then fine-tuned to recognize them in a new set of contexts may lose the ability to recognize them in the old set. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
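The effect is easy to reproduce on toy data. The sketch below, a minimal illustration using scikit-learn’s SGDClassifier on two synthetic “tasks” invented for the example, shows accuracy on an old task collapsing after sequential fine-tuning on a new one.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def task(shift):
    """Synthetic binary task; `shift` changes where the true boundary lies."""
    X = rng.normal(size=(400, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

Xa, ya = task(shift=0.0)   # "old" task, e.g. urban pedestrians
Xb, yb = task(shift=5.0)   # "new" task with a different boundary

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(Xa, ya, classes=[0, 1])
acc_before = clf.score(Xa, ya)

for _ in range(50):        # sequential fine-tuning on the new task only
    clf.partial_fit(Xb, yb)

# Accuracy on the old task degrades sharply after fine-tuning.
print(f"old task: {acc_before:.2f} -> {clf.score(Xa, ya):.2f}")
```

Known mitigations, such as rehearsing stored examples from old tasks or penalizing changes to weights important for them, reduce the effect but do not eliminate it.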

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a “general,” or “strong,” AI is possible. In particular, as demonstrated at the biggest annual AI conference, held in Macao, “explainable AI” and “transfer learning” are indispensable in fields such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a “general,” or “strong,” AI.

Explainable AI allows a human being (the user of the AI system) to understand the reasons why the system makes its decisions, and to approve them if they are correct or rework and fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it to be explainable only post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
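One common post-hoc technique for extracting such knowledge is to fit a small, human-readable “surrogate” model to a black box’s predictions. The sketch below is one illustrative recipe using scikit-learn; the dataset and model choices are assumptions for the example, not a prescription.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": accurate, but its decisions are hard to inspect.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree fitted to the black box's *predictions*,
# whose branches can be read as human-verifiable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed tree is an explanation of the black box, not of reality: its value depends on how faithfully it mimics the original model, which is why the fidelity score is reported alongside the rules.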

Transfer learning is the ability to transfer knowledge between different AI systems, as well as between human and machine, so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into another system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system’s individual experience. Practically speaking, it is a prerequisite for building AI applications that do not learn by trial and error or from a “training set,” but can instead be initialized with a base of expert-derived knowledge and rules, for cases where the cost of an error is too high or the training sample is too small.
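In today’s deep learning practice, the most common form of this is reusing a pretrained network on a new task. The sketch below shows the standard recipe with a torchvision ResNet; the two-class head and hyperparameters are assumptions for the example, and the pretrained weights are downloaded on first use.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a network pretrained on ImageNet; freeze its feature extractor
# and train only a new task-specific head on the small target dataset.
backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                           # keep transferred knowledge
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # hypothetical 2-class task

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...then train as usual: only the new head's weights are updated,
# so a small target dataset can suffice.
```

Freezing the backbone is the crudest option; in practice developers often unfreeze and fine-tune some layers at a reduced learning rate once the new head has stabilized.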

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, and source and target data are represented not by fixed values of variables but by probability distributions over all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now at the stage deep learning technology was at about ten years ago, so we can expect breakthroughs in the coming years.
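To make the idea concrete, here is a minimal sketch, not tied to any particular probabilistic programming language, that treats an unknown coin bias as a distribution: it runs a generative program many times and keeps the runs that reproduce the observed data (rejection sampling). The numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def infer_bias(observed_heads, n_flips, n_samples=100_000):
    """Treat the unknown coin bias as a distribution: simulate the
    generative program under many hypotheses and keep only the runs
    that reproduce the observed data."""
    prior = rng.uniform(0, 1, n_samples)       # prior over the bias
    simulated = rng.binomial(n_flips, prior)   # simulate each hypothesis
    return prior[simulated == observed_heads]  # posterior samples

posterior = infer_bias(observed_heads=8, n_flips=10)
print(posterior.mean(), np.percentile(posterior, [5, 95]))
# -> roughly 0.75, with a wide credible interval rather than a point estimate
```

Real probabilistic programming systems replace this brute-force rejection step with far more efficient inference algorithms, but the programming model, writing a generative program and asking for the distribution over its unknowns, is the same.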

Another promising “symbolic” area is Evgenii Vityaev’s semantic probabilistic modelling, which makes it possible to build explainable predictive models from information represented as semantic networks, using probabilistic inference grounded in Pyotr Anokhin’s theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration – an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner’s so-called plateau of productivity in a variety of applications. But working applications based on systems of the second kind – not to mention hybrid neuro-symbolic systems which the most prominent industry players have only started to explore – have yet to be created.
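As a schematic of what such an integration might look like, the toy sketch below pairs a stubbed “System 1” perception module with explicit “System 2” rules; every name and score in it is invented for illustration and the neural part is deliberately faked with fixed outputs.

```python
# System 1 (subsymbolic): a stand-in for a neural perception module that
# maps an image to symbol scores; stubbed here with fixed, invented values.
def perceive(image):
    return {"elephant": 0.1, "whale": 0.8, "beach": 0.9}

# System 2 (symbolic): explicit, human-auditable rules over those symbols.
RULES = [
    ("stranded whale", lambda s: s["whale"] > 0.5 and s["beach"] > 0.5),
    ("elephant in surf", lambda s: s["elephant"] > 0.5 and s["beach"] > 0.5),
]

def interpret(image):
    symbols = perceive(image)
    conclusions = [name for name, rule in RULES if rule(symbols)]
    return symbols, conclusions  # both layers are inspectable, hence explainable

print(interpret(image=None))
# -> ({'elephant': 0.1, 'whale': 0.8, 'beach': 0.9}, ['stranded whale'])
```

The appeal of the architecture is visible even in this caricature: the neural layer handles fuzzy perception, while the symbolic layer carries knowledge that a human can read, verify, correct and transfer to another system.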

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world’s leading experts.

From our partner RIAC


Can big data help protect the planet?

How do we get to a more sustainable and inclusive future if we don’t know where we are? This is where data comes in and, right now, we do not have the data we need. 

These were some of the questions asked at the Third Global Session of the UN Science-Policy-Business Forum held during the UN Environment Assembly. The virtual discussion delved into the role of big data and frontier tech in the transition to a sustainable future. 

Opening the session, United Nations Environment Programme (UNEP) Executive Director Inger Andersen said science needed to be digitized so it could be more democratic and accessible. She said digital transformation is central to UNEP’s new Medium-Term Strategy.

“Big data and new tech can support real-time monitoring of the environment, help consumers adopt more sustainable behaviour, and create sustainable value chains,” she said. “The [UN] Secretary-General has made it very clear that digital transformation has to be part and parcel of the UN … we have oceans of data but drops of information.”

UNEP studies show that for 68 per cent of the environment-related Sustainable Development Goal indicators, there is not enough data to assess progress.

At the event, participants stressed that knowledge obtained through the latest digital technologies such as Artificial Intelligence, Machine Learning and the Internet of Things could speed up progress on environmental goals. Better data could inform interventions and investment, while boosting results and impact measurement.

Bridging the data divide

The data deficit is also hindering the world’s ability to respond to climate change.

Petteri Taalas, Secretary-General of the World Meteorological Organization, said earth observation systems and early warning services were still poor in parts of the world, with around US$ 400 million needed to improve these. 

“That is one of the ways to adapt to climate change – to invest in early warning services and observation systems. We have to monitor what is happening to the climate but this monitoring is in poor shape,” he said.  

Making the right technology available to developing countries not only presents a financing challenge, but also underlines the profound need for accessible, open-source technology.

Munir Akram, President of the UN Economic and Social Council, said bridging the digital divide is critical. He noted that connectivity was only around 17 per cent in the poorest countries compared to above 80 per cent in richer countries.

“We need to build a database for all the open source technologies that are available in the world and could be applied to build greener and more sustainable structures of production and consumption. These technologies are available but there is no composite database to access them,” he said.

UNEP’s digital transformation

UNEP’s commitment to harnessing technology for environmental action begins ‘at home.’ At the fourth session of the UN Environment Assembly in 2019, Member States called for a Big Data Strategy for UNEP by 2025.

The organisation is currently undertaking a digital transformation process, while also focusing on four key challenges:

  1. Help producers measure and disclose the environmental and climate performance of their products and supply chains;
  2. Help investors assess climate and environmental risks and align global capital flows to climate goals;
  3. Enable regulators to monitor real-time progress and risks;
  4. Integrate this data into the digital economy to shape incentives, feedback loops and behaviours.

Indispensable tools

Other cutting-edge digital transformation initiatives are also in progress. UNEP’s World Environment Situation Room, a platform put together by a consortium of Big Data partners in 2019, includes geo-referenced, remote-sensing and earth observation information and collates climate data in near real-time.

At the event, Juliet Kabera, Director General of the Rwanda Environment Management Authority, described how her country had invested heavily in technology, including connectivity, drones and online platforms, such as the citizen e-service portal, Irembo.

“There is no doubt that technology has a critical role in addressing the urgent challenges we all face today, regardless of where we are in the world,” Kabera said. “The COVID-19 pandemic once again reminded us that science and technology remain indispensable tools for humanity at large.”

UN Environment


Women and girls belong in science

As part of the World Bank's Education Quality Improvement Programme, students study biology at Sofia Amma Jan Girl's School in the Kandahar province of Afghanistan. World Bank/Ishaq Anis

Closed labs and increased care responsibilities are just two of the challenges women in scientific fields are facing during the COVID-19 pandemic, the UN chief said in his message for the International Day of Women and Girls in Science, on Thursday.

“Advancing gender equality in science and technology is essential for building a better future”, Secretary-General António Guterres stated. “We have seen this yet again in the fight against COVID-19”.

Women, who represent 70 per cent of all healthcare workers, have been among those most affected by the pandemic and those leading the response to it. Yet, as women bear the brunt of school closures and working from home, gender inequalities have increased dramatically over the past year.  

Woman’s place is in the lab 

Citing the UN Educational, Scientific and Cultural Organization (UNESCO), he said that women account for only one third of the world’s researchers and hold fewer senior positions than men at top universities, which has led to “a lower publication rate, less visibility, less recognition and, critically, less funding”.

Meanwhile, artificial intelligence (AI) and machine learning replicate existing biases.  

“Women and girls belong in science”, stressed the Secretary-General. 

Yet stereotypes have steered them away from science-related fields.  

Diversity fosters innovation 

The UN chief underscored the need to recognize that “greater diversity fosters greater innovation”.  

“Without more women in STEM [science, technology, engineering and mathematics], the world will continue to be designed by and for men, and the potential of girls and women will remain untapped”, he spelled out. 

Their presence is also critical to achieving the Sustainable Development Goals (SDGs), closing gender pay gaps and boosting women’s earnings by $299 billion over the next ten years, according to Mr. Guterres.

“STEM skills are also crucial in closing the global Internet user gap”, he said, urging everyone to “end gender discrimination, and ensure that all women and girls fulfill their potential and are an integral part in building a better world for all”. 

‘A place in science’ 

Meanwhile, despite a shortage of skills in most of the technological fields driving the Fourth Industrial Revolution, women still account for only 28 per cent of engineering graduates and 40 per cent of graduates in computer science and informatics, according to UNESCO.  

It argues the need for women to be a part of the digital economy to “prevent Industry 4.0 from perpetuating traditional gender biases”.  

UNESCO chief Audrey Azoulay observed that “even today, in the 21st century, women and girls are being sidelined in science-related fields due to their gender”.  

As the impact of AI on societal priorities continues to grow, the underrepresentation of women’s contribution to research and development means that their needs and perspectives are likely to be overlooked in the design of products that impact our daily lives, such as smartphone applications.  

“Women need to know that they have a place in science, technology, engineering and mathematics, and that they have a right to share in scientific progress”, said Ms. Azoulay.

‘Pathway’ to equality

Commemorating the day at a dedicated event, General Assembly President Volkan Bozkir said that he is working with a newly established Gender Advisory Board to mainstream gender throughout all of the UN’s work, including the field of science.

“We cannot allow the COVID-19 pandemic to derail our plans for equality”, he said, adding that increasing access to science, technology, engineering and mathematics education for women and girls has emerged as “a pathway to gender equality and as a key objective of the 2030 Agenda for Sustainable Development”.

Mr. Bozkir highlighted the need to accelerate efforts and invest in training for girls to “learn and excel in science”.

“From the laboratory to the boardroom, Twitter to television, we must amplify the voices of female scientists”, he stressed. 

STEM minorities  

Meanwhile, UNESCO and the L’Oréal Foundation honoured five women researchers in the fields of astrophysics, mathematics, chemistry and informatics as part of the 23rd International Prize for Women in Science.  

In its newly published global study on gender equality in scientific research, “To be smart, the digital revolution will need to be inclusive”, UNESCO shows that although the number of women in scientific research has risen to one in three, they remain a minority in mathematics, computer science, engineering and artificial intelligence.

“It is not enough to attract women to a scientific or technological discipline”, said Shamila Nair-Bedouelle, Assistant UNESCO Director-General for Natural Sciences.  

“We must also know how to retain them, ensuring that their careers are not strewn with obstacles and that their achievements are recognized and supported by the international scientific community”. 


The importance of information technology and digital marketing in today’s world

In the current times, to cope with the demands of a changing world, we need to adopt digital and modern platforms. With the world rapidly moving towards digitalization and investing in information technology, our state is also turning to unconventional means of carrying out different tasks in a more appropriate and time-saving manner.

Take online shopping as a first example. Many international and local brands have online stores, and customers can order anything from any part of the world without traveling from one place to another. This saves time, makes efficient use of technology and lets people get whatever they want at their doorstep without the hassle of traffic. It has also boosted business, since stores now serve online customers as well as walk-ins, and has attracted a large audience through the ease and convenience of shopping. All of this is part of the digitalization process. We should not forget the significance of the internet in this regard, as it was the first step towards digitalization: all the communication and digital platforms we use are accessible to us because of the internet.

Another aspect of information technology is bridging the communication gap between the state and its people. Today, there are many applications, such as WhatsApp, Skype and Facebook Messenger, through which one can communicate with friends and relatives without being physically present.

Organizations and educational institutions also have websites through which we can get information about them. When we are registered with an organization, our data is stored on its official system and is accessible to authorized persons. The same is true for students: their educational records are held by the university, and once registered with their institutions, they can receive updates on events and job openings through emails and messages.

The COVID-19 factor cannot be ignored in this regard. Due to the rise in coronavirus cases, jobs have shifted from physical to online, and work-from-home is the new normal. All this is happening because of the digitalization process. It would not be wrong to say that progress in information technology and digital platforms has made life easier for people.

Another prominent component is online banking. People can easily carry out transactions from their phones or PCs by logging in to their bank accounts at home, and can do so at any time; bills can be paid the same way. This is a relief for people tired of standing in long queues outside banks to pay their bills, or of the complexities of doing transactions in person. It has also cut the time wasted in traffic jams and long queues on trips to the bank, time that can now be used for other productive tasks.

Online registration of cars in Islamabad, initiated during the COVID-19 pandemic, is another wonder of the digitalization process. The Islamabad administration has made it easier for people to register their cars from their homes, without the fear of being infected. Food delivery systems should also be appreciated for their smart work: through apps like Food Panda and Cheetah, people can order their desired food with a call, and many food chains offer home deliveries, which has made people’s lives much more convenient.

A much-appreciated step by the government is producing Pakistani-made ventilators and stents in view of the rapidly increasing coronavirus cases. This was possible due to appropriate scientific and technological knowledge. The government has also said that we will soon see Pakistani-made rechargeable electric vehicles on the roads; they will be economical and fuel-saving, easy to handle, and have a user-friendly interface.

Developments at NADRA are another milestone: everything is now computerized, no paperwork is required, and all records are saved on computers. Recently, the interior minister said that NADRA will waive the cost of making identity cards and that cards will be provided within 15 days, whereas previously the process took longer. Removing checkpoints in the capital and substituting them with more efficient measures, such as cameras and drones, is another achievement. A further development in the line of digitalization that cannot be ignored is the inauguration of an online system by the Islamabad Traffic Police, through which people can get their licences and complete other paperwork through an online portal.

It can be concluded that we are gradually moving from traditional ways of working towards a digitalized era. There is still room for improvement, but the good news is that people are coming to understand the importance of the digitalization process and are gradually accepting it; further awareness through innovative campaigns would do no harm. An interesting point about advanced digitalization and technological growth is that it has made people rely completely on digital processes and solutions, so that they now have to adopt these strategies whether they are comfortable with them or not. Good things take time, and using digital resources for fruitful purposes is not a bad idea at all, as long as resources are not wasted.
