Science & Technology

Artificial intelligence: Between myth and reality

Jean-Gabriel Ganascia

Are machines likely to become smarter than humans? No, says Jean-Gabriel Ganascia: this is a myth inspired by science fiction. The computer scientist walks us through the major milestones in artificial intelligence (AI), reviews the most recent technical advances, and discusses the ethical questions that require increasingly urgent answers.

A scientific discipline, AI officially began in 1956, during a summer workshop organized by four American researchers – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – at Dartmouth College in New Hampshire, United States. Since then, the term “artificial intelligence”, probably first coined to create a striking impact, has become so popular that today everyone has heard of it. This application of computer science has continued to expand over the years, and the technologies it has spawned have contributed greatly to changing the world over the past sixty years.

However, the success of the term AI is sometimes based on a misunderstanding – when it is used to refer to an artificial entity endowed with intelligence which, as a result, would compete with human beings. This idea, which harks back to ancient myths and legends, like that of the golem [from Jewish folklore, an image endowed with life], has recently been revived by contemporary figures including the British physicist Stephen Hawking (1942-2018), the American entrepreneur Elon Musk, the American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). We will not discuss this second meaning here because, at least for now, it can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed by experiments and empirical observations.

For McCarthy, Minsky, and the other researchers of the Dartmouth Summer Research Project on Artificial Intelligence, AI was initially intended to simulate each of the different faculties of intelligence – human, animal, plant, social or phylogenetic – using machines. More precisely, this scientific discipline was based on the conjecture that all cognitive functions – especially learning, reasoning, computation, perception, memorization, and even scientific discovery or artistic creativity – can be described with such precision that it would be possible to programme a computer to reproduce them. In the more than sixty years that AI has existed, there has been nothing to disprove or irrefutably prove this conjecture, which remains both open and full of potential.

Uneven progress

In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.

The time of the prophets

First of all, in the euphoria of AI’s origins and early successes, researchers had given free rein to their imagination, indulging in certain reckless pronouncements for which they were heavily criticized later. For instance, in 1958, the American political scientist and economist Herbert A. Simon – who received the Nobel Prize in Economic Sciences in 1978 – declared that, within ten years, machines would become world chess champions if they were not barred from international competitions.

The dark years

By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.

Semantic AI

The work went on nevertheless, but the research was given new direction. It focused on the psychology of memory and the mechanisms of understanding – with attempts to simulate these on computers – and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s, and also led to the development of expert systems, so called because they use the knowledge of skilled specialists to reproduce their thought processes. Expert systems raised enormous hopes in the early 1980s with a whole range of applications, including medical diagnosis.

Neo-connectionism and machine learning

Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and to automatically reprogramme themselves, using their own experiences.

This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where techniques from AI, computer science, artificial life and other disciplines were combined to produce hybrid systems.

From AI to human-machine interfaces

Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to the calculation of emotions (affective computing), which evaluates the reactions of a subject feeling emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).

Renaissance of AI

Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques, based on the use of formal neural networks. A range of very successful applications in several areas – including speech and image recognition, natural language comprehension and autonomous cars – are leading to an AI renaissance.
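
To make the idea of a “formal neural network” concrete, here is a minimal sketch of one: a tiny two-layer network trained by gradient descent on a toy dataset, written in Python with NumPy. It is purely illustrative – the data, layer sizes and learning rate are arbitrary assumptions, not any of the systems mentioned in this article.

```python
# Minimal sketch of a "formal neural network": a tiny two-layer network
# trained by gradient descent on toy data. Purely illustrative; not any
# production deep-learning system referred to in the article.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 2 inputs -> 1 output (XOR), standing in for real "big data".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for one hidden layer of 8 units.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each parameter.
    err = p - y
    dW2 = h.T @ (err * p * (1 - p))
    db2 = np.sum(err * p * (1 - p), axis=0)
    dh = (err * p * (1 - p)) @ W2.T
    dW1 = X.T @ (dh * h * (1 - h))
    db1 = np.sum(dh * h * (1 - h), axis=0)

    # Gradient-descent update: the network "learns" from its own errors.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0] after training
```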

Applications

Many achievements using AI techniques surpass human capabilities – in 1997, a computer programme defeated the reigning world chess champion, and more recently, in 2016, other computer programmes beat the world’s best Go [an ancient Chinese board game] players and some top poker players. Computers are proving, or helping to prove, mathematical theorems; knowledge is being automatically constructed from huge masses of data, in terabytes (10¹² bytes), or even petabytes (10¹⁵ bytes), using machine learning techniques.

As a result, machines can recognize speech and transcribe it – just like typists did in the past. Computers can accurately identify faces or fingerprints from among tens of millions, or understand texts written in natural languages. Using machine learning techniques, cars drive themselves; machines are better than dermatologists at diagnosing melanomas using photographs of skin moles taken with mobile phone cameras; robots are fighting wars instead of humans; and factory production lines are becoming increasingly automated.

Scientists are also using AI techniques to determine the function of certain biological macromolecules, especially proteins and genomes, from the sequences of their constituents ‒ amino acids for proteins, bases for genomes. More generally, all the sciences are undergoing a major epistemological rupture with in silico experiments – so named because they are carried out by computers from massive quantities of data, using powerful processors whose cores are made of silicon. In this way, they differ from in vivo experiments, performed on living matter, and above all, from in vitro experiments, carried out in glass test-tubes.

Today, AI applications affect almost all fields of activity – particularly in industry, banking, insurance, health and defence. Several routine tasks are now automated, transforming many trades and eventually eliminating some.

What are the ethical risks?

With AI, most dimensions of intelligence ‒ except perhaps humour ‒ are subject to rational analysis and reconstruction, using computers. Moreover, machines are exceeding our cognitive faculties in most fields, raising fears of ethical risks. These risks fall into three categories – the scarcity of work, because it can be carried out by machines instead of humans; the consequences for the autonomy of the individual, particularly in terms of freedom and security; and the overtaking of humanity, which would be replaced by more “intelligent” machines.

However, if we examine the reality, we see that work (done by humans) is not disappearing – quite the contrary – but it is changing and calling for new skills. Similarly, an individual’s autonomy and freedom are not inevitably undermined by the development of AI – so long as we remain vigilant in the face of technological intrusions into our private lives.

Finally, contrary to what some people claim, machines pose no existential threat to humanity. Their autonomy is purely technological, in that it corresponds only to material chains of causality that go from the taking of information to decision-making. On the other hand, machines have no moral autonomy, because even if they do confuse and mislead us in the process of making decisions, they do not have a will of their own and remain subjugated to the objectives that we have assigned to them.

Source: UNESCO

French computer scientist Jean-Gabriel Ganascia is a professor at Sorbonne University, Paris. He is also a researcher at LIP6, the computer science laboratory at the Sorbonne; a fellow of the European Association for Artificial Intelligence; a member of the Institut Universitaire de France; and chairman of the ethics committee of the National Centre for Scientific Research (CNRS), Paris. His current research interests include machine learning, symbolic data fusion, computational ethics, computer ethics and digital humanities.

Science & Technology

Central Banks Becoming Leaders in Blockchain Experimentation

MD Staff

Although central banks are among the most cautious institutions in the world, they are, perhaps surprisingly, among the first to implement and experiment with blockchain technology. Central banks have been quietly researching its possibilities since 2014. Over the past two years, the beginning of a new wave has emerged as more central banks launch large-scale pilots and research efforts, including for rapid cross-border interbank securities settlement.

The Blockchain and Distributed Ledger Technology team at the World Economic Forum interviewed dozens of central bank researchers and analysed more than 60 reports on past and current research efforts. The findings were released today in a white paper, Central Banks and Distributed Ledger Technology: How are Central Banks Exploring Blockchain Today?

“As the blockchain hype cools, we are starting to see the real use cases for blockchain technology take the spotlight,” said Ashley Lannquist, Blockchain Project Lead at the World Economic Forum. “Central bank activities with blockchain and distributed ledger technology are not always well known or communicated. As a result, there is much speculation and misunderstanding about objectives and the state of research. Dozens of central banks around the world are actively investigating whether blockchain can help solve long-standing challenges such as banking and payments system efficiency, payments security and resilience, as well as financial inclusion.”

It is not widely known, for instance, that the Bank of France has fully replaced its centralized process for the provisioning and sharing of SEPA Credit Identifiers (SCIs) with a decentralized, blockchain-based solution. SEPA, or Single Euro Payments Area, is a payment scheme created by the European Union and managed on a country-by-country basis for facilitating efficient and secure cross-border retail debit and card payments across European countries. The solution is a private deployment of the Ethereum blockchain network and has been in use since December 2017. It has enabled greater time efficiency, process auditability and disaster recovery.
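
To illustrate the general idea behind such a decentralized, tamper-evident registry, here is a minimal Python sketch of a hash-chained ledger holding identifier records. It is a conceptual toy under stated assumptions – the field names and values are invented – and is not the Bank of France’s actual implementation or a real Ethereum deployment, where records would be replicated across participating banks’ nodes by consensus rather than held in a single in-memory object.

```python
# Conceptual sketch of a tamper-evident shared ledger for identifier
# provisioning. A toy illustration only -- not the Bank of France's
# system or a real Ethereum deployment; field names are invented.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        genesis = {"index": 0, "timestamp": 0, "record": None, "prev_hash": ""}
        self.chain = [genesis]

    def add_record(self, record: dict) -> dict:
        """Append a new record, linking it to the previous block by hash."""
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "record": record,
            "prev_hash": block_hash(prev),
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every link; any tampering with past blocks breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.add_record({"bank": "Bank A", "creditor_id": "FR00ZZ123456"})  # invented values
ledger.add_record({"bank": "Bank B", "creditor_id": "FR11ZZ654321"})
print(ledger.verify())  # True; editing any earlier block would make this False
```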

The fact that dozens of central banks are exploring, and in some cases implementing, blockchain technology is significant, according to the white paper. It is an early indicator of the potential use of this emerging technology across financial and monetary systems. “Central banks play one of the most critical roles in the global economy, and their decisions about implementing distributed ledger and digital currency technologies in the future can have far-reaching implications for economies,” Lannquist said.

Top 10 central bank use cases

Based on the interviews and analysis, the white paper highlights the ten leading use cases through which central banks are experimenting with blockchain.

Retail central bank digital currency (CBDC) – A substitute or complement for cash and an alternative to traditional bank deposits. A central-bank-issued digital currency can be operated and settled in a peer-to-peer and decentralized manner, widely available for consumer use. Central banks from several countries are experimenting, including those from the Eastern Caribbean, Sweden, Uruguay, the Bahamas and Cambodia.

Wholesale central bank digital currency (CBDC) – This kind of digital currency would only be available for commercial banks and clearing houses to use in the wholesale interbank market. Central-bank-issued digital currency would be operated and settled in a peer-to-peer and decentralized manner. Central banks from several countries are experimenting, including those from South Africa, Canada, Japan, Thailand, Saudi Arabia, Singapore and Cambodia.

Interbank securities settlement – A focused application of blockchain technology, sometimes involving CBDC, enabling the rapid interbank clearing and settlement of securities for cash. This can achieve “delivery versus payment” interbank systems, where two parties trading an asset, such as a security for cash, can conduct the payment for and delivery of the asset simultaneously (see the sketch after this list). Central banks exploring this include the Bank of Japan, Monetary Authority of Singapore, Bank of England and Bank of Canada.

Payment system resiliency and contingency – The use of distributed ledger technology in a primary or back-up domestic interbank payment and settlement system to provide safety and continuity in case of threats, including technical or network failure, natural disaster, cybercrime and others. Often, this use case is coupled with others as part of the set of benefits that a distributed ledger technology implementation could potentially offer. Central banks exploring this include the Central Bank of Brazil and Eastern Caribbean Central Bank.

Bond issuance and lifecycle management – The use of distributed ledger technology in the bond auction, issuance or other life-cycle processes to reduce costs and increase efficiency. This may be applied to bonds issued and managed by sovereign states, international organizations or government agencies. Central banks or government regulators could be “observer nodes” to monitor activity where relevant. Early implementation is being conducted by the World Bank with their 2018 “bond-i” project.

Know-your-customer (KYC) and anti-money-laundering (AML) – Digital KYC/AML processes that leverage distributed ledger technology to track and share relevant customer payment and identity information to streamline processes. This may connect to a digital national identity platform or plug into pre-existing e-KYC or AML systems. Central banks exploring this include the Hong Kong Monetary Authority.

Information exchange and data sharing – The use of distributed or decentralized databases to create alternative systems for information and data sharing between or within related government or private-sector institutions. Central banks exploring this include the Central Bank of Brazil.

Trade finance – The employment of a decentralized database and functionality to enable faster, more efficient and more inclusive trade financing. Improves on today’s trade finance processes, which are often paper-based, labour-intensive and time-intensive. Customer information and transaction histories are shared between participants in the decentralized database while maintaining privacy and confidentiality where needed. Central banks exploring this include the Hong Kong Monetary Authority.

Cash money supply chain – The use of distributed ledger technology for issuing, tracking and managing the delivery and movement of cash from production facilities to the central bank and commercial bank branches; could include the ordering, depositing or movement of funds, and could simplify regulatory reporting. Central banks exploring this include the Eastern Caribbean Central Bank.

Customer SEPA Creditor Identifier (SCI) provisioning – Blockchain-based decentralized sharing repository for SEPA credit identifiers managed by the central bank and commercial banks in the SEPA debiting scheme. This is a faster, streamlined and decentralized system for identity provisioning and sharing. It can replace pre-existing manual and centralized processes that are time- and resource-intensive, as seen in the Bank of France’s Project MADRE implementation.
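
As a hedged illustration of the “delivery versus payment” principle mentioned in the interbank securities settlement use case above – both legs of a trade settle together or not at all – the short Python sketch below uses invented accounts and balances. Real systems would enforce the same atomicity through ledger transactions or smart contracts rather than in-memory dictionaries.

```python
# Hedged sketch of "delivery versus payment" (DvP): the cash leg and the
# securities leg settle atomically -- both succeed or neither does.
# Accounts and balances here are invented toy data, not a real system.

cash = {"buyer": 1_000_000, "seller": 0}       # cash balances
securities = {"buyer": 0, "seller": 500}       # security holdings

def settle_dvp(buyer: str, seller: str, price: int, quantity: int) -> bool:
    """Exchange cash for securities only if both legs can be honoured."""
    if cash[buyer] < price or securities[seller] < quantity:
        return False                           # neither leg moves
    # Both checks passed: perform the two legs as one indivisible step.
    cash[buyer] -= price
    cash[seller] += price
    securities[seller] -= quantity
    securities[buyer] += quantity
    return True

print(settle_dvp("buyer", "seller", price=950_000, quantity=500))  # True
print(cash, securities)
print(settle_dvp("buyer", "seller", price=950_000, quantity=500))  # False: funds exhausted
```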

Emerging economies may benefit most: Cambodia, Thailand, South Africa and others experimenting

The National Bank of Cambodia will be one of the first central banks to deploy blockchain technology in its national payments system for use by consumers and commercial banks. It is implementing the technology in the second half of 2019 as an experiment to support financial inclusion and greater banking-system efficiency.

The Bank of Thailand and the South African Reserve Bank, among others, are experimenting with CBDC in large-scale pilots for interbank payment and settlement efficiency. The Eastern Caribbean Central Bank is exploring the suitability of distributed ledger technology (DLT) to advance multiple goals, from financial inclusion and payments efficiency to payment system resilience against storms and hurricanes.

“Over the next four years, we should expect to see many central banks decide whether they will use blockchain and distributed ledger technologies to improve their processes and economic welfare,” Lannquist said. “Given the systemic importance of central bank processes, and the relative freshness of blockchain technology, banks must carefully consider all known and unknown risks to implementation.”

Science & Technology

How Nuclear Techniques Help Feed China

With 19% of the world’s population but only 7% of its arable land, China is in a bind: how to feed its growing and increasingly affluent population while protecting its natural resources. The country’s agricultural scientists have made growing use of nuclear and isotopic techniques in crop production over the last decades. In cooperation with the IAEA and the Food and Agriculture Organization of the United Nations (FAO), they are now helping experts from Asia and beyond in the development of new crop varieties, using irradiation.

While in many countries, nuclear research in agriculture is carried out by nuclear agencies that work independently from the country’s agriculture research establishment, in China the use of nuclear techniques in agriculture is integrated into the work of the Chinese Academy of Agricultural Sciences (CAAS) and provincial academies of agricultural sciences. This ensures that the findings are put to use immediately.

And indeed, the second most widely used wheat mutant variety in China, Luyuan 502, was developed by CAAS’s Institute of Crop Sciences and the Shandong Academy of Agricultural Sciences, using space-induced mutation breeding. It has a yield that is 11% higher than that of the traditional variety and is also more tolerant of drought and major diseases, said Luxiang Liu, Deputy Director General of the Institute. It has been planted on over 3.6 million hectares – an area almost as large as Switzerland. It is one of 11 wheat varieties developed for improved salt and drought tolerance, grain quality and yield, Mr Liu said.

Through close cooperation with the IAEA and FAO, China has released over 1,000 mutant crop varieties in the past 60 years, and varieties developed in China account for a quarter of the mutant varieties currently listed in the IAEA/FAO database of mutants produced worldwide, said Sobhana Sivasankar, Head of the Plant Breeding and Genetics Section at the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. The new mutation induction and high-throughput mutant selection approaches established at the Institute serve as a model for researchers from around the world, she added.

The Institute uses heavy ion beam accelerators, cosmic rays and gamma rays along with chemicals to induce mutations in a wide variety of crops, including wheat, rice, maize, soybean and vegetables. “Nuclear techniques are at the heart of our work, fully integrated into the development of plant varieties for the improvement of food security,” Liu said.

The Institute has also become a key contributor to the IAEA technical cooperation programme over the years: more than 150 plant breeders from over 30 countries have participated in training courses and benefited from fellowships at CAAS. 

Indonesia’s nuclear agency, BATAN, and CAAS are looking for ways to collaborate on plant mutation breeding and Indonesian researchers are looking for ways to learn from China’s experience, said Totti Tjiptosumirat, Head of BATAN’s Center for Isotopes and Radiation Application. “Active dissemination and promotion of China’s activities in plant mutation breeding would benefit agricultural research across Asia,” he said.

From food safety to authenticity

Several of CAAS’ other institutes use nuclear-related and isotopic techniques in their research and development work and participate in several IAEA technical cooperation and coordinated research projects. The Institute of Quality Standards and Testing Technology for Agro-Products has developed a protocol to detect fake honey, using isotopic analysis. A large amount of what is sold in China as honey is estimated to be produced synthetically in labs rather than by bees in hives, so this has been an important tool in cracking down on fraudsters, said Professor Chen Gang, who leads the research work using isotopic techniques at the Institute. A programme is also in place to trace the geographical origin of beef using stable isotopes, he added.

The Institute uses isotopic techniques to test the safety and to verify the authenticity of milk and dairy products – work that was the outcome of IAEA coordinated research and technical cooperation projects that lasted from 2013 to 2018. “After a few years of support, we are now fully self-sufficient,” Mr Chen said.

Improving nutrition efficiency

Various CAAS institutes use stable isotopes to study the absorption, transfer and metabolism of nutrients in animals. The results are used to optimize feed composition and feeding schedules. Isotope tracing offers higher sensitivity than conventional analytical methods, and this is particularly advantageous when studying the absorption of micronutrients, vitamins, hormones and drugs, said Dengpan Bu, Professor at the Institute of Animal Science.

While China has perfected the use of many nuclear techniques, in several areas it is looking to the IAEA and the FAO for support: the country’s dairy industry is dogged by the low protein absorption rate of dairy cows. Less than half of the protein in animal feed is used by the ruminants; the rest ends up in their manure and urine. “This is wasteful for the farmer, and the high nitrogen content in the manure hurts the environment,” Mr Bu said. Using isotopes to trace nitrogen as it travels from feed through the animal’s body would help improve nitrogen efficiency by guiding the necessary adjustments to the composition of the feed. This will be particularly important as dairy consumption, currently at a third of the global average per person, continues to rise. “We are looking for international expertise, through the IAEA and the FAO, to help us tackle this problem.”

Source: IAEA

Science & Technology

When neuroscience meets AI: What does the future of learning look like?

MD Staff

Photo: MGIEP

Meet Dr. Nandini Chatterjee Singh, a cognitive neuroscientist at UNESCO MGIEP (Mahatma Gandhi Institute of Education for Peace and Sustainable Development) where she has been leading the development of a new framework for socio-emotional learning. MGIEP focuses on mainstreaming socio-emotional learning in education systems and innovating digital pedagogies.

Dr. Singh answered five questions on the convergence of neuroscience and Artificial Intelligence in learning, ahead of the International Congress on Cognitive Science in Schools where she will be speaking this week.

What are the links between neuroscience and Artificial Intelligence when it comes to learning?

The focus of both neuroscience and AI is to understand how the brain works and thus predict behaviour. The better we understand the brain, the better designs we can create for AI algorithms. When it comes to learning, the neuroscience–AI partnership can be synergistic. A good understanding of a particular learning process by neuroscience can be used to inform the design of that process for AI. Similarly, if AI can find patterns in large data sets and derive a learning model, neuroscience can conduct experiments to confirm it.

Secondly, when neuroscience provides learning behaviours to AI, these behaviours can be translated into digital interactions, which in turn are used by AI to look at learning patterns across large numbers of children worldwide. The power of AI is that it can scale this to large numbers. AI can track and search through massive amounts of data to see how that learning happens, and when required, identify when learning is different or goes off track.
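
As a toy illustration of what “identifying when learning goes off track” could mean computationally, the sketch below flags learners whose progress deviates strongly from the group average, using a simple z-score. The scores, learner names and threshold are invented assumptions for the example, not any system MGIEP actually uses.

```python
# Toy sketch: flag learners whose progress deviates strongly from the
# cohort, using a simple z-score. Scores are invented example data,
# not any assessment system referred to in the interview.
import statistics

weekly_gains = {            # average weekly improvement per learner
    "learner_01": 4.8,
    "learner_02": 5.1,
    "learner_03": 4.5,
    "learner_04": 1.2,      # noticeably slower progress
    "learner_05": 5.0,
}

mean = statistics.mean(weekly_gains.values())
stdev = statistics.stdev(weekly_gains.values())

for learner, gain in weekly_gains.items():
    z = (gain - mean) / stdev
    if z < -1.5:            # threshold is an arbitrary illustrative choice
        print(f"{learner}: progress {gain} is unusually low (z = {z:.2f})")
```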

A third feature is that of individualized learning. We increasingly know that learning has a strong individual component, yet our classrooms are structured to provide common learning to all children. Sometimes these individual differences become crucial to bring out the best in children, which is when we might tailor learning. Neuroscience research on individual differences has shown that detailed information about an individual can reveal a wealth of insight into their learning patterns. However, this is extremely cost- and labour-intensive. That detailed learning from neuroscience can be provided to AI in order to scale: AI can collect extensive, detailed data at the personal level to design a path to learning for that child. Thus, what neuroscience can study in small groups, AI can implement in large populations. If we are to ensure a world where every child achieves their full potential, such personalized learning offers great promise.

How do we create a structure around AI to ensure learning standards globally?

One thing AI capitalizes on and constantly relies on is large volumes of data. AI algorithms perform better if they are fed continuous, distributed data. We need to keep in mind that humans are the ones designing these algorithms. This means that the algorithms will only do as well as the data they have been trained on. Ensuring that we have access to large amounts of data that come from various situations of learning is crucial. What sometimes becomes an issue for AI algorithms is that most of the training data has been selected from one particular kind of population. This means that the diversity in the forms of learning is missing from the system.

To return to reading and literacy as an example, in neuroscience, a large part of our research and understanding of how the brain learns to read has come from individuals learning to read English and alphabetic languages. However, globally, billions of people speak or read non-alphabetic languages and scripts that are visually complex, which are not really reflected in this research. Our understanding is built on one particular system that does not have enough diversity.

Therefore, it is important that AI algorithms be tested in varied environments around the world where there are differences in culture. This will create more robust learning models that are able to meet diverse learning requirements and cater to every kind of learner from across the world. If we are able to do that, then we can predict what the learning trajectory will look like for children anywhere.
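
A minimal sketch of this evaluation point, under stated assumptions: a model fitted to data dominated by one population can look accurate overall while failing on an under-represented group, and the failure only becomes visible when accuracy is reported per group. The data below is synthetic and the “model” is deliberately simple; it is an illustration, not real literacy research.

```python
# Minimal sketch: a model trained mostly on one population can look fine
# overall while failing on an under-represented group. All data here is
# synthetic; it stands in for "diverse learning environments".
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, threshold):
    """Synthetic learners: feature = hours of practice, label = mastery.
    The relationship differs by group (different mastery threshold)."""
    hours = rng.uniform(0, 10, n)
    mastered = (hours > threshold).astype(int)
    return hours, mastered

# Training data comes almost entirely from group A.
hours_a, y_a = make_group(950, threshold=4.0)
hours_b, y_b = make_group(50, threshold=7.0)
train_hours = np.concatenate([hours_a, hours_b])
train_y = np.concatenate([y_a, y_b])

# "Model": pick the single cut-off that maximises training accuracy.
candidates = np.linspace(0, 10, 101)
accs = [np.mean((train_hours > c).astype(int) == train_y) for c in candidates]
cutoff = candidates[int(np.argmax(accs))]

# Evaluate separately on fresh data from each group.
for name, threshold in [("group A", 4.0), ("group B", 7.0)]:
    h, y = make_group(1000, threshold)
    acc = np.mean((h > cutoff).astype(int) == y)
    print(f"{name}: accuracy {acc:.2f}")   # high for A, much lower for B
```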

Human beings have similarities in the way they learn, but pedagogies vary across different situations, and those differences must be reflected in the data provided. The results would be much more pertinent if we were able to capture and reflect those differences in the data. This will help us improve the learning of AI and, ultimately, understand how the brain works. We would then be better placed to distinguish the universal principles of learning used across the world from effects that are cultural in nature – something we want to hold on to and capitalize on in trying to help children. People designing AI algorithms have so far not given this much attention, but they are now beginning to consider it in many places across the world.

How do you see AI’s role in inclusive education today, especially in the context of migration?

Societies have become multicultural in nature. If you go to a typical classroom in many countries, you will find children from diverse cultures sitting in the same learning space. Learning has to be able to meet a variety of needs and must become more inclusive and reflect cultural diversity. Innovative pedagogies such as games, interactive sessions and real-life situations are key because they test learning capabilities focused on the skills that children should acquire. AI relies on digital interactions to understand learning, and that comes from assessing skills and behaviours. We now recognize that what we need to empower our children with are skills and behaviours – not necessarily tons of information.

Digital pedagogies like interactive games are among the ones emerging rapidly to assess children’s skills. They are powerful because they can be used in multicultural environments and can assess different competencies. They are not necessarily tied to a specific language or curriculum but are rather performance-based. How do you assess children for collaboration in a classroom? In the context of migration and 21st-century skills, these are necessary abilities, and digital games provide a medium to assess them in education. When such interactive games are played by children across the world, they provide digital interactions to AI. AI might discover new patterns and ways to collaborate, since children have ways of doing things that are often out of the box. A skills-based approach can be applied anywhere, whether it is in a classroom in India, France or Kenya. In contrast, curriculum-based methods are context-specific and show extensive cultural variation.

What are the risks and the challenges?

Data protection and security is of course still a huge issue and is the biggest challenge in this sphere. We have to ensure that children are never at risk of exposure and that the data is not misused in any way. This is something that needs more global attention and backing.

Another crucial point is that learning assessments should not be restricted to just one domain. There are multiple ways, times and spaces in which to learn. Learning is continuous in nature and should be adaptable to the child’s needs at a particular point. Assessment should also be continuous in order to get a full picture of the improvement that the child is demonstrating. If there is no improvement, we can provide interventions to help and find out why learning is not happening. From what we know from neuroscience, the earlier you can provide an intervention, the better the chance that the child will be able to change and adapt. The brain learns and changes much more easily and quickly in childhood than in adulthood.

Yet, we want to be cautious about the conclusions we draw about how to intervene with children. Poor academic performance might have a social or emotional reason.

Thus, learning today needs to be multi-dimensional. Along with academic competencies, social and emotional skills also need to be assessed. If this information is used wisely, it can provide a lot of insight into the child’s academic and emotional well-being. Based on the combination of the two, the right intervention can be provided. Unless multiple assessments all converge on the same result, the child’s learning abilities should not be labelled. AI gives us a great opportunity to conduct multi-skills assessments, rather than just one. And that is something that we should leverage, rather than abandon. The baseline standards for the algorithms must be properly taken into consideration for any type of assessment. They must come from a large quantity of distributed data in order to provide more accurate results. That is something we should not compromise on under any condition.

How is the teaching community responding to this new way of learning and assessing?

There are teachers who worry about the future of learning, but that is also because they do not necessarily have the full picture. People working on and promoting the use of AI in learning must play a crucial role in telling teachers that they will not become obsolete. Teachers will be more empowered and able to meet the needs of every kind of learner in their classrooms. The ideal world would be to have one teacher per child, but that is of course impossible. AI is a tool to guide teachers when it comes to finding the right intervention for a student who might be struggling to learn. That intervention comes from data that has been checked for bias and diversity and does not use a ‘one size fits all’ approach, so teachers can be more certain that it will fit the needs of the child. AI gives the teacher the opportunity to tailor learning for the child. In addition, we do not really know all the different kinds of learning. Sometimes we have to be prepared to learn from children themselves. Children can give us insights into the different ways that learning actually happens, and teachers should be able to apply them back in the classroom. Teachers are extremely powerful individuals who are able to shape the brains of so many children. If they do a good job, they are shaping individuals for life.

Source: UNESCO
