Science & Technology

Artificial intelligence: Between myth and reality

Jean-Gabriel Ganascia

Are machines likely to become smarter than humans? No, says Jean-Gabriel Ganascia: this is a myth inspired by science fiction. The computer scientist walks us through the major milestones in artificial intelligence (AI), reviews the most recent technical advances, and discusses the ethical questions that require increasingly urgent answers.

As a scientific discipline, AI officially began in 1956, during a summer workshop organized by four American researchers – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – at Dartmouth College in New Hampshire, United States. Since then, the term “artificial intelligence”, probably coined first for its striking impact, has become so popular that today everyone has heard of it. This branch of computer science has continued to expand over the years, and the technologies it has spawned have contributed greatly to changing the world over the past sixty years.

However, the success of the term AI sometimes rests on a misunderstanding: its use to refer to an artificial entity endowed with intelligence which, as a result, would compete with human beings. This idea, which harks back to ancient myths and legends, like that of the golem [from Jewish folklore, an image endowed with life], has recently been revived by contemporary personalities including the British physicist Stephen Hawking (1942-2018), American entrepreneur Elon Musk, American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). We will not discuss this second meaning here because, at least for now, it can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed by experiments and empirical observations.

For McCarthy, Minsky, and the other researchers of the Dartmouth Summer Research Project on Artificial Intelligence, AI was initially intended to simulate each of the different faculties of intelligence – human, animal, plant, social or phylogenetic – using machines. More precisely, this scientific discipline was based on the conjecture that all cognitive functions – especially learning, reasoning, computation, perception, memorization, and even scientific discovery or artistic creativity – can be described with such precision that it would be possible to programme a computer to reproduce them. In the more than sixty years that AI has existed, there has been nothing to disprove or irrefutably prove this conjecture, which remains both open and full of potential.

Uneven progress

In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.

The time of the prophets

First of all, in the euphoria of AI’s origins and early successes, researchers gave free rein to their imagination, indulging in certain reckless pronouncements for which they were heavily criticized later. For instance, in 1958, the American political scientist and economist Herbert A. Simon – who received the Nobel Prize in Economic Sciences in 1978 – declared that, within ten years, machines would become world chess champions if they were not barred from international competitions.

The dark years

By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.

Semantic AI

The work went on nevertheless, but the research was given new direction. It focused on the psychology of memory and the mechanisms of understanding – with attempts to simulate these on computers – and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s, and also led to the development of expert systems, so called because they use the knowledge of skilled specialists to reproduce their thought processes. Expert systems raised enormous hopes in the early 1980s with a whole range of applications, including medical diagnosis.

Neo-connectionism and machine learning

Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and to automatically reprogramme themselves, using their own experiences.

This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where techniques from AI, computer science, artificial life and other disciplines were combined to produce hybrid systems.
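To make the idea of a machine that “reprogrammes itself from experience” concrete, here is a toy sketch in Python of one of the earliest learning algorithms, the perceptron. The data, learning rate and number of passes are invented for illustration; nothing here comes from the article itself.

    # Toy machine learning: a one-neuron classifier adjusts its own
    # parameters (weights) from experience (labelled examples) instead
    # of being explicitly reprogrammed. Data and settings are invented.
    examples = [([1.0, 0.0], 1), ([0.0, 1.0], 1),
                ([0.0, 0.0], 0), ([1.0, 1.0], 1)]   # the logical OR function

    w, b, lr = [0.0, 0.0], 0.0, 0.1                 # weights, bias, learning rate

    for _ in range(20):                             # a few passes over the data
        for x, target in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                     # nonzero only on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err

    print(w, b)  # the learned weights now encode the OR rule

After a few passes the weights stop changing: the machine has, in effect, rewritten part of its own program from examples rather than from explicit instructions.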

From AI to human-machine interfaces

Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to the calculation of emotions (affective computing), which evaluates the reactions of a subject feeling emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).

Renaissance of AI

Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques, based on the use of formal neural networks. A range of very successful applications in several areas – including speech and image recognition, natural language comprehension and autonomous cars – are leading to an AI renaissance.
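As a minimal sketch of what a “formal neural network” computes, the following Python fragment (assuming only NumPy) chains two layers of formal neurons. The sizes and weights are random placeholders; in deep learning, many such layers are stacked and the weights are fitted to big data.

    import numpy as np

    # Each layer multiplies its input by a weight matrix and applies a
    # nonlinearity; deep learning stacks many such layers. The weights
    # below are random placeholders, not a trained model.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)         # a toy input with 4 features
    W1 = rng.normal(size=(8, 4))   # first layer: 4 inputs -> 8 units
    W2 = rng.normal(size=(2, 8))   # second layer: 8 units -> 2 outputs

    h = np.maximum(0, W1 @ x)      # ReLU nonlinearity on the hidden layer
    y = W2 @ h                     # output scores, e.g. for two classes
    print(y)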

Applications

Many achievements using AI techniques surpass human capabilities – in 1997, a computer programme defeated the reigning world chess champion, and more recently, in 2016, other computer programmes have beaten the world’s best Go [an ancient Chinese board game] players and some top poker players. Computers are proving, or helping to prove, mathematical theorems; knowledge is being automatically constructed from huge masses of data, in terabytes (10¹² bytes), or even petabytes (10¹⁵ bytes), using machine learning techniques.

As a result, machines can recognize speech and transcribe it – just like typists did in the past. Computers can accurately identify faces or fingerprints from among tens of millions, or understand texts written in natural languages. Using machine learning techniques, cars drive themselves; machines are better than dermatologists at diagnosing melanomas using photographs of skin moles taken with mobile phone cameras; robots are fighting wars instead of humans; and factory production lines are becoming increasingly automated.

Scientists are also using AI techniques to determine the function of certain biological macromolecules, especially proteins and genomes, from the sequences of their constituents ‒ amino acids for proteins, bases for genomes. More generally, all the sciences are undergoing a major epistemological rupture with in silico experiments – so named because they are carried out by computers from massive quantities of data, using powerful processors whose cores are made of silicon. In this way, they differ from in vivo experiments, performed on living matter, and above all, from in vitro experiments, carried out in glass test-tubes.

Today, AI applications affect almost all fields of activity – particularly in industry, banking, insurance, health and defence. Many routine tasks are now automated, transforming numerous trades and eventually eliminating some.

What are the ethical risks?

With AI, most dimensions of intelligence ‒ except perhaps humour ‒ are subject to rational analysis and reconstruction, using computers. Moreover, machines are exceeding our cognitive faculties in most fields, raising fears of ethical risks. These risks fall into three categories – the scarcity of work, because it can be carried out by machines instead of humans; the consequences for the autonomy of the individual, particularly in terms of freedom and security; and the overtaking of humanity, which would be replaced by more “intelligent” machines.

However, if we examine the reality, we see that work (done by humans) is not disappearing – quite the contrary – but it is changing and calling for new skills. Similarly, an individual’s autonomy and freedom are not inevitably undermined by the development of AI – so long as we remain vigilant in the face of technological intrusions into our private lives.

Finally, contrary to what some people claim, machines pose no existential threat to humanity. Their autonomy is purely technological, in that it corresponds only to material chains of causality that go from the taking of information to decision-making. On the other hand, machines have no moral autonomy: even if they confuse and mislead us in the process of making decisions, they have no will of their own and remain subordinate to the objectives that we have assigned to them.

Source: UNESCO

French computer scientist Jean-Gabriel Ganascia is a professor at Sorbonne University, Paris. He is also a researcher at LIP6, the computer science laboratory at the Sorbonne; a fellow of the European Association for Artificial Intelligence; a member of the Institut Universitaire de France; and chairman of the ethics committee of the National Centre for Scientific Research (CNRS), Paris. His current research interests include machine learning, symbolic data fusion, computational ethics, computer ethics and digital humanities.


Science & Technology

Digital tracking of environmental risks offers insights to humanitarian actors

MD Staff



By the end of today, many people will have made life-changing decisions relying on their best guess or their instinct. Some of those decisions will yield great results; others will imperil individuals, corporations and communities.

Humanitarian crises require that we make difficult choices. As crises become increasingly complex, and as their impact on the environment grows, the choices we make must be the right ones. And to make sound, informed decisions, we need data.

Thankfully today, all those who work in the environmental field have at their fingertips a combination of global environmental data, technologies and data science tools and techniques. These have the potential to create insights that can underpin a sustainable future and profoundly transform our relationship with our planet.

For decades, the UN Environment Programme has been working with the Office for the Coordination of Humanitarian Affairs, and partners such as the UN Refugee Agency, to make sense of environmental data for improved humanitarian planning.

In December last year, UN Environment with support from the UN Refugee Agency piloted an innovative tool for environmental data gathering and risk assessment, the Nexus Environmental Assessment Tool (NEAT+). The tool was deployed in the Mantapala refugee settlement in northern Zambia.

Built around existing farmland in 2018, Mantapala refugee settlement, near Nchelenge in northern Zambia, can host up to 20,000 people and was designed to enable refugees to make a living while contributing to local development. The surrounding humid sub-tropical Mantapala Forest Reserve—an area characterized by rich biodiversity—includes the productive Wet Miombo Woodland.

According to the UN Refugee Agency, Zambia hosts at least 41,000 refugees from the Democratic Republic of the Congo, and Mantapala refugee settlement is home to around 13,000 of them.

Daily life isn’t easy. Flash floods are common during the long rainy season, when rainfall is particularly heavy. In addition, less than 20 per cent of Nchelenge district’s households have access to electricity, and even when they do, it is so expensive that people prefer to use firewood and charcoal as their primary cooking fuels.

“With pressure mounting on natural resources throughout the world, we are exploring how to support humanitarian actors in collecting, sharing and processing environmental data for better decision-making using innovative digital environmental tools such as the Nexus Environmental Assessment Tool (NEAT+) and MapX—a United Nations-backed platform—in Mantapala settlement and beyond,” says David Jensen, UN Environment’s Head of Environmental Cooperation for Peacebuilding and Co-Director of MapX.

What makes NEAT+ so appealing is its simplicity. It is a user-friendly environmental screening tool for humanitarian contexts, which combines environmental data with site-specific questions to automatically analyse and flag priority environmental risks. The tool was developed by eight humanitarian and environmental organizations as part of the Joint Initiative, a multi-stakeholder project aimed at improving collaboration between environmental and humanitarian actors. NEAT+ supports humanitarian actors in quickly identifying issues of concern to increase the efficiency, accountability and sustainability of emergency or recovery interventions.

“NEAT+ answers the demand of a simple process to assess the sensitivity of the environment in displacement settings. It overlays environmental realities with a proposed humanitarian intervention, identifying risk and mitigation measures,” says Emilia Wahlstrom, Programme Officer, UN Environment / Office for the Coordination of Humanitarian Affairs Joint Unit.

NEAT+ runs on KoBo, a free, open-source data collection platform built by the Harvard Humanitarian Initiative, which allows data to be collected by phone, tablet or computer. Once the data is recorded, the programme automatically generates a report in Excel, categorizing risks as high, medium or low and providing information that can help mitigate them.
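As a rough illustration of what such automatic screening looks like, here is a minimal sketch in Python. The questions, scoring scale and thresholds are invented for the example; NEAT+’s actual survey content and logic are not described in this article.

    # Hypothetical NEAT+-style screening: site-specific survey answers are
    # scored per environmental topic, then binned into high / medium / low
    # risk. Questions, scales and thresholds are invented illustrations.
    site_answers = {
        "distance_to_protected_area_km": 2.0,   # from a KoBo-style survey
        "households_using_firewood_pct": 85.0,
        "flood_events_last_year": 3,
    }

    def risk_scores(answers):
        """Combine answers into per-topic scores on an invented 0-10 scale."""
        return {
            "deforestation": min(10, answers["households_using_firewood_pct"] / 10),
            "habitat_loss": 10 - min(10, answers["distance_to_protected_area_km"] * 2),
            "flooding": min(10, answers["flood_events_last_year"] * 2.5),
        }

    def categorize(score):
        """Bin a 0-10 score into the report's three categories."""
        if score >= 7:
            return "high"
        if score >= 4:
            return "medium"
        return "low"

    for topic, score in risk_scores(site_answers).items():
        print(f"{topic}: {score:.1f} -> {categorize(score)} risk")

Run on the invented answers above, this flags deforestation and flooding as high risks and habitat loss as medium: the kind of prioritized summary the Excel report provides.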

As a next step, NEAT+ will draw increasingly on MapX, an online, open-source, fully customizable platform for accessing and visualizing geospatial environmental data. It offers various tools to highlight different environmental risks such as deforestation, natural hazards and flood risks. NEAT+ will use MapX to gather and visualize data.

In the Mantapala settlement, the NEAT+ assessment tool was used to identify negative environmental and livelihood impacts in the settlement, while MapX spatial data highlighted nearby areas of environmental concern.

The results showed opportunities for environmental action. Where there was a risk of deforestation, alternative livelihoods and agroforestry programmes could be supported. Agricultural plots vulnerable to flood damage are undergoing modification to prevent further deforestation and to reduce flood risks.

“Developing a digital ecosystem for the environment offers the possibility to access the best available data for decision-making. Tools such as MapX and NEAT+ are critical in mitigating the effects of sudden-onset natural disasters and slow-onset environmental change and degradation,” says Jensen.

“Developing and applying the NEAT+ tool has shown us the added value the environmental community can bring to the frontlines of humanitarian response. By taking the time to understand the environmental context they operate in, humanitarian actors are designing programmes that are saving money, contributing to a healthy environment, and supporting the dignity, livelihoods and health of affected people. This is critical for an increasingly complex and protracted global humanitarian crisis panorama,” comments Wahlstrom.

In 2019, the same actors who developed the NEAT+ tool, the Joint Initiative partners, launched the Environment and Humanitarian Action Connect website. Environment and Humanitarian Action Connect is a unique digital tool spanning the humanitarian-environment nexus and represents the first comprehensive online repository of environmental and humanitarian action tools and guidance. It is easily searchable and readily accessible, whether at the office, at home, or in the field. The content aligns with the humanitarian programme cycle with specific guidance available for humanitarian clusters and themes.

Environment and Humanitarian Action Connect is administered and updated by the United Nations Environment / Office for the Coordination of Humanitarian Affairs Joint Unit. Through the Joint Unit, UN Environment and OCHA respond as one to the environmental dimensions of emergencies. The partnership assists countries affected by disasters and crises and works to enhance the sustainability of humanitarian action. The partnership has supported almost 100 countries and conducted over 200 missions, and celebrates its 25th anniversary this year.

Source: UN Environment


Science & Technology

China’s Experience with High Speed Rail Offers Lessons for Other Countries

MD Staff


China has put into operation over 25,000 kilometers of dedicated high-speed railway (HSR) lines since 2008, far more than the total high-speed lines operating in the rest of the world.  What type of planning, business models, and approaches to construction enabled this rapid growth? In an era when many railways face declining ridership, what pricing and services make high-speed rail attractive to this large number of passengers and maintain financial and economic viability? A new World Bank study seeks to answer these and other questions.

“China has built the largest high-speed rail network in the world. The impacts go well beyond the railway sector and include changed patterns of urban development, increases in tourism, and promotion of regional economic growth. Large numbers of people are now able to travel more easily and reliably than ever before, and the network has laid the groundwork for future reductions in greenhouse gas emissions,” said Martin Raiser, World Bank Country Director for China.

The World Bank has financed some 2,600 km of high-speed rail in China to date. Building on analysis and experience gained through this work and relevant Chinese studies, China’s High-Speed Rail Development summarizes key lessons and practices for other countries that may be considering high-speed rail investments.

A key enabling factor identified by the study is the development of a comprehensive long-term plan to provide a clear framework for the development of the system. China’s Medium- and Long-Term Railway Plan looks up to 15 years ahead and is complemented by a series of Five-Year Plans.

In China, high-speed rail service is competitive with road and air transport for distances of up to about 1200 km. Fares are competitive with bus and airfares and are about one-fourth the base fares in other countries. This has allowed high-speed rail to attract more than 1.7 billion passengers a year from all income groups. Countries with smaller populations will need to choose routes carefully and balance the wider economic and social benefits of improved connectivity against financial viability concerns.

A key factor keeping costs down is the standardization of designs and procedures. The construction cost of the Chinese high-speed rail network, at an average of $17 million to $21 million per km, is about two-thirds of the cost in other countries.
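Taking the quoted figures at face value, a quick back-of-the-envelope calculation in Python (ours, not the study’s) shows what the per-kilometre range implies for the network as a whole and for typical costs elsewhere.

    # Back-of-the-envelope arithmetic from the figures quoted above. The
    # per-km range and two-thirds ratio come from the study; the derived
    # totals are our own illustration.
    km = 25_000                    # network size quoted above
    low, high = 17e6, 21e6         # USD per km in China

    print(f"implied network cost: ${km * low / 1e9:.0f}-{km * high / 1e9:.0f} billion")
    print(f"implied cost elsewhere: ${low / (2 / 3) / 1e6:.1f}-{high / (2 / 3) / 1e6:.1f} million per km")

This puts the implied cost of the whole network in the region of $425-525 billion, and the comparable per-kilometre cost in other countries at roughly $25-32 million.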

The study also looks into the economic benefits of HSR services. The rate of return of China’s network as of 2015 is estimated at 8 percent, well above the opportunity cost of capital in China and most other countries for major long-term infrastructure investments. Benefits include shortened travel times, improved safety and facilitation of labor mobility, and tourism. High-speed networks also reduce operating costs, accidents, highway congestion, and greenhouse gas emissions as some air and auto travelers switch to rail.

This report is the first of a series of five studies of transport in China—high-speed rail, highways, urban transport, ports, and inland waterways—produced by TransFORM, a knowledge platform developed by the World Bank and China’s Ministry of Transport to share Chinese and international transport experiences and facilitate learning in China and other countries.


Science & Technology

Net Neutrality, EU final call on Internet governance?


It is possible to celebrate the ability of European models of pluralism protection to adapt to the new challenges posed by technological progress. The European Union has, in particular, established a framework favourable to innovation by liberalizing the telecommunications market. It has also reaffirmed its conception of the digital world through numerous regulations concerning liability for distributed content, cybersecurity, taxation and competition, as well as culture, with the recent directive on copyright. There is therefore an obvious convergence between infrastructure and content, but these two regulatory bodies still have specific missions within the European Union. A 2009 European regulation created the Body of European Regulators for Electronic Communications (BEREC) to better formalize joint action by independent regulators and their relations with the European institutions.

However, in the digital sphere, US hegemony remains undeniable. All the more so given the fierce economic competition between the United States and China over who will dominate that sphere. The debate over ending net neutrality is largely shaped by the American approach to digital regulation, which is at the antipodes of the European strategy. Net neutrality rules were adopted by the Federal Communications Commission under President Barack Obama, but were abandoned under the administration of President Donald Trump. Net neutrality is a founding principle of the Internet, which ensures that telecom operators do not discriminate between the communications of their users, but remain mere transmitters of information. The legal framework for net neutrality in the European Union (EU) is laid down by Article 3 of EU Regulation 2015/2120. This principle allows all users, regardless of their resources, to access the same network as a whole. The regulation thus guarantees that all users can communicate freely, through effective and fair competition between network operators and telecommunications service providers.

The arrival of Netflix, the subscription video-on-demand service, has polarized the essentially positive view of net neutrality in the EU. Thus Olivier Schrameck, then president of the CSA, the French audiovisual regulator, declared in a speech on 3 July 2014, at the 11th conference of the association for the promotion of the audiovisual sector (APA), that we “must put an end to the absolutist conception of net neutrality”. Indeed, the service consumes a large share of bandwidth in the evening without contributing financially in return, and this surge in demand has pushed up the cost of network infrastructure. Proponents of ending neutrality believe that it primarily benefits actors like Google or Facebook, which already enjoy a favourable tax regime, thereby strengthening the power of large digital players. By ending net neutrality, providers would be able to slow down data traffic from certain websites and prioritize others, charging differently depending on the content. It seems legitimate to wonder whether the EU should follow the path of Donald Trump’s administration by changing the rules of the Internet. However, net neutrality appears to be a fundamental instrument for protecting EU fundamental rights on the Internet, such as freedom of expression and the right to receive and impart information. This adds political objectives to a debate that otherwise seems dominated by the will to preserve an economic model of pricing in two-sided markets.

If net neutrality is fundamental to preserving the European model of pluralism of information and consumer protection, how can it be maintained in the digital age? I personally believe net neutrality should be approached in terms of how to conceptualize its regulation rather than by imagining its end. A prescriptive ex-ante regulation, for instance, could undermine innovation. The flexibility of European competition law allows it to address a wide variety of sectors, including the challenges of the digital economy, and it would be dangerous to move away from it. Today, the way the Internet works rests on distorted competition; there is a major dysfunction in the digital market, which poses a significant risk to our economy. Competition law should be rethought so as to create new competitors, as earlier telecoms regulation did by creating a favourable environment for actors competing with a monopoly. The current regulation, moreover, leaves room for divergent national judicial interpretations of net neutrality, which lead to divergent implementations, since data traffic is treated according to each national jurisdiction’s reading.

Although useful, competition alone is not enough to regulate the digital sphere. Digital platforms, for example, do not necessarily have an interest in ensuring the diversity and quality of their content. In digital regulation, member states cannot act alone, since digital technology is by nature global in scope. While the prospect of global digital regulation remains distant, it is possible to consolidate regulation on a European scale, especially since the GDPR establishes networked regulation, with an obligation of cooperation between the different regulatory bodies across Europe. Europe therefore has the tools to combine regulation and innovation, but difficulties remain in implementation, including the lack of common decision-making between member states that results from a true “balkanisation of the web”. The taxation of the GAFA [Google, Apple, Facebook, Amazon] likewise illustrates the disparate opinions that hold back the prospect of Europe acting as a unified player in the digital domain.

