Science & Technology

Artificial Intelligence: A Blessing or a Threat for Humanity?


In August 2018, Czech Technical University in Prague simultaneously hosted several conferences on AI-related topics: human-level AI, artificial general intelligence, biologically inspired cognitive architectures, and neural-symbolic integration technology. Reports were presented by prominent experts representing global leaders in artificial intelligence: Microsoft, Facebook, DARPA, MIT and Good AI. The reports described the current status of AI developments, identified the problems facing society that have yet to be resolved, and highlighted the threats arising from the further development of this technology. In this review, we will attempt to briefly identify the main problems and threats, as well as the possible ways to counter these threats.

To begin with, let us provide definitions for some of the terms that are commonly used in conjunction with AI in various contexts: weak, or specialized, AI; autonomous AI; adaptive AI; artificial general intelligence (AGI); strong AI; human-level AI; and super-human AI.

Weak, or specialized, AI is represented by all existing solutions without exception and implies the automated solution of one specific task, be it a game of Go or face recognition with CCTV footage. Such systems are incapable of independent learning for the purpose of solving other problems: they can only be reprogrammed by humans to do so.

Autonomous AI implies a system’s ability to function for protracted periods of time without the intervention of a human operator. This could be a solar-powered UAV performing a multi-day flight from Champs-Elysees in Paris to Moscow’s Red Square or back, independently selecting its route and recharging stops while avoiding all sorts of obstacles.

Adaptive AI implies the system’s ability to adapt to new situations and obtain knowledge that it did not possess at the time of its creation. For example, a system originally tasked with conducting conversations in Russian could independently learn new languages and apply this knowledge in conversation if it found itself in a new language environment or if it deliberately studied educational materials on these new languages.

Artificial general intelligence implies adaptability of such a high level that the corresponding system could, given the appropriate training, be used in a wide variety of activities. New knowledge could either be self-taught or learned with the help of an instructor. It is in this same sense that the notion of strong AI is often used in opposition to weak or specialized AI.

Human-level AI implies a level of adaptability comparable to that of a human being, meaning that the system is capable of mastering the same skills as a human and within comparable periods of time.

Super-human AI implies even greater adaptability and learning speeds, allowing the system to master knowledge and skills that humans would never be able to acquire.

Fundamental Problems Associated with Creating a Strong AI

Despite the multitude of advances in neuroscience, we still do not know exactly how natural intelligence works. For this same reason, we do not know for sure how to create artificial intelligence. There are a number of known problems that need to be resolved, as well as differing opinions on how these problems should be prioritized. For example, Ben Goertzel, who heads OpenCog and SingularityNET, international open-source projects to create artificial intelligence, believes that all the requisite technologies for creating an artificial general intelligence have already been developed, and that the only remaining task is to combine them in a way that ensures the necessary synergy. Other experts are more sceptical, pointing out that many of the problems discussed below need to be resolved first. Expert estimates for when a strong AI may be created also vary greatly, from ten or so years to several decades from now.

On the other hand, the emergence of a strong AI is a logical step in the general process of evolution, much like the emergence of molecules from atoms and cells from molecules, the development of the central nervous system from specialized cells, the emergence of social structures, and the development of speech, writing systems and, ultimately, information technology. Valentin Turchin demonstrates the logic behind the increasing complexity of information structures and organizational mechanisms in the process of evolution. Unless humanity perishes first, this evolution is inevitable and will, in the long run, rescue humankind, as only non-biological lifeforms will be able to survive the inevitable end of the Solar System and preserve our civilization’s information code in the Universe.

It is important to realize that creating a strong AI does not necessarily require an understanding of how natural intelligence works, just as developing a rocket does not require understanding how a bird flies. Such an AI will certainly be created, sooner or later, in one way or another, and perhaps even in several different ways.

Most experts identify the following fundamental problems that need to be solved before a general or strong AI can be created:

Few-shot learning: systems need to be developed that can learn from a small amount of material, in contrast to current deep-learning systems, which require massive amounts of specially prepared training data.

Strong generalization: creating recognition technologies capable of identifying objects in situations that differ from those in which they were encountered in the training materials.

Generative learning models: developing learning technologies in which the system memorizes not the features of the object to be recognised, but rather the principles of its formation. This would help in addressing the more profound characteristics of objects, providing for faster learning and stronger generalization.

Structured prediction and learning: developing learning technologies based on representing learning objects as multi-layered hierarchical structures, with lower-level elements defining higher-level ones. This could prove an alternative solution to the problems of fast learning and strong generalization.

Solving the problem of catastrophic forgetting, which affects the majority of existing systems: a system originally trained on one class of objects and then additionally trained to recognize a new class loses the ability to recognize objects of the original class.

Achieving incremental learning: the ability to gradually accumulate knowledge and perfect skills without losing previously obtained knowledge. Ideally, a system intended for interaction in natural languages should pass the so-called Baby Turing Test, demonstrating its ability to gradually master a language from the baby level to the adult level.

Solving the consciousness problem, i.e. coming up with a proven working model of conscious behaviour that ensures effective prediction and deliberate behaviour through the formation of an “internal worldview,” which could be used to seek optimum behavioural strategies for achieving goals without actually interacting with the real world. This would significantly improve the security, speed and energy efficiency of hypothesis testing, thus enabling a living or artificial system to learn independently within the “virtual reality” of its own consciousness. There are two applied sides to solving the consciousness problem. On the one hand, creating conscious AI systems would increase their efficiency dramatically. On the other hand, such systems would bring additional risks and ethical problems, seeing as they could, at some point, be equated with human beings in terms of self-awareness, with the ensuing legal consequences.
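The catastrophic forgetting problem listed above can be made concrete with a deliberately minimal sketch. This is a hypothetical illustration, not drawn from the article: the one-parameter model, the two tasks and the learning rate are invented for the example. A model is trained on one task, then retrained on a conflicting task, and its error on the first task grows sharply:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, xs, ys, lr=0.1, steps=200):
    # Plain gradient descent on squared error for the model y = w * x.
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

xs = rng.uniform(-1, 1, 100)

# Task A: learn y = 2x from scratch.
w = train(0.0, xs, 2 * xs)
err_a_before = np.mean((w * xs - 2 * xs) ** 2)

# Task B: retrain the same single parameter to learn y = -2x.
w = train(w, xs, -2 * xs)
err_a_after = np.mean((w * xs - 2 * xs) ** 2)

# Training on Task B overwrote the knowledge of Task A.
print(f"Task A error before: {err_a_before:.6f}, after: {err_a_after:.3f}")
```

Real neural networks exhibit the same failure mode at scale, because retraining reuses the same shared weights; mitigation techniques such as rehearsal and elastic weight consolidation exist precisely to counter it.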

Potential AI-Related Threats

Even the emergence of autonomous or adaptive AI systems, let alone general or strong AI, is associated with several threats of varying degrees of severity that are relevant today.

The first threat to humans may not necessarily come from a strong, general, human-level or super-human AI: an autonomous system capable of processing massive amounts of data at high speeds would suffice. Such a system could serve as the basis for so-called lethal autonomous weapons systems (LAWS), the simplest example being assassin drones, 3D-printed in large batches or in small numbers.

Second, a threat could be posed by a state (a potential adversary) gaining access to weapons systems based on more adaptive, autonomous and general AI with faster reaction times and better predictive ability.

Third, a threat for the entire world would be a situation based on the previous threat, in which several states would enter a new round of the arms race, perfecting the intelligence levels of autonomous weapon systems, as Stanislaw Lem predicted several decades ago.

Fourth, a threat to any party would be presented by any intellectual system (not necessarily a combat system, but one that could have industrial or domestic applications too) with enough autonomy and adaptivity to be capable not only of deliberate activity, but also of autonomous conscious target-setting, which could run counter to the individual and collective goals of humans. Such a system would have far more opportunities to achieve these goals due to its higher operating speeds, greater information processing performance and better predictive ability. Unfortunately, humanity has not yet fully researched or even grasped the scale of this particular threat.

Fifth, society is facing a threat in the form of the transition to a new level in the development of production relations in the capitalist (or totalitarian) society, in which a minority comes to control material production and excludes an overwhelming majority of the population from this sector thanks to ever-growing automation. This may result in greater social stratification, the reduced effectiveness of “social elevators” and an increase in the numbers of people made redundant, with adverse social consequences.

Finally, another potential threat to humanity in general is the increasing autonomy of global data-processing, information-distribution and decision-making systems, since the information-distribution speeds within such systems, and the scale of their interactions, could produce social phenomena that cannot be predicted from prior experience or existing models. For example, the social credit system currently being introduced in China is a unique experiment of truly civilizational scale that could have unpredictable consequences.

The problems of controlling artificial intelligence systems are currently associated, among other things, with the closed nature of existing applications based on “deep neural networks.” Such applications make it impossible to validate the correctness of a decision before it is implemented, and they do not allow for an analysis of the solution provided by the machine after the fact. This problem is being addressed by the emerging field of explainable artificial intelligence (XAI). The effort is aided by a renewed interest in integrating the associative (neural) and symbolic (logic-based) approaches to the problem.
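One simple post-hoc technique from the XAI toolbox is permutation feature importance: shuffle one input of a black-box model and measure how much its error grows. The sketch below is a hedged illustration; the stand-in "model" and data are invented for the example, and in practice the model would be a trained network whose internals cannot be inspected directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "black box": in reality this would be a trained deep network.
def model(X):
    return 3.0 * X[:, 0] + 0.0 * X[:, 1]

X = rng.normal(size=(500, 2))
y = model(X)

def permutation_importance(f, X, y, col):
    # Shuffle one input column and measure how much the error grows:
    # the bigger the increase, the more the model relies on that feature.
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return np.mean((f(Xp) - y) ** 2) - np.mean((f(X) - y) ** 2)

importances = [permutation_importance(model, X, y, c) for c in range(2)]
print(importances)  # feature 0 matters greatly, feature 1 not at all
```

The appeal of the method is that it needs no access to the model's internals, which is exactly the constraint the closed "deep neural network" applications impose.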

Ways to Counter the Threats

It appears absolutely necessary to take the following measures in order to prevent catastrophic scenarios associated with the further development and application of AI technologies.

An international ban on LAWS, as well as the development and introduction of international measures to enforce such a ban.

Governmental backing for research into the aforementioned problems (“explainable AI” in particular), into the integration of different approaches, and into the principles of creating target-setting mechanisms, with the aim of developing effective programming and control tools for intellectual systems. Such programming should be based on values rather than rules, and it is targets that need to be controlled, not actions.

Democratizing access to AI technologies and methods, including by re-investing the profits from introducing intellectual systems into the mass teaching of computing and cognitive technologies, creating open-source AI solutions, and devising measures that encourage existing “closed” AI systems to open their source code. For example, the Aigents project aims to create AI personal agents for mass users that operate autonomously and are immune to centralized manipulation.

Intergovernmental regulation of the openness of AI algorithms and of the operating protocols for data processing and decision-making systems, including the possibility of independent audits by international structures, national agencies and individuals. One initiative along these lines is the SingularityNET open-source platform and ecosystem for AI applications.

First published in our partner RIAC



After Google’s new set of community standards: What next?

Sisir Devkota


Weeks after Google’s community standards guidelines made headlines, the Digital Industry Group Inc. (an Australia-based NGO) rejected proposals from the regulatory body based in the southern hemisphere. The group claimed that regulating “fake news” would turn the Australian Competition and Consumer Commission into a moral police institution. In late August, Google itself forbade its employees from disseminating misleading information or airing internal debates. From the outset, the picture is a bit confusing. After the events in Australia, Google’s latest act of disciplinary intrusion seems driven by certain interests or interest groups.

A year earlier, Google had been shaken by claims that it protected top-level executives accused of sexual crimes; the issue took a serious turn and nearly disrupted company operations. If anything, Google’s recovery from the turmoil of 2018 clearly suggests a hierarchy desperate to curb actions that could damage the interests of its many stakeholders. There is no comprehensive evidence to suggest that Google held a view on how the regulations were proposed in Australia. After all, until proven otherwise, all whistleblowing social media posts and comments are, at some point in time, “fake”. The global giant has decided to discontinue all forms of unjustifiable freedom inside its premises, yet it profits by providing the platform for activism and censure of every kind. The Digital Industry Group wants the freedom to encourage creative digital content, but Google’s need to publish a community guideline looks more like a defensive shield against uncertainty.

In its statement, the disciplinary clause pointedly mentions the actions that will be taken against staff who circulate information on Google’s internal message boards. In 2017, female employees at Google were subjected to discrimination based on the gendering of working positions. Kevin Kernekee, an ex-employee who was fired in 2018, confirmed that staff bullying was at the core of such messaging platforms. Growing incidents inside Google and its recent community stance only fuel assumptions about the ghost haunting the internet giant’s reputation. Consequently, from the consumer’s point of view, an unstable organization of such global stature is an alarm.

The dissidents at Google are not entirely to blame. As many would argue, the very foundation of the company was built on the value of free expression at work. The open nature of Google’s interface is another example of what it stands for, at least in the eyes of consumers. Stakeholders would not wish for internal turmoil; it would betray the enormous trust invested in the workings of the company. If Google can backtrack on its core values under pressure, consumers cannot expect anything different. Google is not merely a search engine; for almost half of all internet users, it is almost everything.

“Be responsible, Be helpful, Be thoughtful”. These phrases are the opening remarks of the newly engineered community guideline. As the document claims, three principles govern the core values at Google. On closer inspection, it also sounds as if those values are based only on what the company expects from the people working for it. A global company that can resort to disciplining its staff via written texts can also trim the rights of its far-reaching consumer groups. It might only be the beginning, but the tail is on fire.



How to Design Responsible Technology

MD Staff


Biased algorithms and noninclusive data sets are contributing to a growing ‘techlash’ around the world. Today, the World Economic Forum, the international organisation for public-private cooperation, released a new approach to help governments and businesses counter these growing societal risks.

The Responsible Use of Technology report provides a step-by-step framework for companies and governments to pinpoint where and how they can integrate ethics and human rights-based approaches into innovation. Key questions and actions guide organizations through each phase of a technology’s development process and highlight what can be done, and when, to help organizations mitigate unethical practices. Notably, the framework can be applied to technology in the ‘final’ use and application phase, empowering users to play an active role in advocating for policies, laws and regulations that address societal risks.

The guide was co-designed by industry leaders from civil society, international organizations and businesses including BSR, the Markkula Centre for Applied Ethics, the United Nations Office of the High Commissioner for Human Rights, Microsoft, Uber, Salesforce, IDEO, Deloitte, Omidyar Network and Workday. The team examined national technology strategies, international business programmes and ethical task forces from around the world, combining lessons learned with local expertise to develop a guide that would be inclusive across different cultures.

“Numerous government and large technology companies around the world have announced strategies for managing emerging technologies,” said Pablo Quintanilla, Fellow at the World Economic Forum, and Director in the Office of Innovation, Salesforce. “This project presents an opportunity for companies, national governments, civil society organizations, and consumers to teach and to learn from each other how to better build and deploy ethically-sound technology. Having an inclusive vision requires collaboration across all global stakeholders.”

“We need to apply ethics and human rights-based approaches to every phase in the lifecycle of technology – from design and development by technology companies through to the end use and application by companies across a range of industries,” said Hannah Darnton, Programme Manager, BSR. “Through this paper, we hope to advance the conversation of distributed responsibility and appropriate action across the whole value chain of actors.”

“Here, we can draw from lessons learned from companies’ efforts to implement ‘privacy and security by design’,” said Sabrina Ross, Global Head of Marketplace Policy, Uber. “Operationalizing responsible design requires leveraging a shared framework and building it into the right parts of each company’s process, culture and commitments. At Uber, we’ve baked five principles into our product development process so that our marketplace design remains consistent with and accountable to these principles.”

This report is part of the World Economic Forum’s Responsible Development, Deployment and Use of Technology project. It is the first in a series tackling the topic of technology governance and will help inform the key themes at the Forum’s Global Technology Governance Summit in San Francisco in April 2020. The project team will work across industries to produce a more detailed suite of implementation tools for organizations, helping companies promote and train their own ‘ethical champions’. The steering committee now in place will co-design the next steps with the project team, building on the input already received from global stakeholders in Africa, Asia, Europe, North America and South America.

About the Centre for the Fourth Industrial Revolution Network

The Centre for the Fourth Industrial Revolution Network brings together more than 100 governments, businesses, start-ups, international organizations, members of civil society and world-renowned experts to co-design and pilot innovative approaches to the policy and governance of technology. Teams in Colombia, China, India, Israel, Japan, the UAE and the US are creating human-centred and agile policies to be piloted by policy-makers and legislators, shaping the future of emerging technologies in ways that maximize their benefits and minimize their risks. More than 40 projects are in progress across six areas: artificial intelligence, autonomous mobility, blockchain, data policy, drones and the internet of things.

The Network helped Rwanda write the world’s first agile aviation regulation for drones and is scaling this up throughout Africa and Asia. It also developed actionable governance toolkits for corporate executives on blockchain and artificial intelligence, co-designed the first-ever Industrial IoT (IIoT) Safety and Security Protocol and created a personal data policy framework with the UAE.



Digitally shaping a greener world

MD Staff


For the first time, Burhans is setting out to digitally map the land assets of one of the world’s largest land-owners. Photo by UN Environment Programme

Women were not allowed on map-making ship voyages until the 1960s; it was believed that they would bring bad luck. Yet Spanish nuns were already making maps in the 10th century.

The first A-Z street map of London was created after one woman got lost on her way home from a party, then woke up every day at 5 a.m. to chart the city’s 23,000 streets.

As it turns out, women have always contributed to the drawing of maps despite hurdles.

This puts Molly Burhans, founder of GoodLands, in good company. For the first time in history, she is setting out to digitally map the land assets of one of the world’s largest land-owners—the Catholic Church. 

The journey has been spiritual. Instead of becoming a nun, she decided to pursue digital mapping. “Our work is grounded in science, driven by design and inspired by values of stewardship and charity,” she explains.

Uncharted waters

It all started when a course in biological illustration turned into a fascination with how everything fits together. 

“You can’t do surgery unless you’ve studied human anatomy—and you can’t really do sound environmental work unless you’ve mapped the environment and landscape, and can visualize it,” she explains.

She was introduced to digital mapping by Dana Tomlin, the originator of Map Algebra and a professor of Geographic Information Systems at the University of Pennsylvania and Yale University. When she visited the Vatican in 2016, it got her thinking.
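Map Algebra, mentioned above, treats co-registered raster layers as grids that can be combined cell by cell. A minimal sketch of the idea follows; the layers, category codes and threshold are invented for illustration:

```python
import numpy as np

# Two toy raster layers covering the same 3x3 grid of cells:
# a land-cover classification and a tree-cover percentage.
land_cover = np.array([[1, 1, 2],
                       [2, 3, 3],
                       [1, 2, 3]])      # 1 = urban, 2 = farmland, 3 = forest
tree_cover = np.array([[10, 20, 60],
                       [55, 80, 90],
                       [ 5, 40, 75]])   # percent canopy per cell

# A "local" Map Algebra operation: combine the layers cell by cell to
# flag cells that are classified as forest AND have >50% tree cover.
dense_forest = (land_cover == 3) & (tree_cover > 50)

print(int(dense_forest.sum()))  # number of qualifying cells
```

GIS packages generalize this pattern with local, focal and zonal operators over real geo-referenced rasters, but the cell-by-cell arithmetic shown here is the core of the technique.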

“The Vatican has the most fantastic maps I’ve ever seen,” she said. “White, gold, platinum frescoes flanked the doors. I thought they must have the most incredible land datasets anywhere in the world.”

The Vatican is the smallest state in the world, yet the Catholic Church is one of the world’s biggest landowners. There are 250,000 Catholic-affiliated parishes, orphanages, community centers and retreat monasteries around the world, reaching an estimated 57.6 million people globally.

It is also the world’s largest non-government health care provider. The Pontifical Council for the Pastoral Care of Health Care Workers estimates that around 26 per cent of the world’s healthcare facilities are operated by the Roman Catholic Church.

Iyad Abumoghli, Principal Coordinator of UN Environment Programme’s Faith for Earth Initiative, said:

“Globally, faith-based organizations own 8 per cent of habitable land on the surface of the earth and 5 per cent of all commercial forests. There are around 37 million churches and 3.6 million mosques around the world.

“Burhans’ work supports UNEP’s Faith for Earth Initiative to harness the socio-economic power of faith-based organizations, where preaching meets practice.

“Mapping faith-owned assets will contribute to strategically employ faith values in managing them, ultimately leading to fighting climate change and curbing ecosystem degradation.”

Fear of the unknown

Burhans reflects: “Why not leverage this network for environmental good?”

But then the hurdle hit. The data wasn’t digital. In fact—it wasn’t even there. 

“None of the land had been digitally mapped. I was surprised – this was bigger than I’d realized. We can’t manage property without foundational data—never mind ecosystem restoration. So, I just kept going to find the data.”

When she confirmed that the data did not exist, Burhans asked the Holy See for permission to create the first comprehensive global digital map of the Catholic Church’s footprint and people in history, working as Chief Cartographer with a large team at the mapping software company Esri.

Her mission: to help faith-based communities, such as religious orders, dioceses, and the Vatican to first understand what land assets they own. Next, figure out how to leverage those assets for ecosystem restoration on a scale parallel to its massive global health network.

The power of knowing

For Burhans, maps represent the power to shape our world for better health and environmental protection. “We dare to use land for environmental good. I can’t emphasize how important our surroundings and environment are,” she notes.

“Maps are just the tool, allowing us to capture complex information, from biodiversity to soil type, all in one place. If a picture is worth a thousand words, then a map is worth a million.”

“We can map where ecological failure might trigger heavy migration. Or, where sea level rise might force poor communities to move. We can see where more trees could cool hot cities; where green spaces could bring health benefits in areas with high respiratory problems.”

For Burhans, the potential of a large data hub capturing all this information across the church’s land portfolio is exciting—and unprecedented. It also has implications for all land owners and governments around the world.

Her team maps environmental, social and financial factors of a property portfolio. Centralizing information in one digital hub across sectors—health care, education, relief—could save tens of millions each year, she reflects.

She is also asking bigger questions: “How will artificial intelligence transform our world? How can we leverage land and religion to become the solution to our crises? We must be at the forefront of these issues.”

Mapping the church’s global footprint

Honing big data for environmental restoration is part of Burhans’ vision. Some of this is technical: bringing the Catholic Church into the digital era, “with relevancy, with the right information to roll out safety.”

But the vision is also about people. “We want to help people realize that mapping assets is vital to manage them responsibly. We cannot help the church improve its footprint if we don’t know what it has.”

“We all have different talents and gifts. Mine lean towards creating new technology and applying it to make land work for the greater good. That’s my vocation: to make sure that’s done—and done with integrity.”

UN Environment



Copyright © 2019 Modern Diplomacy