It’s not the armies of killer robots, but the impact on society that should worry us most about artificial intelligence: profound changes in the job market and in the nature of work, exacerbated income and wealth inequality, and a widening “AI divide” separating AI haves from have-nots.
This was the consensus of a panel examining the state of artificial intelligence at the 11th Annual Meeting of the New Champions, which opened today in Dalian. The panel, produced in collaboration with New Scientist magazine, brought leading industry figures together with thought leaders to weigh the conspicuous benefits of the rapid advances in AI against its potential negative consequences. While uniformly recognizing the need to unleash AI’s potential, the panellists underscored the need for governance systems and educational institutions to retool in the face of inevitable disruptions.
Vishal Sikka, Chief Executive Officer of Infosys, USA, and a Co-Chair of the Annual Meeting of the New Champions 2017, sought to demystify AI: “It’s not falling from the sky, or coming up from holy waters, or brought down from the mountain. It’s a human creation. It’s incredibly powerful. The key is it’s something we can all learn, and benefit from – not be victims of. The more we understand it and improve it, the less we have to fear.”
Ya-Qin Zhang, President of Baidu.com, People’s Republic of China, echoed Sikka in downplaying the dangers of the technology itself: “There is no mystery, and no need for fear,” he said, adding that “smart computer scientists will write the right code – to help us amplify our capabilities, not to control us.”
Kamal Sinclair, Director of New Frontier Lab Programs, Sundance Institute, USA, argued that AI’s benefits will be multiplied – not diminished – by the “checks and balances of inclusion.” Making AI technologies accessible and inclusive will, she said, “catalyse a groundswell of imagination” by bringing communities and the grassroots into the innovation process.
Likening both the potential and the downside risk of AI to nuclear energy in the 1940s and 1950s, Sikka said that, in both cases, what’s necessary is a system of containment. At present, he noted, we can’t articulate exactly how the deep learning systems that chew through vast oceans of data and arrive at decisions actually make those decisions. “The ability to ascribe behaviours to systems is incredibly important to putting containment around these systems,” he said.
“We have a moral obligation to control the technology,” said Wendell Wallach of the Interdisciplinary Center for Bioethics at Yale University, USA. “We first of all have a responsibility to ensure that it does not cause harm to human beings, and manage social impact,” he said, adding that we have a “second moral obligation: Who are we creating this world for? What is this world about that we’re creating? What is the role of human beings? Are we really creating a transhumanist technology? … We have an obligation to make sure [AI] serves humanity as a whole – not a small segment of humanity.”
Pascale Fung, Professor, Department of Electronic and Computer Engineering at Hong Kong University of Science and Technology, pointed out that engineers do not take something akin to the Hippocratic Oath. “Engineers are trained as engineers, not as medical doctors.” She argued that engineers “need more ethics education, more humanities and arts education, to build technology that serves people – that is human-centric … We’re not traditionally trained to do this job. There’s an overwhelming burden on today’s AI engineers.” In a similar vein, Wallach called for the creation of an interdisciplinary culture, noting the difficulty at present of getting engineers and social scientists to cooperate in any meaningful way. While interdisciplinary skills are often praised, this amounts to little more than lip service. In fact, as Wallach said, “No one is rewarded for having those skills.”
Fung also noted that AI is creating dangerous divisions within society. “I do see an emerging AI divide between consumers and developers, between countries that have more research labs and those that don’t … and between the genders,” she said, noting that while globally the majority of consumers are women, the majority of developers are men. “There’s a divide in all aspects of society,” said Fung. “Our job is to bridge these divides.”