Powerful digital tools using artificial intelligence (AI) software are helping in the fight against COVID-19, and have the potential to improve the world in many other ways. However, as AI seeps into more areas of daily life, it’s becoming clear that its misuse can cause serious harm, prompting the UN to call for strong international regulation of the technology.
The phrase “artificial intelligence” can conjure up images of machines that can think and act just like humans, independent of any oversight from actual flesh-and-blood people. Movie versions of AI tend to feature super-intelligent machines attempting to overthrow humanity and conquer the world.
The reality is more prosaic: in practice, the term tends to describe software that can solve problems, find patterns and, to a certain extent, “learn”. This is particularly useful when huge amounts of data need to be sorted and understood, and AI is already being used in a host of scenarios, particularly in the private sector.
Examples include chatbots able to conduct online correspondence; online shopping sites which learn how to predict what you might want to buy; and AI journalists writing sports and business articles (this story was, I can assure you, written by a human).
And, whilst a recent news story from Iran has revived fears about the use of killer robots (Iranian authorities have claimed that a “machine gun with AI” was used to assassinate the country’s most senior nuclear scientist), negative stories connected with AI are more likely to concern its misuse, and old-fashioned human error. These have included exam grades incorrectly downgraded in the UK, an innocent man sent to jail in the USA, and personal data stolen worldwide.
Ahead of the launch of a UN guide to understanding the ethics of AI, here are five things you should know about the use of AI, its consequences, and how it can be improved.
1) The consequences of misuse can be devastating
In January, an African American man in the US state of Michigan was arrested for a shoplifting crime he knew nothing about. He was taken into custody after being handcuffed outside his house in front of his family.
This is believed to be the first wrongful arrest of its kind: the police officers involved had trusted facial recognition AI to catch their man, but the tool hadn’t learned to distinguish between Black faces, because the images used to train it had mostly been of white faces.
Luckily, it quickly became clear that he looked nothing like the suspect seen in a still taken from store security cameras, and he was released, although he spent several hours in jail.
And, in July, there was uproar in the UK when the dreams of many students hoping to go to the university of their choice were dashed by a computer programme used to assess their grades (traditional exams had been cancelled because of the COVID-19 pandemic).
To work out what the students would have got if they had sat exams, the programme took their existing grades, and also took into account their school’s track record over time. This ended up penalising bright students from minority and low-income neighbourhoods, who are more likely to attend schools with, on the whole, lower average grades than schools attended by wealthier students.
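The mechanism can be illustrated with a toy calculation. This is a hypothetical sketch, not the actual grading algorithm: the function name, the blending weight and the numbers below are all assumptions made purely for illustration.

```python
def predict_grade(student_prior, school_history_mean, weight=0.5):
    """Blend a student's own record with their school's historical average.

    A hypothetical illustration of anchoring grades to school history;
    the weighting scheme is an assumption, not the real algorithm.
    """
    return (1 - weight) * student_prior + weight * school_history_mean

# A top student (own record: 90) at a school whose historical
# average is 60 gets pulled down towards that average:
print(predict_grade(90, 60))  # 75.0
```

The heavier the weight placed on school history, the more an outstanding student at a historically low-scoring school is marked down, regardless of individual ability.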
These examples show that, for AI tools to work properly, well-trained data scientists need to work with high-quality data. Unfortunately, much of the data used to teach AI is currently taken from consumers around the world, often without their explicit consent: poorer countries often lack the ability to ensure that personal data are protected, or to protect their societies from the damaging cyber-attacks and misinformation that have grown since the start of the COVID-19 pandemic.
2) Hate, division and lies are good for business
Many social media companies have come under fire from critics for using AI-powered algorithms to micro-target users and send them tailored content that reinforces their prejudices. The more inflammatory the content, the more chance that it will be consumed and shared.
The reason these companies are happy to “push” socially divisive, polarizing content to their users is that it increases the likelihood that users will stay longer on the platform, which keeps advertisers happy and boosts profits.
This has boosted the popularity of extremist, hate-filled postings, spread by groups that would otherwise be little-known fringe outfits. During the COVID-19 pandemic, it has also led to the dissemination of dangerous misinformation about the virus, potentially leading to more people becoming infected, many experts say.
3) Global inequality is mirrored online
There is strong evidence to suggest that AI is playing a role in making the world more unequal, and is benefiting a small proportion of people. For example, more than three-quarters of all new digital innovation and patents are produced by just 200 firms. Out of the 15 biggest digital platforms we use, 11 are from the US, whilst the rest are Chinese.
This means that AI tools are mainly designed by developers in the West. In fact, these developers are overwhelmingly white men, who also account for the vast majority of authors on AI topics. The case of the wrongful arrest in Michigan is just one example of the dangers posed by a lack of diversity in this highly important field.
It also means that, by 2030, North America and China are expected to get the lion’s share of the economic gains, expected to be worth trillions of dollars, that AI is predicted to generate.
4) The potential benefits are enormous
This is not to say that AI should be used less: innovations using the technology are immensely useful to society, as we have seen during the pandemic.
Governments all around the world have turned to digital solutions to new problems, from contact-tracing apps, to tele-medicine and drugs delivered by drones, and, in order to track the worldwide spread of COVID-19, AI has been employed to trawl through vast stores of data derived from our interactions on social media and online.
The benefits go far beyond the pandemic, though: AI can help in the fight against the climate crisis, powering models that could help restore ecosystems and habitats, and slow biodiversity loss; and save lives by helping humanitarian organizations to better direct their resources where they are most needed.
The problem is that AI tools are being developed so rapidly that neither designers, corporate shareholders nor governments have had time to consider the potential pitfalls of these dazzling new technologies.
5) We need to agree on international AI regulation
For these reasons, the UN education, science and culture agency, UNESCO, is consulting a wide range of groups, including representatives from civil society, the private sector, and the general public, in order to set international AI standards, and ensure that the technology has a strong ethical base, which encompasses the rule of law, and the promotion of human rights.
Important areas that need to be considered include the importance of bringing more diversity in the field of data science to reduce bias, and racial and gender stereotyping; the appropriate use of AI in judicial systems to make them fairer as well as more efficient; and finding ways to ensure that the benefits of the technology are spread amongst as many people as possible.
First Quantum Computing Guidelines Launched as Investment Booms
National governments have invested over $25 billion in quantum computing research, and over $1 billion in venture capital deals have closed in the past year – more than in the previous three years combined. Quantum computing promises to disrupt the future of business, science, government, and society itself, but an equitable framework is crucial to address future risks.
A new Insight Report released today at the World Economic Forum Annual Meeting 2022 provides a roadmap for these emerging opportunities across public and private sectors. The principles have been co-designed by a global multistakeholder community composed of quantum experts, emerging technology ethics and law experts, decision makers and policy makers, social scientists and academics.
“The critical opportunity at the dawn of this historic transformation is to address ethical, societal and legal concerns well before commercialization,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “This report represents an early intervention and the beginning of a multi-disciplinary, global conversation that will guide the development of quantum computing to the benefit of all society.”
“Quantum computing holds the potential to help solve some of society’s greatest challenges, and IBM has been at the forefront of bringing quantum hardware and software to communities of discovery worldwide,” said Dr. Heike Riel, IBM Fellow, Head of Science and Technology and Lead, Quantum, IBM Research Europe. “This report is a key step in initiating the discussion around how quantum computing should be shaped and governed, for the benefit of all.”
Professor Bronwyn Fox, Chief Scientist at CSIRO, Australia’s national science agency, said: “The Principles reflect conversations CSIRO’s scientists have had with partners from around the world who share an ambition for a responsible quantum future. Embedding responsible innovation in quantum computing is key to its successful deployment and uptake for generations to come. CSIRO is committed to ensuring these Principles are used to support a strong quantum industry in Australia and generate significant social and public good.”
In adapting to the coming hybrid model of classical, multi-cloud and, soon, quantum computing, the Forum’s framework establishes best-practice principles and core values. These guidelines lay the foundation for a new information-processing paradigm while ensuring stakeholder equity, risk mitigation and consumer benefit.
The governance principles are grouped into nine themes and underpinned by a set of seven core values. The themes, and the goals defining each principle, are:
1. Transformative capabilities: Harness the transformative capabilities of this technology and the applications for the good of humanity while managing the risks appropriately.
2. Access to hardware infrastructure: Ensure wide access to quantum computing hardware.
3. Open innovation: Encourage collaboration and a precompetitive environment, enabling faster development of the technology and the realization of its applications.
4. Creating awareness: Ensure the general population and quantum computing stakeholders are aware, engaged and sufficiently informed to enable ongoing responsible dialogue and communication; stakeholders with oversight and authority should be able to make informed decisions about quantum computing in their respective domains.
5. Workforce development and capability-building: Build and sustain a quantum-ready workforce.
6. Cybersecurity: Ensure the transition to a quantum-secure digital world.
7. Privacy: Mitigate potential data-privacy violations through theft and processing by quantum computers.
8. Standardization: Promote standards and road-mapping mechanisms to accelerate the development of the technology.
9. Sustainability: Develop a sustainable future with and for quantum computing technology.
Quantum computing core values that hold across the themes and principles:
Common good: The transformative capabilities of quantum computing and its applications are harnessed to ensure they will be used to benefit humanity.
Accountability: Use of quantum computing in any context has mechanisms in place to ensure human accountability, both in its design and in its uses and outcomes. All stakeholders in the quantum computing community are responsible for ensuring that the intentional misuse of quantum computing for harmful purposes is not accepted or inadvertently positively sanctioned.
Inclusiveness: In the development of quantum computing, insofar as possible, a broad and truly diverse range of stakeholder perspectives are engaged in meaningful dialogue to avoid narrow definitions of what may be considered a harmful or beneficial use of the technology.
Equitability: Quantum computing developers and users ensure that the technology is equitable by design, and that quantum computing-based technologies are fairly and evenly distributed insofar as possible. Particular consideration is given to any specific needs of vulnerable populations to ensure equitability.
Non-maleficence: All stakeholders use quantum computing in a safe, ethical and responsible manner. Furthermore, all stakeholders ensure quantum computing does not put humans at risk of harm, either in the intended or unintended outcomes of its use, and that it is not used for nefarious purposes.
Accessibility: Quantum computing technology and knowledge are actively made widely accessible. This includes the development, deployment and use of the technology. The aim is to cultivate a general ability among the population, societal actors, corporations and governments to understand the main principles of quantum computing, the ways in which it differs from classical computing and the potential it brings.
Transparency: Users, developers and regulators are transparent about their purpose and intentions with regard to quantum computing.
“Governments and industries are accelerating their investments in quantum computing research and development worldwide,” said Derek O’Halloran, Head of Digital Economy, World Economic Forum. “This report starts the conversation that will help us understand the opportunities, set the premise for ethical guidelines, and pre-empt socioeconomic, political and legal risks well ahead of global deployment.”
The Quantum Computing Governance Principles is an initiative of the World Economic Forum’s Quantum Computing Network, a multi-stakeholder initiative focused on accelerating responsible quantum computing.
Next steps for the Quantum Computing Governance Initiative will be to work with wider stakeholder groups to adopt these principles as part of broader governance frameworks and policy approaches. With this framework, business and investment communities, along with policy makers and academia, will be better equipped to adapt to the coming paradigm shift. Ultimately, everyone will be better prepared to harness the transformative capabilities of quantum sciences – perhaps the most exciting emergent technology of the 21st century.
Closing the Cyber Gap: Business and Security Leaders at Crossroads as Cybercrime Spikes
The global digital economy has surged off the back of the COVID-19 pandemic, but so has cybercrime – ransomware attacks rose 151% in 2021. There were on average 270 cyberattacks per organization during 2021, a 31% increase on 2020, with each successful cyber breach costing a company $3.6m. After a breach becomes public, the average share price of the hacked company underperforms the NASDAQ by 3%, even six months after the event.
According to the World Economic Forum’s new annual report, The Global Cybersecurity Outlook 2022, 80% of cyber leaders now consider ransomware a “danger” and “threat” to public safety and there is a large perception gap between business executives who think their companies are secure and security leaders who disagree.
While 92% of business executives surveyed agree that cyber resilience is integrated into enterprise risk-management strategies, only 55% of cyber leaders surveyed agree. This gap between leaders can leave firms vulnerable to attacks as a direct result of incongruous security priorities and policies.
Even after a threat is detected, our survey, conducted in collaboration with Accenture, found that nearly two-thirds of organizations would find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team. Perhaps even more troubling is the finding that companies need 280 days on average to identify and respond to a cyberattack. To put this into perspective, an incident which occurs on 1 January may not be fully contained until 8 October.
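The 1 January to 8 October figure is simple calendar arithmetic on the 280-day average; a quick check (assuming a non-leap year such as 2022, the year of the report):

```python
from datetime import date, timedelta

# An incident detected-and-contained cycle of 280 days, starting 1 January.
incident = date(2022, 1, 1)
contained = incident + timedelta(days=280)
print(contained)  # 2022-10-08
```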
“Companies must now embrace cyber resilience – not only defending against cyberattacks but also preparing for swift and timely incident response and recovery when an attack does occur,” said Jeremy Jurgens, Managing Director at the World Economic Forum.
“Organizations need to work more closely with ecosystem partners and other third parties to make cybersecurity part of an organization’s ecosystem DNA, so they can be resilient and promote customer trust,” said Julie Sweet, Chair and CEO, Accenture. “This report underscores key challenges leaders face – collaborating with ecosystem partners and retaining and recruiting talent. We are proud to work with the World Economic Forum on this important topic because cybersecurity impacts every organization at all levels.”
Chief Cybersecurity Officers kept up at night by three things
Less than one-fifth of cyber leaders feel confident their organizations are cyber resilient. Three major concerns keep them awake at night:
– They don’t feel consulted on business decisions, and they struggle to gain the support of decision-makers in prioritizing cyber risks – 7 in 10 see cyber resilience featuring prominently in corporate risk management
– Recruiting and retaining the right talent is their greatest concern – 6 in 10 think it would be challenging to respond to a cybersecurity incident because they lack the skills within their team
– Nearly 9 in 10 see SMEs as the weakest link in the supply chain – 40% of respondents have been negatively affected by a supply chain cybersecurity incident
Training and closing the cyber gap are key solutions
Solutions include employee cyber training, offline backups, cyber insurance and platform-based cybersecurity solutions that stop known ransomware threats across all attack vectors.
Above all, there is an urgent need to close the gap of understanding between business and security leaders. It is impossible to attain complete cybersecurity, so the key objective must be to reinforce cyber resilience.
Including cyber leaders in the corporate governance process will help close this gap.
Ethical aspects relating to cyberspace: Self-regulation and codes of conduct
Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.
In practice, however, the various attempts to legislate on virtual communication face great difficulties due to the imperfection of modern information law. Moreover, since the Internet community is based on an internal “anarchist” ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network, the only effective regulator can be a voluntarily and consciously accepted Internet ethics, based on each individual’s awareness of moral responsibility for what happens in cyberspace.
At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory provisions at the political, legal, technical or economic levels. It is also up to ethics to check the meaning, lawfulness and legitimacy of the remaining regulatory means. The legal provisions themselves, supported by the force of state influence, are developed or – at least, ideally – should be implemented on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case – at least until an “ideal” legislation is devised that does not contradict morality in any way. Therefore, an ethical justification and an equal scrutiny of legislative and disciplinary acts in relation to both IT and computer technology are necessary.
In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law rests on human information rights. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This allows various state structures to introduce restrictions on information, justifying them with noble aims such as the need to protect national security.
It should be stressed that information legislation (like any other in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of the various social groups. Therefore, there are no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and the attempts at restricting this freedom in one way or another, on the other.
Therefore, various actors place extremely contradictory demands on modern information law, which are not easy to reconcile. Information law should simultaneously protect the right to free reception of information and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote the public accessibility of the information created, while protecting copyright – even if this impinges on the universal principle of knowledge sharing.
The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.
Various national and international public organisations, professionals and voluntary users’ associations define their own operating principles in a virtual environment. These principles are very often formalised in codes of conduct, aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies, and thus at achieving a certain degree of autonomy for the web community, at least in purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of legal provisions, which means that they cannot serve as a basis for imposing disciplinary, administrative or any other liability measures on offenders. They are therefore observed solely through the goodwill of the community members who have adopted them, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes therefore act as one of the moral self-regulating mechanisms of the web community.
The cyberspace codes of ethics provide the basic moral guidelines that should direct information activities. They specify how the principles of general theoretical ethics are reflected in a virtual environment. They contain criteria for recognising a given act as ethical or unethical, and they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics in the form of provisions, authorisations, bans, etc., in many respects represent the formalisation and systematisation of the unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the last thirty years of the Internet.
At the same time, the provisions of codes of ethics must be thoroughly considered and judged: by their very nature, codes of ethics are conventional, always the result of a mutual agreement among the relevant members of a given social group – otherwise they are reduced to a formal, sectorial statement, divorced from life and without binding force.
Despite their diversity, owing to the variety of the net’s functions and the heterogeneity of its audience, a comparison of the most significant codes of ethics on the Internet reveals a number of common principles. Apparently, these principles are in one way or another shared by all members of the Internet community, which means that they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the principle of inviolability of intellectual property; the principle of no harm; and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA”: Privacy, Accuracy, Property and Accessibility) formulated by Richard Mason in his article “Four Ethical Issues of the Information Age” (MIS Quarterly, March 1986).
Obviously, the presence of even a very well-written code of ethics cannot ensure that all group members will act in accordance with it, because the most reliable guarantees against unethical behaviour are a person’s conscience and sense of duty, which are not always heeded. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively. The codes of ethics, however, perform a number of extremely important functions on the Internet: firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (in this case, it is not so much the ready-made code that is useful as the very experience of its development and discussion). Secondly, they can help form healthy public opinion in a virtual environment, and provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.