The current COVID-19 pandemic has spurred the introduction of artificial intelligence (AI) into all spheres of human life. Faced with the urgent need for new algorithms, government agencies and private companies have often rolled out half-baked AI technologies, refining them as they go. The year 2021 will clearly expand AI's role in society and significantly change people's lifestyles and the work of various institutions.
First off, let us figure out what the term "artificial intelligence" really means. In a nutshell, it is the ability of a digital computer or a computer-controlled robot to perform tasks usually associated with intelligent beings. In philosophy and science fiction, however, AI often refers to a program capable of thinking as humans do, of creating and feeling. Yet while humans have yet to come up with a fully electronic replica of themselves, in some areas AI has already mastered the art of creation.
That being said, AI as we know it is not yet capable of creating or thinking in a human way. It is essentially an algorithm that solves problems strictly as instructed by a human being. The tasks assigned to AI are extremely varied, from "smart" databases to self-driving cars. In the 21st century, progress in computer programming and high technology, along with the need to process massive amounts of information, has significantly increased the role of artificial intelligence in helping people cope, bringing about such notions as "digital government," "digital democracy" and "smart home." In 2020, however, demand for AI has been especially high for a number of reasons.
The advent of the COVID-19 pandemic at the start of this year caught specialists flat-footed, with little information about the nature of the disease and the methods of dealing with it. Moreover, people were no longer able to pool their knowledge and act as a united front without the help of machines. By April 2020, more than 30,000 scientific articles on COVID-19 had already been published – a mass of information that is physically impossible for a person to analyze. This is when special search engines based on natural language processing algorithms came to the aid of doctors, helping them analyze new data and isolate the most important findings. The use of new technologies and rapid data exchange helped achieve significant success in the development of effective treatment algorithms and vaccines.
However, the pandemic laid bare the "dark" side of AI, finally turning it into a "Big Brother" watching over citizens. Countries around the world have been using various tracking technologies to prevent violations of quarantine measures – tracking drones, numerous smartphone apps and smart networks of surveillance cameras to identify violators – even though hastily introduced AI-based programs have erred, malfunctioned and fined people who did not commit any violations.
China is the hands-down leader when it comes to exercising control over citizens with the help of AI, with a long history of introducing systems of social monitoring, all the way up to the creation of personal ratings. At the end of 2019, free European media described this practice as "an electronic concentration camp," only to adopt it themselves just a few months later. In terms of AI development, China is currently second only to the United States, but intends to catch up by 2030. Trailing immediately behind China are Britain, Canada, India and Israel, though on a less ambitious scale so far. Meanwhile, in terms of the number of AI startups and projects per one million people, Sweden and Finland lead the pack.
One of Beijing's biggest breakthroughs in the use of artificial intelligence is the creation of a one-of-a-kind "social trust system" (or "social credit") – a personal rating of every Chinese citizen, based on a huge array of data about him or her, down to the time one has spent playing computer games. Despite the "undemocratic" nature of such surveillance, it is still very much in demand in the "free West." After all, a citizen's personal rating reflects not only one's loyalty to the authorities, but also one's creditworthiness, lifestyle choices that affect one's health, job performance and other information that may be of interest to banks, retailers and insurance companies. To one degree or another, such a rating is being implemented all over the world in the form of electronic health cards integrated with various tracking applications, credit ratings that scrupulously collect data on citizens' financial records, etc.
In fact, AI is already watching us and can quickly summarize data. By analyzing the number of queries containing the keywords "taste loss" and "smell loss" – a simple and efficient algorithm developed by search engines – the system assesses the real incidence of coronavirus infections.
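The idea described above – inferring infection trends from symptom-related search volume – can be sketched in a few lines. This is a minimal illustration, not the search engines' actual algorithm; the function name, the made-up daily counts and the baseline value are all hypothetical:

```python
# Hypothetical sketch: estimating relative infection trends from
# daily counts of symptom-related search queries ("taste loss",
# "smell loss"). All names and numbers here are illustrative.

def estimate_relative_incidence(query_counts, baseline):
    """Return each day's symptom-query volume as a percentage
    change against a pre-pandemic baseline."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return [round(100 * (c - baseline) / baseline, 1) for c in query_counts]

# Made-up combined daily counts for "taste loss" + "smell loss"
daily_queries = [120, 180, 300, 450]
trend = estimate_relative_incidence(daily_queries, baseline=100)
print(trend)  # [20.0, 80.0, 200.0, 350.0]
```

A rising curve of this kind would flag a growing outbreak days before official case counts catch up, which is precisely what makes query analysis attractive to epidemiologists.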
This personalized analysis of search queries proved extremely important in the outgoing year of 2020, contributing to an over 40 percent spike in the number of online sales.
Demand for targeted advertising based on such requests has grown exponentially among online stores and various platforms, persuading its creators to focus more on the individual preferences of each potential client, including an analysis of every detail of his or her request.
Finally, in a matter of just a few months, AI's role in education has gone through the roof, with the mandatory transition to distance learning forcing many educational institutions to speedily implement programs for testing students and monitoring their attendance and even their behavior during exams. Computer algorithms are learning to detect cheating and prompting during an examination by analyzing tilted heads, extraneous sounds and averted eyes. Algorithms also help teachers track their students' activity during classes, and it would be naive to assume that AI will disappear from schools and universities once the quarantine is over. On the contrary, the education system, which has already been actively going online thanks to Coursera, Skillbox and other network resources, will make a quantum leap in this direction. Here, AI is head and shoulders above the public administration systems of many countries, which have not yet learned to include in students' diplomas information about additional skills picked up from extramural online courses.
Naturally enough, the coming year will see even wider use of artificial intelligence in all spheres of human life, including social monitoring, health care and education. Next year, the new algorithms developed in 2020 amid the COVID-19 pandemic will be further refined and used more efficiently.
The system of social control and social rating will be adopted, to one degree or another, even by the world's most liberal democracies. The problem, however, is that storing all information about a person in one hypothetical database to make his or her life easier makes that database extremely vulnerable to hackers, who can obtain full information about a person's movements, addictions and expenses. There is also the risk of the human factor, which AI may not be able to guard against – when, for example, corrupt employees gain access to such massive volumes of information. Still, the development of new, more advanced AI algorithms ensuring information security and preventing, among other things, crimes based on social engineering is sure to become a major priority for the developers of new technologies.
AI's ability to process huge amounts of data and control all areas of human life keeps growing. Our life is getting more and more comfortable, but our personal space ends the moment we pick up a smartphone. At the same time, AI's contribution to the fight against the pandemic, and its ability to forecast the next threat to humanity, is now a factor that may further restrict a person's free access to information, thus turning him or her into just one of billions of folders of detailed personal data lost in the endless sea of information out there…
From our partner International Affairs
First Quantum Computing Guidelines Launched as Investment Booms
National governments have invested over $25 billion into quantum computing research and over $1 billion in venture capital deals have closed in the past year – more than the past three years combined. Quantum computing promises to disrupt the future of business, science, government, and society itself, but an equitable framework is crucial to address future risks.
A new Insight Report released today at the World Economic Forum Annual Meeting 2022 provides a roadmap for these emerging opportunities across public and private sectors. The principles have been co-designed by a global multistakeholder community composed of quantum experts, emerging technology ethics and law experts, decision makers and policy makers, social scientists and academics.
“The critical opportunity at the dawn of this historic transformation is to address ethical, societal and legal concerns well before commercialization,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “This report represents an early intervention and the beginning of a multi-disciplinary, global conversation that will guide the development of quantum computing to the benefit of all society.”
“Quantum computing holds the potential to help solve some of society’s greatest challenges, and IBM has been at the forefront of bringing quantum hardware and software to communities of discovery worldwide,” said Dr. Heike Riel, IBM Fellow, Head of Science and Technology and Lead, Quantum, IBM Research Europe. “This report is a key step in initiating the discussion around how quantum computing should be shaped and governed, for the benefit of all.”
Professor Bronwyn Fox, Chief Scientist at CSIRO, Australia’s science national agency said, “the Principles reflect conversations CSIRO’s scientists have had with partners from around the world who share an ambition for a responsible quantum future. Embedding responsible innovation in quantum computing is key to its successful deployment and uptake for generations to come. CSIRO is committed to ensuring these Principles are used to support a strong quantum industry in Australia and generate significant social and public good.”
In adapting to the coming hybrid model of classical, multi-cloud, and soon quantum computing, the Forum’s framework establishes best-practice principles and core values. These guidelines set the foundation and give rise to a new information-processing paradigm while ensuring stakeholder equity, risk mitigation, and consumer benefit.
The governance principles are grouped into nine themes and underpinned by a set of seven core values. Themes and respective goals defining the principles:
1. Transformative capabilities: Harness the transformative capabilities of this technology and the applications for the good of humanity while managing the risks appropriately.
2. Access to hardware infrastructure: Ensure wide access to quantum computing hardware.
3. Open innovation: Encourage collaboration and a precompetitive environment, enabling faster development of the technology and the realization of its applications.
4. Creating awareness: Ensure the general population and quantum computing stakeholders are aware, engaged and sufficiently informed to enable ongoing responsible dialogue and communication; stakeholders with oversight and authority should be able to make informed decisions about quantum computing in their respective domains.
5. Workforce development and capability-building: Build and sustain a quantum-ready workforce.
6. Cybersecurity: Ensure the transition to a quantum-secure digital world.
7. Privacy: Mitigate potential data-privacy violations through theft and processing by quantum computers.
8. Standardization: Promote standards and road-mapping mechanisms to accelerate the development of the technology.
9. Sustainability: Develop a sustainable future with and for quantum computing technology
Quantum computing core values that hold across the themes and principles:
Common good: The transformative capabilities of quantum computing and its applications are harnessed to ensure they will be used to benefit humanity.
Accountability: Use of quantum computing in any context has mechanisms in place to ensure human accountability, both in its design and in its uses and outcomes. All stakeholders in the quantum computing community are responsible for ensuring that the intentional misuse of quantum computing for harmful purposes is not accepted or inadvertently positively sanctioned.
Inclusiveness: In the development of quantum computing, insofar as possible, a broad and truly diverse range of stakeholder perspectives are engaged in meaningful dialogue to avoid narrow definitions of what may be considered a harmful or beneficial use of the technology.
Equitability: Quantum computing developers and users ensure that the technology is equitable by design, and that quantum computing-based technologies are fairly and evenly distributed insofar as possible. Particular consideration is given to any specific needs of vulnerable populations to ensure equitability.
Non-maleficence: All stakeholders use quantum computing in a safe, ethical and responsible manner. Furthermore, all stakeholders ensure quantum computing does not put humans at risk of harm, either in the intended or unintended outcomes of its use, and that it is not used for nefarious purposes.
Accessibility: Quantum computing technology and knowledge are actively made widely accessible. This includes the development, deployment and use of the technology. The aim is to cultivate a general ability among the population, societal actors, corporations and governments to understand the main principles of quantum computing, the ways in which it differs from classical computing and the potential it brings.
Transparency: Users, developers and regulators are transparent about their purpose and intentions with regard to quantum computing.
“Governments and industries are accelerating their investments in quantum computing research and development worldwide,” said Derek O’Halloran, Head of Digital Economy, World Economic Forum. “This report starts the conversation that will help us understand the opportunities, set the premise for ethical guidelines, and pre-empt socioeconomic, political and legal risks well ahead of global deployment.”
The Quantum Computing Governance Principles is an initiative of the World Economic Forum’s Quantum Computing Network, a multi-stakeholder initiative focused on accelerating responsible quantum computing.
Next steps for the Quantum Computing Governance Initiative will be to work with wider stakeholder groups to adopt these principles as part of broader governance frameworks and policy approaches. With this framework, business and investment communities, along with policy makers and academia, will be better equipped to adapt to the coming paradigm shift. Ultimately, everyone will be better prepared to harness the transformative capabilities of the quantum sciences – perhaps the most exciting emergent technologies of the 21st century.
Closing the Cyber Gap: Business and Security Leaders at Crossroads as Cybercrime Spikes
The global digital economy has surged off the back of the COVID-19 pandemic, but so has cybercrime – ransomware attacks rose 151% in 2021. There were on average 270 cyberattacks per organization during 2021, a 31% increase on 2020, with each successful cyber breach costing a company $3.6m. After a breach becomes public, the average share price of the hacked company underperforms the NASDAQ by 3%, even six months after the event.
According to the World Economic Forum’s new annual report, The Global Cybersecurity Outlook 2022, 80% of cyber leaders now consider ransomware a “danger” and “threat” to public safety and there is a large perception gap between business executives who think their companies are secure and security leaders who disagree.
While 92% of business executives surveyed agree that cyber resilience is integrated into enterprise risk-management strategies, only 55% of cyber leaders surveyed agree. This gap between leaders can leave firms vulnerable to attacks as a direct result of incongruous security priorities and policies.
Even after a threat is detected, our survey, written in collaboration with Accenture, found nearly two-thirds would find it challenging to respond to a cybersecurity incident due to the shortage of skills within their team. Perhaps even more troubling is the growing trend that companies need 280 days on average to identify and respond to a cyberattack. To put this into perspective, an incident which occurs on 1 January may not be fully contained until 8 October.
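The 1 January → 8 October figure quoted above is simple calendar arithmetic, and can be double-checked with Python's standard `datetime` module (the year 2022 is chosen here only as a non-leap-year example):

```python
from datetime import date, timedelta

# An incident occurring on 1 January, plus the 280-day average
# window to identify and respond to a cyberattack.
incident = date(2022, 1, 1)
contained = incident + timedelta(days=280)
print(contained.isoformat())  # 2022-10-08
```

In a non-leap year, 8 October is day 281 of the year, so an incident on day 1 plus 280 days lands exactly there, matching the report's illustration.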
“Companies must now embrace cyber resilience – not only defending against cyberattacks but also preparing for swift and timely incident response and recovery when an attack does occur,” said Jeremy Jurgens, Managing Director at the World Economic Forum.
“Organizations need to work more closely with ecosystem partners and other third parties to make cybersecurity part of an organization’s ecosystem DNA, so they can be resilient and promote customer trust,” said Julie Sweet, Chair and CEO, Accenture. “This report underscores key challenges leaders face – collaborating with ecosystem partners and retaining and recruiting talent. We are proud to work with the World Economic Forum on this important topic because cybersecurity impacts every organization at all levels.”
Chief Cybersecurity Officers kept up at night by three things
Less than one-fifth of cyber leaders feel confident their organizations are cyber resilient. Three major concerns keep them awake at night:
– They don’t feel consulted on business decisions, and they struggle to gain the support of decision-makers in prioritizing cyber risks – 7 in 10 see cyber resilience featuring prominently in corporate risk management
– Recruiting and retaining the right talent is their greatest concern – 6 in 10 think it would be challenging to respond to a cybersecurity incident because they lack the skills within their team
– Nearly 9 in 10 see SMEs as the weakest link in the supply chain – 40% of respondents have been negatively affected by a supply chain cybersecurity incident
Training and closing the cyber gap are key solutions
Solutions include employee cyber training, offline backups, cyber insurance and platform-based cybersecurity solutions that stop known ransomware threats across all attack vectors.
Above all, there is an urgent need to close the gap of understanding between business and security leaders. It is impossible to attain complete cybersecurity, so the key objective must be to reinforce cyber resilience.
Including cyber leaders in the corporate governance process will help close this gap.
Ethical aspects relating to cyberspace: Self-regulation and codes of conduct
Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.
In practice, however, the various attempts to legislate on virtual communication face great difficulties due to the imperfection of modern information law. Moreover, since the Internet community rests on an internally "anarchist" ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network, the only effective regulator can be a voluntarily and consciously accepted network ethics, based on awareness of each person's moral responsibility for what happens in cyberspace.
At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory provisions at the political, legal, technical or economic levels. It also falls to ethics to check the meaning, lawfulness and legitimacy of the remaining regulatory means. Legal provisions themselves, backed by the force of state influence, are developed – or, at least ideally, should be implemented – on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case – at least until an "ideal" legislation is devised that does not contradict morality in any way. An ethical justification and equal scrutiny of legislative and disciplinary acts relating to both IT and computer technology are therefore necessary.
In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law rests on human information rights. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This enables some state structures to introduce various restrictions on information, justifying them with noble aims such as the need to protect national security.
It should be stressed that information legislation (like any other, in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of various social groups. There are therefore no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and attempts to restrict this freedom in one way or another, on the other.
As a result, various actors make extremely contradictory demands of modern information law, which are not easy to reconcile. Information law should simultaneously protect the right to free reception of information and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote public accessibility of the information created, while protecting copyright – even if this impinges on the universal principle of knowledge sharing.
The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.
Various national and international public organisations, professionals and voluntary users' associations define their own principles of operation in the virtual environment. These principles are very often formalised in codes of conduct aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies, and thus at achieving a certain degree of autonomy for the web community, at least when it comes to purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of legal provisions, which means they cannot serve as a basis for imposing disciplinary, administrative or any other liability measures on offenders. They are therefore observed by the community members who have adopted them solely out of goodwill, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes thus act as one of the moral self-regulating mechanisms of the web community.
The cyberspace codes of ethics provide the basic moral guidelines that should steer information activities. They specify the principles of general theoretical ethics as reflected in the virtual environment. They contain criteria for recognising a given act as ethical or unethical. Finally, they provide specific recommendations on how to behave in certain situations. The rules enshrined in codes of ethics in the form of provisions, authorisations, bans, etc., represent in many respects the formalisation and systematisation of unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the last thirty years of the Internet.
At the same time, the provisions of codes of ethics must be thoroughly considered and debated – by their very nature, codes of ethics are conventional and hence always the result of a mutual agreement among the relevant members of a given social group – as otherwise they are reduced to a formal, sectorial statement divorced from life and carrying no real force.
Despite their multidirectionality – owing to the variety of the net's functions and the heterogeneity of its audience – a comparison of the most significant codes of ethics on the Internet reveals a number of common principles. Apparently, these principles are in one way or another shared by all members of the Internet community, which means they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the inviolability of intellectual property; the principle of no harm; and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics ("PAPA: Privacy, Accuracy, Property and Accessibility") formulated by Richard Mason in his article "Four Ethical Issues of the Information Age" (MIS Quarterly, March 1986).
The presence of even a very well-written code of ethics obviously cannot ensure that all group members will act in accordance with it, because – for a person – the most reliable guarantees against unethical behaviour are his or her conscience and sense of duty, which are not always heeded. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively from one another. The codes of ethics, however, perform a number of extremely important functions on the Internet: firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (in this case, it is not so much the ready-made code that is useful as the very experience of its development and discussion). Secondly, they can foster a healthy public opinion in the virtual environment and provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.