
Science & Technology

On the Depth, Transparency and Power of Today’s AI



Two years after our last review of the state of the art in artificial intelligence, the gap has widened between the seeming omnipotence of the “deep learning” neural network models offered by market leaders and society’s demand for “algorithmic transparency.” In this review, we probe this gap, discussing which trends and solutions could help resolve the problem and which could exacerbate it further.

1. Developments of Recent Years

First of all, what we know as strong or general AI (AGI) has become a well-established item on the global agenda. A team of Russian-speaking researchers and developers has published a book on the topic, offering a thorough analysis of the technology’s possible prospects. For the past two years, the Russian-speaking community of AGI developers has also been holding weekly open seminars.

Consciousness. One of the key problems concerning AGI is the issue of consciousness, as was outlined in our earlier review. Controversy surrounds both the very possibility of imbuing artificial systems with consciousness and the extent to which it would be prudent for humanity to do so, if it is possible at all. As Konstantin Anokhin put it at the OpenTalks.AI conference in 2018, “we must explore the issue of consciousness to prevent AI from being imbued with it.” According to the materials of a round table held at the AGIRussia seminar in 2020, one of the first requirements for the emergence of consciousness in artificial systems is the capacity for “multimodal” behaviour: integrating information from various sensory modalities (e.g., text, image, video, sound) and “grounding” it in the surrounding reality, enabling the system to construct coherent “images of the world”, just as humans do.

Multimodality. It is here that a number of promising technological breakthroughs took place in 2021. For example, having been trained on a multimodal dataset of text–image pairs, OpenAI’s DALL-E system can now generate images of various scenes from text descriptions. Meanwhile, the Codex system, also developed by OpenAI, has learnt to generate software code from an algorithm described in plain English.

Super-deep learning. The race for the “depth” of neural models, long dominated by the American giants Google, Microsoft (jointly with OpenAI) and Amazon, has now been joined by China’s tech giants Baidu, Tencent and Alibaba. In November 2021, Alibaba created the M6 multimodal network, which boasts a record number of parameters, or connections (10 trillion in total), roughly one tenth of the number of synapses in the human brain, according to the latest estimates.

Foundation models. Super-deep multimodal neural network models have been termed “foundation models.” Their potential capabilities and related threats are analysed in a detailed report prepared by the world’s leading AI specialists at Stanford University. On the one hand, the further development of these models can be seen as the nearest milestone on the path towards AGI, with the system’s intelligence increased by virtue of a growing number of parameters (eventually more than in the human brain), perceived modalities (including new modalities that are inaccessible to humans) and huge amounts of training data (more than any individual person could ever process). The latter leads some researchers to speculate that a “super-human AI” could be built on such systems in the not-too-distant future. However, serious issues remain, both those raised in the report and others discussed below.

Algorithmic transparency/opacity. The further “deepening” of deep models exacerbates the conflict between this approach and the requirements of “algorithmic transparency,” which become increasingly imperative as AI-based decision-making systems proliferate. Restrictions on the applicability of “opaque” AI in areas that concern people’s security, rights, life and health are being adopted and discussed around the world. Interestingly, such restrictions can seriously hinder the use of AI in contexts where it would be most useful, such as the ongoing COVID-19 pandemic, where it could help solve the problem of mass diagnostics amid a mounting wave of examinations and a catastrophic shortage of skilled medical personnel.

Totally-used AI. AI algorithms and applications are becoming ubiquitous, encompassing all aspects of daily life, whether movement or financial, consumer, cultural and social activities. Global corporations, and the states that exert control over them, are the ones that control and derive benefits from this massive use of AI. As we have argued earlier, the planet’s digitally active population is divided into unequal spheres of influence between American (Google, Facebook, Microsoft, Apple, Amazon) and Chinese (Baidu, Tencent, Alibaba) corporations. Objectively, any manipulation on the part of these corporations and states, which seek to maximize the profits of majority shareholders while preserving the power of the ruling elites, will only increase as the AI power at their disposal grows. It is symptomatic that OpenAI, initially conceived as an open, public-oriented project, has shifted to closed source and is becoming ever more financially dependent on Microsoft.

Energy (in)efficiency. As with cryptocurrency mining, which has long drawn criticism for its detrimental environmental impact, the power consumption of “super-deep” learning systems and the associated carbon footprint are becoming another matter of concern. Notably, the paper presenting the latest results of the OpenAI Codex system, developed jointly with Microsoft, devotes a separate section to the environmental impact of the technology. Given that the number of parameters in the largest neural network models is still several orders of magnitude smaller than the number of synapses in the human brain, growth in the number of such models and their parameters will lead to an exponential increase in their negative impact on the environment. The efficiency of the human brain, which consumes immeasurably less energy for the same number of parameters, remains unattainable for existing computing architectures.

Militarization. With no significant progress towards an international ban on Lethal Autonomous Weapons Systems (LAWS), such systems are already being employed by special services. As the successful use of attack drones has become a decisive factor in local military conflicts, wider use of autonomous systems in military operations may become a reality in the near future, especially since human pilots are no longer able to compete with AI systems in simulated air battles. The poor explainability and predictability of such systems’ behaviour at a time of their proliferation, and possible expansion into space, needs no further comment. Unfortunately, aggravated strategic competition between world leaders in both the AI and arms races leaves little hope for consensus, as was stated in the Aigents review back in 2020.

2. Prospects for Development

Given the insights above, we shall briefly discuss the possible “growth zones,” including those where further development is critical.

What can the “depth” reveal? As expert discussions have shown, such as the September 2021 workshop with leading computational linguists from Sberbank and Google, “there is no intelligence there,” to quote one of the participants. The deepest neural network models are essentially high-performance, high-cost associative memory devices, albeit ones operating at speeds and information volumes that exceed human capabilities in many applications. By themselves, however, they fail to adapt to new environmental conditions unless manually tuned, and they are unable to generate new knowledge by identifying phenomena in the environment and connecting them into causal models of the world, let alone share such knowledge with other constituents of the environment, be they people or other similar systems.

Can parameters be reduced to synapses? Traditionally, the power of “deep” neural models is compared to the resources of the human brain by counting neural network parameters, on the assumption that each parameter corresponds to a synapse between biological neurons, as in the classical graph model of the human brain connectome that neural networks have reproduced since the invention of the perceptron over 60 years ago. However, this leaves out of account the ability of dendritic branches to process information independently, the hypergraph and metagraph structures of axons and dendrites, the possibility of different neurotransmitters acting in the same synapse, and the interference effects of neurotransmitters from different axons in receptive clusters. Failing to reflect even one of these factors in full means that the complexity and capacity of existing “super-deep” neural network models fall many orders of magnitude short of the actual human brain, which in turn calls into question the fitness of their architectures for reproducing human intelligence “in silico”.

From “explainability” to interpretability. Although developments in Explainable AI make it possible to “close” legal problems in cases related to the protection of civil rights, allowing companies to generate reasonably satisfactory “explanations” where the law requires them, the problem as a whole cannot be considered solved. It remains an open question whether trained models can be “interpreted” before they are put to use, so as to avoid situations where belated “explanations” can no longer bring back human lives. In this regard, the development of hybrid neuro-symbolic architectures, both “vertical” and “horizontal,” appears promising. A vertical neuro-symbolic architecture employs artificial neural networks at the “lower levels” for low-level processing of input signals (for example, audio and video), while using “symbolic” systems based on probabilistic logic (Evgenii Vityaev’s Discovery, Ben Goertzel’s OpenCog) or non-axiomatic logic (Pei Wang’s NARS) for high-level processing of behavioural patterns and decision-making. A horizontal neuro-symbolic architecture implies that the same knowledge can be represented either in a neural network implementing an intuitive approach (what Daniel Kahneman calls System 1) or in a logical system (System 2) operating on the abovementioned probabilistic or non-axiomatic logic. It is assumed that “models” implicitly learned and implicitly applied in the former can be transformed into “knowledge” explicitly deduced and analysed in the latter, and that both systems can act independently on an adversarial basis, sharing their “experience” with each other in the process of continuous learning.
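The “vertical” arrangement described above can be illustrated with a minimal sketch: a stubbed-out neural stage produces symbol/probability pairs, and an explicit rule layer turns them into an auditable decision. All names and rules here are hypothetical illustrations, not any of the cited systems (Discovery, OpenCog, NARS).

```python
# Minimal sketch of a "vertical" neuro-symbolic pipeline.
# The neural stage is a stub standing in for a trained network.

def neural_perception(frame):
    """Stand-in for the low-level neural stage: maps raw input to
    symbol/probability pairs, as a trained vision model would."""
    # A real system would run a network here; we return fixed scores.
    return {"pedestrian": 0.92, "green_light": 0.15}

RULES = [
    # (premise symbol, minimum probability, conclusion)
    ("pedestrian", 0.5, "brake"),
    ("green_light", 0.5, "proceed"),
]

def symbolic_decision(percepts):
    """High-level stage: explicit, inspectable rules applied to the
    neural output -- the step where an interpretable trace appears."""
    fired = [(sym, p, act) for sym, thr, act in RULES
             if (p := percepts.get(sym, 0.0)) >= thr]
    # Choose the conclusion whose premise the network trusts most.
    return max(fired, key=lambda t: t[1])[2] if fired else "no_action"

percepts = neural_perception(frame=None)
print(symbolic_decision(percepts))  # -> brake
```

Because the decision is taken by the rule layer, one can list exactly which rules fired and why, before deployment rather than after the fact.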

Can ethics be formalized? As the ethics of applying AI in various fields is increasingly discussed at governmental and intergovernmental levels, it is becoming apparent that the related legislation has certain national peculiarities, first of all in the United States, the European Union, China, Russia and India. Research shows significant differences in the intuitive understanding of ethics among people belonging to different cultures. Asimov’s Three Laws of Robotics prove particularly useless, since in critical situations people must choose whether (and how) their action or inaction will harm some in favour of others. If AI systems continue to be applied, as they already are in transport, in fields where automated decisions lead to the death and injury of some in favour of others, legislation on such systems will inevitably develop, reflecting different ethical norms across countries. AI developers working in international markets will then have to adapt to local laws on AI ethics, just as is now happening with personal data processing, where IT companies must comply with the legislation of each individual country.

3. Further Steps

From a humanitarian perspective, it seems necessary to intensify cooperation between the states leading the AI and arms races (Russia, the United States and China) within the UN framework in order to effect a complete ban on the development, deployment and use of Lethal Autonomous Weapon Systems (LAWS).

When entering international markets, developers of universal general AI systems will have to ensure that their decision-making systems can be pre-configured to account for the ethical norms and cultural patterns of the target markets. This could be done, for example, by embedding the “core values” of the target market on top of the fundamental layer of the “knowledge graph” when implementing systems based on “interpretable AI.”
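One way to read this proposal is as a layered configuration: a universal base layer of values shared across markets, with market-specific overrides applied on top. The sketch below is purely illustrative; the value names and weights are invented for the example and do not come from any real legislation or knowledge graph.

```python
# Hypothetical sketch of pre-configuring "core values" per target market
# on top of a shared base layer (all names and weights are illustrative).

UNIVERSAL_LAYER = {"do_no_harm": 1.0, "honesty": 1.0}  # shared foundation

MARKET_VALUES = {  # market-specific overrides stacked on the base layer
    "EU": {"privacy": 0.9, "transparency": 0.9},
    "US": {"free_expression": 0.9, "privacy": 0.6},
}

def core_values(market):
    """Merge the universal layer with the target market's overrides;
    unknown markets fall back to the universal layer alone."""
    values = dict(UNIVERSAL_LAYER)
    values.update(MARKET_VALUES.get(market, {}))
    return values

print(core_values("EU"))
```

The point of the layering is that the interpretable decision layer can consult one merged value table per deployment, rather than branching on jurisdiction throughout the codebase.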

Russia cannot aspire to global leadership given its current lag in the development and deployment of “super-deep” neural network models. The country needs to close the gap on the leaders (the United States and China) by bringing its own software developments to the table, as well as its own computing equipment and data for training AI models.

However, bearing in mind the fundamental problems, limitations and opportunities identified above, there may still be potential for a breakthrough in the field of interpretable AI and hybrid neuro-symbolic architectures, where Russia’s mathematical school remains a leader, as demonstrated by the Springer prize for best cognitive architecture awarded to a group of researchers from Novosibirsk at the AGI 2020 International Conference on Artificial General Intelligence. In terms of practical applicability, this area is roughly where deep neural network models were some 10–15 years ago; however, any delay in its practical development could lead to a strategic lag.

Finally, the AGI 2022 conference, expected to take place in St. Petersburg next year, will offer an additional opportunity to dive into the problems and solutions in the field of strong or general AI, and it certainly deserves the attention of all those interested in the topic.

From our partner RIAC


First Quantum Computing Guidelines Launched as Investment Booms



National governments have invested over $25 billion in quantum computing research, and over $1 billion in venture capital deals closed in the past year, more than in the past three years combined. Quantum computing promises to disrupt the future of business, science, government and society itself, but an equitable governance framework is crucial to address future risks.

A new Insight Report released today at the World Economic Forum Annual Meeting 2022 provides a roadmap for these emerging opportunities across the public and private sectors. The principles have been co-designed by a global multistakeholder community of quantum experts, experts in emerging technology ethics and law, decision-makers and policy-makers, social scientists and academics.

“The critical opportunity at the dawn of this historic transformation is to address ethical, societal and legal concerns well before commercialization,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “This report represents an early intervention and the beginning of a multi-disciplinary, global conversation that will guide the development of quantum computing to the benefit of all society.”

“Quantum computing holds the potential to help solve some of society’s greatest challenges, and IBM has been at the forefront of bringing quantum hardware and software to communities of discovery worldwide,” said Dr. Heike Riel, IBM Fellow, Head of Science and Technology and Lead, Quantum, IBM Research Europe. “This report is a key step in initiating the discussion around how quantum computing should be shaped and governed, for the benefit of all.”

Professor Bronwyn Fox, Chief Scientist at CSIRO, Australia’s national science agency, said, “the Principles reflect conversations CSIRO’s scientists have had with partners from around the world who share an ambition for a responsible quantum future. Embedding responsible innovation in quantum computing is key to its successful deployment and uptake for generations to come. CSIRO is committed to ensuring these Principles are used to support a strong quantum industry in Australia and generate significant social and public good.”

In adapting to the coming hybrid model of classical, multi-cloud and, soon, quantum computing, the Forum’s framework establishes best-practice principles and core values. These guidelines lay the foundation for a new information-processing paradigm while ensuring stakeholder equity, risk mitigation and consumer benefit.

The governance principles are grouped into nine themes and underpinned by a set of seven core values. The themes and their respective goals are:

1. Transformative capabilities: Harness the transformative capabilities of this technology and the applications for the good of humanity while managing the risks appropriately.

2. Access to hardware infrastructure: Ensure wide access to quantum computing hardware.

3. Open innovation: Encourage collaboration and a precompetitive environment, enabling faster development of the technology and the realization of its applications.

4. Creating awareness: Ensure the general population and quantum computing stakeholders are aware, engaged and sufficiently informed to enable ongoing responsible dialogue and communication; stakeholders with oversight and authority should be able to make informed decisions about quantum computing in their respective domains.

5. Workforce development and capability-building: Build and sustain a quantum-ready workforce.

6. Cybersecurity: Ensure the transition to a quantum-secure digital world.

7. Privacy: Mitigate potential data-privacy violations through theft and processing by quantum computers.

8. Standardization: Promote standards and road-mapping mechanisms to accelerate the development of the technology.

9. Sustainability: Develop a sustainable future with and for quantum computing technology.

Quantum computing core values that hold across the themes and principles:

Common good: The transformative capabilities of quantum computing and its applications are harnessed to ensure they will be used to benefit humanity.

Accountability: Use of quantum computing in any context has mechanisms in place to ensure human accountability, both in its design and in its uses and outcomes. All stakeholders in the quantum computing community are responsible for ensuring that the intentional misuse of quantum computing for harmful purposes is not accepted or inadvertently positively sanctioned.

Inclusiveness: In the development of quantum computing, insofar as possible, a broad and truly diverse range of stakeholder perspectives are engaged in meaningful dialogue to avoid narrow definitions of what may be considered a harmful or beneficial use of the technology.

Equitability: Quantum computing developers and users ensure that the technology is equitable by design, and that quantum computing-based technologies are fairly and evenly distributed insofar as possible. Particular consideration is given to any specific needs of vulnerable populations to ensure equitability.

Non-maleficence: All stakeholders use quantum computing in a safe, ethical and responsible manner. Furthermore, all stakeholders ensure quantum computing does not put humans at risk of harm, either in the intended or unintended outcomes of its use, and that it is not used for nefarious purposes.

Accessibility: Quantum computing technology and knowledge are actively made widely accessible. This includes the development, deployment and use of the technology. The aim is to cultivate a general ability among the population, societal actors, corporations and governments to understand the main principles of quantum computing, the ways in which it differs from classical computing and the potential it brings.

Transparency: Users, developers and regulators are transparent about their purpose and intentions with regard to quantum computing.

“Governments and industries are accelerating their investments in quantum computing research and development worldwide,” said Derek O’Halloran, Head of Digital Economy, World Economic Forum. “This report starts the conversation that will help us understand the opportunities, set the premise for ethical guidelines, and pre-empt socioeconomic, political and legal risks well ahead of global deployment.”

The Quantum Computing Governance Principles is an initiative of the World Economic Forum’s Quantum Computing Network, a multi-stakeholder initiative focused on accelerating responsible quantum computing.

Next steps for the Quantum Computing Governance Initiative will be to work with wider stakeholder groups to adopt these principles as part of broader governance frameworks and policy approaches. With this framework, the business and investment communities, along with policy-makers and academia, will be better equipped to adapt to the coming paradigm shift. Ultimately, everyone will be better prepared to harness the transformative capabilities of quantum science – perhaps the most exciting emerging technology of the 21st century.


Closing the Cyber Gap: Business and Security Leaders at Crossroads as Cybercrime Spikes



The global digital economy has surged on the back of the COVID-19 pandemic, but so has cybercrime: ransomware attacks rose 151% in 2021. There were on average 270 cyberattacks per organization in 2021, a 31% increase on 2020, with each successful breach costing a company $3.6m. After a breach becomes public, the hacked company’s average share price underperforms the NASDAQ by 3%, even six months after the event.

According to the World Economic Forum’s new annual report, The Global Cybersecurity Outlook 2022, 80% of cyber leaders now consider ransomware a “danger” and “threat” to public safety, and there is a large perception gap between business executives, who think their companies are secure, and security leaders, who disagree.

While some 92% of business executives surveyed agree that cyber resilience is integrated into enterprise risk-management strategies, only 55% of cyber leaders surveyed agree. This gap between leaders can leave firms vulnerable to attacks as a direct result of incongruous security priorities and policies.

Even after a threat is detected, the survey, written in collaboration with Accenture, found that nearly two-thirds of respondents would find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team. Perhaps even more troubling is the growing trend for companies to need 280 days on average to identify and respond to a cyberattack. To put this into perspective, an incident occurring on 1 January may not be fully contained until 8 October.
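The 280-day figure above can be checked with simple date arithmetic (using 2022, a non-leap year, as the example):

```python
# Verify the containment window: 1 January plus 280 days lands on 8 October.
from datetime import date, timedelta

incident = date(2022, 1, 1)
contained = incident + timedelta(days=280)
print(contained)  # 2022-10-08
```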

“Companies must now embrace cyber resilience – not only defending against cyberattacks but also preparing for swift and timely incident response and recovery when an attack does occur,” said Jeremy Jurgens, Managing Director at the World Economic Forum.

“Organizations need to work more closely with ecosystem partners and other third parties to make cybersecurity part of an organization’s ecosystem DNA, so they can be resilient and promote customer trust,” said Julie Sweet, Chair and CEO, Accenture. “This report underscores key challenges leaders face – collaborating with ecosystem partners and retaining and recruiting talent. We are proud to work with the World Economic Forum on this important topic because cybersecurity impacts every organization at all levels.”

Chief Cybersecurity Officers kept up at night by three things

Less than one-fifth of cyber leaders feel confident their organizations are cyber resilient. Three major concerns keep them awake at night:

– They don’t feel consulted on business decisions, and they struggle to gain the support of decision-makers in prioritizing cyber risks – 7 in 10 see cyber resilience featuring prominently in corporate risk management

– Recruiting and retaining the right talent is their greatest concern – 6 in 10 think it would be challenging to respond to a cybersecurity incident because they lack the skills within their team

– Nearly 9 in 10 see SMEs as the weakest link in the supply chain – 40% of respondents have been negatively affected by a supply chain cybersecurity incident

Training and closing the cyber gap are key solutions

Solutions include employee cyber training, offline backups, cyber insurance and platform-based cybersecurity solutions that stop known ransomware threats across all attack vectors.

Above all, there is an urgent need to close the gap of understanding between business and security leaders. It is impossible to attain complete cybersecurity, so the key objective must be to reinforce cyber resilience.

Including cyber leaders in the corporate governance process will help close this gap.


Ethical aspects relating to cyberspace: Self-regulation and codes of conduct



Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.

In practice, however, the various attempts to legislate on virtual communication face great difficulties owing to the imperfection of modern information law. Moreover, since the Internet community rests on an internal “anarchist” ideology, it shows significant resistance to government regulation, believing that in a cross-border environment such as the global network, the only effective regulator can be voluntarily and consciously accepted intranet ethics, based on the individual’s awareness of moral responsibility for what happens in cyberspace.

At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by other regulatory means at the political, legal, technical or economic levels. It is also up to ethics to check the meaning, lawfulness and legitimacy of those other regulatory means. The legal provisions themselves, backed by the force of state influence, are developed, or at least ideally should be implemented, on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case, at least until an “ideal” legislation is devised that does not contradict morality in any way. An ethical justification and equal scrutiny of legislative and disciplinary acts relating to both IT and computer technology are therefore necessary.

In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law rests on human rights to information. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This enables some state structures to introduce various restrictions on information, justifying them with noble aims such as the need to implement the concept of national security.

It should be stressed that information legislation (like any other) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of various social groups. There are therefore no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and attempts to restrict this freedom in one way or another, on the other.

Therefore, different subjects place extremely contradictory demands on modern information law, which are not easy to reconcile. Information law should simultaneously protect the right to free receipt of information and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote the public accessibility of created information while protecting copyright, even where this impinges on the universal principle of knowledge sharing.

The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.

Various national and international public organisations, professionals and voluntary users’ associations define their own principles of operation in the virtual environment. These principles are very often formalised in codes of conduct aimed at minimising the potentially dangerous moral and social consequences of information technologies and thus at achieving a certain degree of autonomy for the web community, at least in purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. They do not have the status of legal provisions, which means they cannot serve as a basis for imposing disciplinary, administrative or other liability measures on offenders. They are instead observed by the community members who have adopted them solely out of goodwill, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes thus act as one of the moral self-regulating mechanisms of the web community.

The cyberspace codes of ethics provide the basic moral guidelines that should govern information activities. They specify the principles of general theoretical ethics as reflected in the virtual environment; they contain criteria for recognising a given act as ethical or unethical; and they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics, in the form of provisions, authorisations, bans and so on, in many respects represent the formalisation and systematisation of unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the Internet’s last thirty years.

At the same time, the provisions of codes of ethics must be thoroughly considered and judged: by their very nature, codes of ethics are conventional and are always the result of a mutual agreement among the relevant members of a given social group. Otherwise, they are simply reduced to a formal, sectoral statement, divorced from life and binding on no one.

Despite their diversity, owing to the variety of the net’s functions and the heterogeneity of its audience, a comparison of the most significant codes of ethics on the Internet reveals a number of common principles. Apparently, these principles are in one way or another shared by all Internet community members, which means they underpin the ethos of cyberspace. They include the principles of accessibility, confidentiality and quality of information; the principle of the inviolability of intellectual property; the principle of no harm; and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA”: Privacy, Accuracy, Property and Accessibility) formulated by Richard Mason in his article “Four Ethical Issues of the Information Age” (MIS Quarterly, March 1986).

The presence of even a very well-written code of ethics obviously cannot ensure that all group members will act in accordance with it, because a person’s most reliable guarantees against unethical behaviour are his or her conscience and sense of duty, which are not always heeded. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively. Codes of ethics nevertheless perform a number of extremely important functions on the Internet. Firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (here it is not so much the ready-made code that is useful as the very experience of its development and discussion). Secondly, they can form a healthy public opinion in the virtual environment and provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.
