Artificial intelligence has been used in products targeting children for several years, but legislation protecting them from the potential impacts of the technology is still in its infancy. Ahead of a global forum on AI for children, UN News spoke to two UN Children’s Fund (UNICEF) experts about the need for improved policy protection.
Children are already interacting with AI technologies in many different ways: they are embedded in toys, virtual assistants, video games, and adaptive learning software. Their impact on children’s lives is profound, yet UNICEF found that, when it comes to AI policies and practices, children’s rights are an afterthought, at best.
In response, the UN children’s agency has developed draft Policy Guidance on AI for Children to promote children’s rights, and raise awareness of how AI systems can uphold or undermine these rights.
Conor Lennon from UN News asked Jasmina Byrne, Policy Chief at the UNICEF Global Insights team, and Steven Vosloo, a UNICEF data, research and policy specialist, about the importance of putting children at the centre of AI-related policies.
AI technology will fundamentally change society
Steven Vosloo At UNICEF we saw that AI was a very hot topic, and something that would fundamentally change society and the economy, particularly for the coming generations. But when we looked at national AI strategies, and corporate policies and guidelines, we realized that not enough attention was being paid to children, and to how AI impacts them.
So, we began an extensive consultation process, speaking to experts around the world, and almost 250 children, in five countries. That process led to our draft guidance document and, after we released it, we invited governments, organizations and companies to pilot it. We’re developing case studies around the guidance, so that we can share the lessons learned.
Jasmina Byrne AI has been in development for many decades. It is neither harmful nor benevolent on its own. It’s the application of these technologies that makes them either beneficial or harmful.
There are many positive applications of AI: it can be used in education for personalized learning, in healthcare, in language simulation and processing, and it is being used to support children with disabilities.
And we use it at UNICEF. For example, it helps us to predict the spread of disease, and improve poverty estimations. But there are also many risks that are associated with the use of AI technologies.
Children interact with digital technologies all the time, but they're not aware, and many adults are not aware, that many of the toys or platforms they use are powered by artificial intelligence. That's why we felt that special consideration has to be given to children, because of their particular vulnerabilities.
Privacy and the profit motive
Steven Vosloo The AI could be using natural language processing to understand words and instructions, and so it’s collecting a lot of data from that child, including intimate conversations, and that data is being stored in the cloud, often on commercial servers. So, there are privacy concerns.
We also know of instances where these types of toys were hacked, and they were banned in Germany because they were not considered safe enough.
Around a third of all online users are children. We often find that younger children are using social media platforms or video sharing platforms that weren’t designed with them in mind.
They are often designed for maximum engagement, and are built on a certain level of profiling based on data sets that may not represent children.
Predictive analytics and profiling are particularly relevant when dealing with children: AI may profile children in a way that puts them in a certain bucket, and this may determine what kind of educational opportunities they have in the future, or what benefits parents can access for children. So, the AI is not just impacting them today, but it could set their whole life course on a different direction.
Jasmina Byrne Last year this was big news in the UK. The Government used an algorithm to predict the final grades of high schoolers. Because the data fed into the algorithm was skewed towards children from private schools, the results were appalling: they discriminated against many children from minority communities. So the system had to be abandoned.
That's just one example of how algorithms based on biased data can have really negative consequences for children.
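The grading failure described above is an instance of a general pattern: a model fitted to unrepresentative data reproduces the skew of that data. A minimal sketch, using invented numbers and the simplest possible "model" (not the actual UK grading algorithm), illustrates the effect:

```python
# Illustrative only: hypothetical grade data, not the real UK algorithm.

def fit_mean_predictor(grades):
    """'Train' the simplest possible model: always predict the mean."""
    return sum(grades) / len(grades)

# Training set skewed towards one group: 8 of 10 samples come from
# well-resourced schools.
private_school = [85, 88, 90, 87, 86, 89, 84, 88]
state_school = [62, 60]
prediction = fit_mean_predictor(private_school + state_school)

print(round(prediction, 1))   # 81.9: the single prediction applied to everyone
print(sum(state_school) / 2)  # 61.0: the under-represented group's actual mean
```

The model misestimates the under-represented group by roughly 21 points, purely because that group contributed little of the training data; no malicious intent is needed for the outcome to be discriminatory.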
‘It’s a digital life now’
Steven Vosloo We really hope that our recommendations will filter down to the people who are actually writing the code. The policy guidance has been aimed at a broad audience, from the governments and policymakers who are increasingly setting strategies and beginning to think about regulating AI, to the private sector, which often develops these AI systems.
We do see competing interests: the decisions around AI systems often have to balance a profit incentive against an ethical one. What we advocate for is a commitment to responsible AI that comes from the top: not just at the level of the data scientist or software developer, but from top management and senior government ministers.
Jasmina Byrne The data footprint that children leave by using digital technology is commercialized and used by third parties for their own profit and gain. Children are often targeted by ads that are not really appropriate for them. This is something that we've been closely following and monitoring.
However, I would say that there is now more political appetite to address these issues, and we are working to get them on the agenda of policymakers.
Governments need to put children at the centre of all their policy-making around frontier digital technologies. If we don't think about them and their needs, we are missing great opportunities.
Steven Vosloo The Scottish Government released their AI strategy in March, and they officially adopted the UNICEF policy guidance on AI for children. Part of that was because the government as a whole has adopted the Convention on the Rights of the Child into law. Children's lives are not really online or offline anymore; it's a digital life now.
First Quantum Computing Guidelines Launched as Investment Booms
National governments have invested over $25 billion in quantum computing research, and over $1 billion in venture capital deals have closed in the past year – more than in the previous three years combined. Quantum computing promises to disrupt the future of business, science, government, and society itself, but an equitable framework is crucial to address future risks.
A new Insight Report released today at the World Economic Forum Annual Meeting 2022 provides a roadmap for these emerging opportunities across public and private sectors. The principles have been co-designed by a global multistakeholder community composed of quantum experts, emerging technology ethics and law experts, decision makers and policy makers, social scientists and academics.
“The critical opportunity at the dawn of this historic transformation is to address ethical, societal and legal concerns well before commercialization,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “This report represents an early intervention and the beginning of a multi-disciplinary, global conversation that will guide the development of quantum computing to the benefit of all society.”
“Quantum computing holds the potential to help solve some of society’s greatest challenges, and IBM has been at the forefront of bringing quantum hardware and software to communities of discovery worldwide,” said Dr. Heike Riel, IBM Fellow, Head of Science and Technology and Lead, Quantum, IBM Research Europe. “This report is a key step in initiating the discussion around how quantum computing should be shaped and governed, for the benefit of all.”
Professor Bronwyn Fox, Chief Scientist at CSIRO, Australia’s science national agency said, “the Principles reflect conversations CSIRO’s scientists have had with partners from around the world who share an ambition for a responsible quantum future. Embedding responsible innovation in quantum computing is key to its successful deployment and uptake for generations to come. CSIRO is committed to ensuring these Principles are used to support a strong quantum industry in Australia and generate significant social and public good.”
In adapting to the coming hybrid model of classical, multi-cloud, and soon quantum computing, the Forum’s framework establishes best-practice principles and core values. These guidelines set the foundation and give rise to a new information-processing paradigm while ensuring stakeholder equity, risk mitigation, and consumer benefit.
The governance principles are grouped into nine themes and underpinned by a set of seven core values. The themes and their respective goals are:
1. Transformative capabilities: Harness the transformative capabilities of this technology and the applications for the good of humanity while managing the risks appropriately.
2. Access to hardware infrastructure: Ensure wide access to quantum computing hardware.
3. Open innovation: Encourage collaboration and a precompetitive environment, enabling faster development of the technology and the realization of its applications.
4. Creating awareness: Ensure the general population and quantum computing stakeholders are aware, engaged and sufficiently informed to enable ongoing responsible dialogue and communication; stakeholders with oversight and authority should be able to make informed decisions about quantum computing in their respective domains.
5. Workforce development and capability-building: Build and sustain a quantum-ready workforce.
6. Cybersecurity: Ensure the transition to a quantum-secure digital world.
7. Privacy: Mitigate potential data-privacy violations through theft and processing by quantum computers.
8. Standardization: Promote standards and road-mapping mechanisms to accelerate the development of the technology.
9. Sustainability: Develop a sustainable future with and for quantum computing technology.
Quantum computing core values that hold across the themes and principles:
Common good: The transformative capabilities of quantum computing and its applications are harnessed to ensure they will be used to benefit humanity.
Accountability: Use of quantum computing in any context has mechanisms in place to ensure human accountability, both in its design and in its uses and outcomes. All stakeholders in the quantum computing community are responsible for ensuring that the intentional misuse of quantum computing for harmful purposes is not accepted or inadvertently positively sanctioned.
Inclusiveness: In the development of quantum computing, insofar as possible, a broad and truly diverse range of stakeholder perspectives are engaged in meaningful dialogue to avoid narrow definitions of what may be considered a harmful or beneficial use of the technology.
Equitability: Quantum computing developers and users ensure that the technology is equitable by design, and that quantum computing-based technologies are fairly and evenly distributed insofar as possible. Particular consideration is given to any specific needs of vulnerable populations to ensure equitability.
Non-maleficence: All stakeholders use quantum computing in a safe, ethical and responsible manner. Furthermore, all stakeholders ensure quantum computing does not put humans at risk of harm, either in the intended or unintended outcomes of its use, and that it is not used for nefarious purposes.
Accessibility: Quantum computing technology and knowledge are actively made widely accessible. This includes the development, deployment and use of the technology. The aim is to cultivate a general ability among the population, societal actors, corporations and governments to understand the main principles of quantum computing, the ways in which it differs from classical computing and the potential it brings.
Transparency: Users, developers and regulators are transparent about their purpose and intentions with regard to quantum computing.
“Governments and industries are accelerating their investments in quantum computing research and development worldwide,” said Derek O’Halloran, Head of Digital Economy, World Economic Forum. “This report starts the conversation that will help us understand the opportunities, set the premise for ethical guidelines, and pre-empt socioeconomic, political and legal risks well ahead of global deployment.”
The Quantum Computing Governance Principles is an initiative of the World Economic Forum’s Quantum Computing Network, a multi-stakeholder initiative focused on accelerating responsible quantum computing.
Next steps for the Quantum Computing Governance Initiative will be to work with wider stakeholder groups to adopt these principles as part of broader governance frameworks and policy approaches. With this framework, business and investment communities, along with policymakers and academia, will be better equipped to adapt to the coming paradigm shift. Ultimately, everyone will be better prepared to harness the transformative capabilities of quantum science – perhaps the most exciting emerging technology of the 21st century.
Closing the Cyber Gap: Business and Security Leaders at Crossroads as Cybercrime Spikes
The global digital economy has surged off the back of the COVID-19 pandemic, but so has cybercrime – ransomware attacks rose 151% in 2021. There were on average 270 cyberattacks per organization during 2021, a 31% increase on 2020, with each successful cyber breach costing a company $3.6m. After a breach becomes public, the average share price of the hacked company underperforms the NASDAQ by 3%, even six months after the event.
According to the World Economic Forum’s new annual report, The Global Cybersecurity Outlook 2022, 80% of cyber leaders now consider ransomware a “danger” and “threat” to public safety and there is a large perception gap between business executives who think their companies are secure and security leaders who disagree.
While some 92% of business executives surveyed agree that cyber resilience is integrated into enterprise risk-management strategies, only 55% of cyber leaders surveyed agree. This gap between leaders can leave firms vulnerable to attacks as a direct result of incongruous security priorities and policies.
Even after a threat is detected, our survey, written in collaboration with Accenture, found that nearly two-thirds of respondents would find it challenging to respond to a cybersecurity incident due to the shortage of skills within their team. Perhaps even more troubling is the growing trend that companies need 280 days on average to identify and respond to a cyberattack. To put this into perspective, an incident which occurs on 1 January may not be fully contained until 8 October.
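As a quick sanity check on that timeline, the 280-day window can be computed with Python's standard datetime module (using 2021, a non-leap year, as an illustrative example):

```python
from datetime import date, timedelta

incident = date(2021, 1, 1)                 # breach occurs on 1 January
contained = incident + timedelta(days=280)  # average time to identify and respond

print(contained.isoformat())  # 2021-10-08
```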
“Companies must now embrace cyber resilience – not only defending against cyberattacks but also preparing for swift and timely incident response and recovery when an attack does occur,” said Jeremy Jurgens, Managing Director at the World Economic Forum.
“Organizations need to work more closely with ecosystem partners and other third parties to make cybersecurity part of an organization’s ecosystem DNA, so they can be resilient and promote customer trust,” said Julie Sweet, Chair and CEO, Accenture. “This report underscores key challenges leaders face – collaborating with ecosystem partners and retaining and recruiting talent. We are proud to work with the World Economic Forum on this important topic because cybersecurity impacts every organization at all levels.”
Chief Cybersecurity Officers kept up at night by three things
Less than one-fifth of cyber leaders feel confident their organizations are cyber resilient. Three major concerns keep them awake at night:
– They don’t feel consulted on business decisions, and they struggle to gain the support of decision-makers in prioritizing cyber risks – 7 in 10 see cyber resilience featuring prominently in corporate risk management
– Recruiting and retaining the right talent is their greatest concern – 6 in 10 think it would be challenging to respond to a cybersecurity incident because they lack the skills within their team
– Nearly 9 in 10 see SMEs as the weakest link in the supply chain – 40% of respondents have been negatively affected by a supply chain cybersecurity incident
Training and closing the cyber gap are key solutions
Solutions include employee cyber training, offline backups, cyber insurance and platform-based cybersecurity solutions that stop known ransomware threats across all attack vectors.
Above all, there is an urgent need to close the gap of understanding between business and security leaders. It is impossible to attain complete cybersecurity, so the key objective must be to reinforce cyber resilience.
Including cyber leaders in the corporate governance process will help close this gap.
Ethical aspects relating to cyberspace: Self-regulation and codes of conduct
Virtual interaction processes must be controlled in one way or another. But how, within what limits and, above all, on the basis of what principles? The proponents of the official viewpoint – supported by the strength of state structures – argue that since the Internet has a significant and not always positive impact not only on its users, but also on society as a whole, all areas of virtual interaction need to be clearly regulated through the enactment of appropriate legislation.
In practice, however, the various attempts to legislate on virtual communication face great difficulties due to the imperfection of modern information law. Moreover, considering that the Internet community is based on an internal “anarchist” ideology, it shows significant resistance to government regulations, believing that in a cross-border environment – which is the global network – the only effective regulator can be the voluntarily and consciously accepted intranet ethics based on the awareness of the individual person’s moral responsibility for what happens in cyberspace.
At the same time, the significance of moral self-regulation lies not only in the fact that it makes it possible to control areas insufficiently covered by regulatory provisions at the political, legal, technical or economic levels. It is also up to ethics to check the meaning, lawfulness and legitimacy of those other regulatory means. The legal provisions themselves, supported by the force of state influence, are developed or – at least, ideally – should be implemented on the basis of moral rules. It should be noted that, although compliance with legal provisions is regarded as the minimum requirement of morality, in reality this is not always the case – at least until an "ideal" legislation is devised that does not contradict morality in any way. Therefore, an ethical justification and scrutiny of legislative and disciplinary acts relating to both IT and computer technology are necessary.
In accordance with the deontological approach to justifying web ethics, the ethical foundation of information law rests on human rights to information. Although these rights are enshrined in various national and international legal instruments, in practice their protection is often not guaranteed by anyone. This allows some state structures to introduce various restrictions on information, justifying them with noble aims such as the need to implement the concept of national security.
It should be stressed that information legislation (like any other in general) is of a conventional nature, i.e. it is a sort of temporary compromise reached by the representatives of the various social groups. Therefore, there are no unshakable principles in this sphere: legality and illegality are defined by a dynamic balance between the desire for freedom of information, on the one hand, and the attempts at restricting this freedom in one way or another, on the other.
Therefore, different actors place extremely contradictory requirements on modern information law, and these are not easy to reconcile. Information law should simultaneously protect the right to free reception of information and the right to information security, as well as ensure privacy and prevent cybercrime. It should also promote the public accessibility of the information created, while protecting copyright – even if this impinges on the universal principle of knowledge sharing.
The principle of a reasonable balance of these often diametrically opposed aspirations, with unconditional respect for fundamental human rights, should be the basis of the international information law system.
Various national and international public organisations, professionals and voluntary users' associations define their own principles of operation in a virtual environment. These principles are very often formalised in codes of conduct, aimed at minimising the potentially dangerous moral and social consequences of the use of information technologies and thus at achieving a certain degree of autonomy for the web community, at least when it comes to purely internal problematic issues. The names of these codes do not always hint at ethics, but this does not change their essence. After all, they do not have the status of legal provisions, which means that they cannot serve as a basis for imposing disciplinary, administrative or any other liability measures on offenders. They are instead observed by the community members who have adopted them solely out of goodwill, as a result of free expression based on recognition and sharing of the values and rules enshrined in them. These codes therefore act as one of the moral self-regulating mechanisms of the web community.
The cyberspace codes of ethics provide the basic moral guidelines that should guide information activities. They specify the principles of general theoretical ethics as reflected in a virtual environment. They contain criteria for recognising a given act as ethical or unethical. Finally, they provide specific recommendations on how to behave in certain situations. The rules enshrined in the codes of ethics in the form of provisions, authorisations, bans, etc., represent in many respects the formalisation and systematisation of unwritten rules and requirements that have developed spontaneously in the process of virtual interaction over the last thirty years of the Internet.
At the same time, the provisions of codes of ethics must be thoroughly considered and judged: by their very nature, codes of ethics are conventional, and hence always the result of a mutual agreement among the relevant members of a given social group; otherwise they are simply reduced to formal, sectorial statements, divorced from life and with no binding force.
Despite their multidirectionality due to the variety of net functional abilities and the heterogeneity of its audience, a comparison of the most significant codes of ethics on the Internet shows a number of common principles. Apparently, these principles are in one way or another shared by all the Internet community members. This means that they underpin the ethos of cyberspace. They include the principle of accessibility, confidentiality and quality of information; the principle of inviolability of intellectual property; the principle of no harm, and the principle of limiting the excessive use of net resources. As can be seen, this list echoes the four deontological principles of information ethics (“PAPA: Privacy, Accuracy, Property and Accessibility”) formulated by Richard Mason in his article Four Ethical Issues of the Information Age. (“MIS Quarterly”, March 1986).
The presence of a very well-written code of ethics obviously cannot ensure that all group members will act in accordance with it, because – for a person – the most reliable guarantees against unethical behaviour are his/her conscience and sense of duty, which are not always respected. The importance of codes should therefore not be overestimated: the principles proclaimed by codes and actual morals may diverge decisively from one another. The codes of ethics, however, perform a number of extremely important functions on the Internet. Firstly, they can induce Internet users to moral reflection by instilling the idea of the need to evaluate their actions accordingly (in this case, it is not so much the ready-made code that is useful as the very experience of its development and discussion). Secondly, they can form a healthy public opinion in a virtual environment, and also provide it with uniform and reasonable criteria for moral evaluation. Thirdly, they can become the basis for the future creation of international information law, adapted to the realities of the electronic age.