A Human Rights and International Policy Centric Approach to the Risks and Benefits of Barely Understood Technological Systems
Today, digitalization is all-pervading and is changing the way we work and live. It is now up to us not only to oversee and regulate the twin processes of digitalization and digitization but also to harness these technologies to enrich our social, political and economic lives, in line with guiding principles such as equality, equity, human development and choice, and democracy.
If handled with the right vision and insightful policy decisions, emerging technologies, from Big Data to the IoT, to Artificial Intelligence and cryptocurrencies, may be the key to empowering individuals and communities around the world, i.e., to bringing about equality. However, the same technologies raise a host of issues, ironically centred on equality of access and opportunity, along with difficult trade-offs between freedom, privacy and other concerns.
Access to and development of innovative technology can help us sustainably solve many contemporary problems, from stagnating agricultural productivity to the gender gap in education, while also facilitating cooperation in a multilateral world. As we move through the current decade, we are presented with both a remarkable opportunity and a risky path toward fulfilling the targets and goals set out in the 2030 Agenda for Sustainable Development. Emerging technological systems can play a unique role in making the Sustainable Development Goals a reality.
In our times especially, much of the economy has moved online. Even as parts of the world recover from the pandemic and move back offline, it would be foolish to ignore the alternative set-up now available to us and its, however reluctantly accepted, advantages. In terms of delivery, it continues to hold the upper hand. In the educational sector, for example, systems face uncertain professional and economic environments created by the intersection of distance learning and classroom learning, the so-called "hybrid model of education"; here, digital tools and technologies become not only the content of new training programmes but also the means of delivering new content.
The majority of the responsibility for effecting such a monumental change lies with governments, which must invest and re-prioritise needs and economic resources accordingly. Multilevel governance is a key factor in the systemic transformations discussed here. Effective strategies for eradicating poverty through governance built on new and modern technology, for instance, are those that coordinate and monitor multisectoral legislation, initiatives and policy actions with long-term vision rather than short-run trade-offs. In the same line of thought, the funding of non-priority research and research related to peacetime services is an important consideration. Too often, government funding has flowed toward selected sectors and services that are unreasonably given higher priority; in many instances, this has meant the militarization of such "developing" technology. Adequate funding must instead also be provided to areas and services that can improve the social life of citizens in general, ranging from particular uses of Big Data, robotics and automation to technologies dedicated solely to these domains. If anything, Article 15(1)(b) of the International Covenant on Economic, Social and Cultural Rights stands as a reminder to governments worldwide of this very obligation.
Having evaluated that side of the argument, one must also take cognizance of the vulnerabilities presented by these technological systems and their infrastructure. As reasoned above, we continue to live and participate in a world increasingly moving toward a virtual plane, a shift only accelerated by the COVID-19 pandemic, which has uprooted the existing structures of society and the backbones of national economies worldwide. As the digital world replaces more and more physical, real-world interactions between people at all levels, it becomes ever more important to ask who has access to what information and data, and how. It is equally important to monitor users' awareness of their own data, their digital rights, and the ways in which both might be exploited.
In 2016, when the United Nations Human Rights Council recognised access to the internet as a human right in Resolution A/HRC/32/L.20, it also implied member states' obligations in this regard. Disrupting internet access is nothing short of a human rights violation, and the misuse of data collected from unaware and unsuspecting users and citizens falls into the same category of de facto human rights abuses. Not only do countries have an obligation to protect the potentially invaluable digital data of their citizens, an obligation well exemplified by the EU's GDPR, but they must also refrain from such practices themselves, including mass surveillance.
The functioning of truly democratic systems and the exercise of human rights are gravely threatened by practices such as mass State surveillance, interception of communications, and non-consensual storage of personal data. In evaluating such arguments, we must borrow from the discourse of political science and public policy: the dilemma between equality, absolute freedom, privacy and public order. In particular, any limitation imposed on the right to digital privacy must adhere to the universal principles of legality, necessity and proportionality.
In the same regard, countries must recognise that even supposedly harmless academic research in such fields can yield devastating results. An example can be found in research involving Wanquan Liu, an academic at Curtin University: the institution acknowledged that a former member of its faculty had collected a dataset of facial images by recruiting hundreds of students of Uyghur, Tibetan and Korean ethnicity from Dalian Minzu University in China. The study was funded by the Chinese government, and the Australian Broadcasting Corporation's Four Corners programme reported that the Uyghur subjects had not given consent. This research has helped Chinese facial recognition software better identify ethnic Uyghurs and has thus contributed to further human rights violations in the Xinjiang region. It sheds light on the extent to which carelessness on the part of any stakeholder, whether the State, a private actor or a researcher, can lead to unprecedented and unexpected consequences. It is therefore extremely important to take the "Vulnerability-Capability Paradox" into consideration when using technology as an aid in governance, for instance.
Effectively closing the loopholes and grey areas in current legal provisions and academic discourse on emerging technologies should be the primary concern, with an emphasis on implementation and accountability. Beyond policies and legislation, products and services should be designed to minimize the risks of data breaches and privacy violations by embedding greater security and transparency into the supply chain. Corporations ought to be compulsorily required to carry out privacy impact assessments to prevent and mitigate privacy harms, and to establish, regularly update, and maintain oversight of organizational safeguards such as internal supervisory mechanisms. Increased interlinking of public and private data processing, in order to maintain a manageable and realistic record of the misuse of personal information, is also an option worth considering.