Managing Cybersecurity Risks Related to AI in Developing Countries: Challenges and Strategies Part I

In today’s digital era, artificial intelligence (AI) has become a crucial component across various sectors, particularly in developing countries, which face unique cybersecurity challenges associated with this technology. AI has played a significant role in education, where, according to a UNESCO report, its use expanded markedly during the COVID-19 pandemic, supporting remote learning through tailored materials and personalized guidance. In healthcare, AI contributes to disease diagnosis, patient data management, and drug development, with the WHO highlighting its great potential to improve healthcare services in developing countries, particularly in diagnosing previously undetected diseases.

In the agricultural sector, AI has been used for weather prediction, resource optimization, and productivity enhancement; FAO estimates indicate that global food production will need to rise by roughly 70% by 2050, a gap such technologies could help close. Meanwhile, in the financial sector, AI has been adopted for risk analysis, fraud detection, and personalized banking services, with the World Bank noting gains in financial inclusion through the use of AI in remote areas. The adoption of AI in these sectors shows great promise but also brings cybersecurity challenges that must be managed effectively. In this first part, I outline the typical challenges of managing cybersecurity risks in developing countries.

Infrastructure Unpreparedness

A significant barrier to the effective use of AI in developing countries is the unpreparedness of technological infrastructure. Immature infrastructure not only hinders the implementation of AI but also creates vulnerabilities to various types of cyberattacks. Less developed infrastructure often means a lack of sophisticated security systems, in both hardware and software, which are essential to protecting AI data and operations.

Consequently, AI systems in these countries may be more vulnerable to phishing, malware, ransomware, and distributed denial-of-service (DDoS) attacks. Attackers can exploit security weaknesses to gain unauthorized access, steal sensitive data, or disrupt normal system operations. The inability to identify and respond to these threats promptly and accurately not only endangers data security but can also erode public trust in, and acceptance of, AI technology.

Resource Limitations

In developing countries, financial and technical resource constraints often pose a major obstacle to building robust cybersecurity infrastructure. Financially, limited budgets mean less investment in advanced cybersecurity solutions, such as up-to-date antivirus software, firewalls, and intrusion detection systems. For example, a government institution in a developing country may have only a small cybersecurity budget and can therefore afford only basic security software that is not effective enough against increasingly complex cyberattacks.

On the technical side, a shortage of skilled cybersecurity labour is also a problem: even where security hardware and software are available, there may not be enough trained professionals to operate and maintain them effectively. For example, a hospital in a developing country might have medical equipment connected to the Internet but lack IT staff trained to manage network security, increasing the risk of cyberattacks such as ransomware that can encrypt patient data and disrupt hospital operations. As a result, AI systems deployed in developing countries often lack adequate protection, leaving them exposed to data breaches and other cybersecurity risks.

Lack of Expertise

A lack of awareness and expertise in cybersecurity is one of the major challenges in developing countries, and it significantly increases vulnerability to AI-related risks. The awareness gap concerns how well decision-makers and technology users understand the importance of cybersecurity. Many organizations and individuals in developing countries may not realize how vulnerable their data and systems are to cyberattacks, and may be unaware of basic practices such as using strong passwords, securing networks, and preventing phishing. On the expertise side, specialization is needed across many aspects of cybersecurity, including but not limited to network management, malware analysis, application security, and digital forensics. Unfortunately, many developing countries face a shortage of experts with these skills, largely due to a lack of specialized training and education in the field.

Weak Regulations and Compliance

In many developing countries, the absence or immaturity of legal frameworks and regulations specifically for AI and cybersecurity is one of the major challenges in ensuring data security and protecting citizens’ privacy. Regulations and legal frameworks that are often lacking include laws specific to personal data management, which govern how data can be collected, used, and shared. Additionally, rules that govern the responsibility of technology companies in protecting their users’ data are often missing. This includes provisions for mandatory notification in the event of a data breach, which compel companies to promptly report security incidents to relevant authorities and affected data subjects. Furthermore, regulations related to the use of AI, such as ethics in AI development and limitations on its use to ensure the absence of bias or discrimination, are often nonexistent or not clearly regulated. The absence of a comprehensive legal framework makes it difficult to enforce security standards and ensure that AI technology is used in an ethical and responsible manner.

Decentralization and Democratization of AI Access

The current ease of access to AI technology, combined with its relatively low acquisition costs compared to complex technologies like nuclear power, significantly facilitates cross-border AI development by individuals or small groups. This accessibility poses substantial risks, particularly in developing countries. A key concern is security and privacy, as these independent developers often lack comprehensive knowledge or access to standard cybersecurity practices. This gap heightens the likelihood of data breaches and the misuse of personal information.

The simplicity with which AI can now be developed and deployed also complicates the task of government regulation and supervision, a challenge compounded by the often underdeveloped legal and regulatory frameworks in these countries. Moreover, the lack of stringent regulation means that easily accessible AI can be exploited for unethical or illegal activities, such as disseminating misinformation or manipulating social media.

This democratization of AI technology also brings to the fore issues of fairness and access. While individuals or organizations with ample resources can create advanced AI applications, those with fewer resources are left at a disadvantage, lacking access to similar technologies. This disparity underscores the urgent need for robust legal and regulatory frameworks and international collaboration to oversee AI development and implementation. Such measures are essential to ensure the safe, ethical, and equitable use of AI, particularly in an era where its development transcends national borders and becomes a tool accessible to many, rather than a few.

Potential Bias & Dependence on Foreign AI Products

The potential for bias and reliance on AI products developed in other countries is a significant issue faced by developing countries in the current digital era. AI products developed in advanced countries are often based on data and contexts specific to those countries, which may not be entirely relevant or applicable to conditions in developing countries. This can result in bias in the outcomes produced by such AI, which may be inaccurate or even discriminatory when applied in different social, economic, or cultural contexts. Additionally, dependence on foreign AI products can pose risks to security and data sovereignty. Developing countries using imported AI products may not have full control over how their data is processed and used. This could lead to privacy breaches and data security issues, especially if the supplying country has different cybersecurity policies or standards.

This dependence also reduces the ability of developing countries to develop their own local AI industry. Without independent AI technology development, developing countries may find themselves stuck in a cycle of technological dependency, hindering local innovation and long-term capacity building.

In conclusion, developing countries face a unique set of challenges in managing cybersecurity risks associated with the use of AI. These include unprepared infrastructure, resource limitations, lack of expertise, immature laws and regulations, and challenges arising from the decentralization of AI development and dependence on foreign AI products. All these factors pose risks to security and privacy, as well as the potential for unethical AI use and issues of fairness and access. Addressing them requires a comprehensive and coordinated approach. In the second part of this article, I will explore practical solutions and strategies for tackling these issues, with the hope of providing useful guidance for developing countries navigating the complexities of the AI era.

Tuhu Nugraha
Digital Business & Metaverse Expert
Principal of Indonesia Applied Economy & Regulatory Network (IADERN)