Sustainable AI Development in Developing Countries: A Data Policy Perspective


Authors: Tuhu Nugraha and Annanias Shinta D*

Artificial Intelligence (AI) promises significant growth and innovation, yet it carries serious risks associated with data management that can damage public trust and hinder technological progress if not properly addressed. According to the Google AI Sprinters report from April 2024, although developing markets are expected to adopt AI more slowly and benefit less over the next decade than developed countries, there are strong reasons to believe these regions can exceed those expectations with the right drivers in place. Recent surveys show higher optimism in developing markets about the economic impact of AI: more than 71% of respondents said AI has had a positive impact on access to information, health, education, and employment, compared with less than 56% in Europe and less than 51% in the United States (US).

Faced with challenges such as incomplete infrastructure and still-developing policies, developing countries often remain consumers and importers of technology rather than developers or producers. This issue is exacerbated by low levels of digital literacy, which limit the population's ability to adapt and innovate with AI technology. Therefore, these countries must develop strategies that not only focus on sustainability and ethics in AI development but also strengthen internal capacity to produce and customize technology according to local needs. This step is essential to maximize the utilization of AI, reduce dependence on technology imports, and independently enhance innovation capabilities.

When dealing with AI as a technology that depends heavily on data, there are three key factors that policymakers in developing countries must prioritize: data privacy, data confidentiality, and data authenticity. Focusing on these aspects will ensure that AI development proceeds within a safe and responsible framework while supporting public trust and technological growth.

Data Privacy: Key to Trust and AI Adoption

Data privacy is a critical issue in the implementation of AI, especially in developing countries that often lack comprehensive regulations like those in developed nations. The absence of adequate regulation can lead to the misuse of personal data for unethical or illegal purposes. An example is the Cambridge Analytica case in 2018, where the political consulting firm accessed the personal data of millions of Facebook users without permission. This data was then used to target political advertising during the 2016 US presidential election, highlighting the vulnerability of personal data to exploitation without the owner’s consent.

Given these risks, it is vital for developing countries to adopt laws that prohibit data collection without explicit consent and provide individuals with greater control over their data. Adapting frameworks like the GDPR in the European Union—which includes the right to be forgotten and the right to access data—in a local context will strengthen data protection, build public trust, and support the ethical and responsible adoption of AI.
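The consent and data-subject rights described above translate directly into engineering requirements. The sketch below, in Python with entirely hypothetical record and registry names, shows the two mechanisms in their simplest form: only records with explicit consent are released to an AI pipeline, and an erasure request ("right to be forgotten") removes a subject's data. A real system would add audit logs, consent timestamps, and purpose limitation.

```python
from dataclasses import dataclass


@dataclass
class UserRecord:
    user_id: str
    data: dict
    consent_given: bool  # explicit, recorded consent


class ConsentRegistry:
    """Tracks consent and supports GDPR-style erasure requests (a sketch)."""

    def __init__(self, records):
        self._records = {r.user_id: r for r in records}

    def consented_records(self):
        # Only explicitly consented records may enter an AI pipeline.
        return [r for r in self._records.values() if r.consent_given]

    def erase(self, user_id):
        # "Right to be forgotten": remove the subject's data entirely.
        self._records.pop(user_id, None)


records = [
    UserRecord("u1", {"age": 30}, consent_given=True),
    UserRecord("u2", {"age": 41}, consent_given=False),
]
registry = ConsentRegistry(records)
print([r.user_id for r in registry.consented_records()])  # ['u1']
registry.erase("u1")
print([r.user_id for r in registry.consented_records()])  # []
```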

As of April 2024, several developing countries had already established personal data protection laws, including:

Indonesia: Law No. 27 of 2022 on Personal Data Protection, ratified on September 9, 2022, effective from September 9, 2024.

Brazil: General Data Protection Law (LGPD), ratified on August 14, 2018, effective from September 18, 2020.

South Africa: Protection of Personal Information Act (POPIA), ratified on June 4, 2013, effective from July 1, 2021.

Kenya: Data Protection Act, 2019, ratified on November 27, 2019, effective from November 27, 2020.

Philippines: Republic Act No. 10173, ratified on August 18, 2012, effective from August 18, 2013.

Morocco: Law No. 09-08, promulgated on February 18, 2009.

Mexico: Federal Law on Protection of Personal Data Held by Private Parties, published on July 5, 2010, effective from July 6, 2010.

Peru: Personal Data Protection Law (Law No. 29733), enacted in July 2011 and fully in force since 2013 with its implementing regulations.

Although these countries have established personal data protection laws, the level of implementation and enforcement still varies. Other developing countries are moving in the same direction: India enacted its Digital Personal Data Protection Act in 2023, Thailand's Personal Data Protection Act came into full force in 2022, and Vietnam issued Decree No. 13/2023 on personal data protection, indicating significant steps towards better data protection in the digital era.

Data Confidentiality: Ensuring Information Security

Data confidentiality and data privacy are often considered together, yet they focus on different aspects. Data privacy relates to an individual’s right to control their personal information and how it is used or shared. On the other hand, data confidentiality emphasizes protecting sensitive information from unauthorized access, ensuring that such data remains secure from unauthorized parties. In the context of AI, data confidentiality ensures that all data input into the system or generated by it is protected through cybersecurity infrastructure, professional training, and encryption technology. This includes investments in advanced security technologies and policies that govern how data should be managed and safeguarded.
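As one concrete illustration of the encryption-adjacent tooling mentioned above: before personal data enters an AI pipeline, direct identifiers are often pseudonymized with a keyed hash, so the same person always maps to the same token but the mapping cannot be reversed without the key. A minimal Python sketch follows; the key and field names are hypothetical, and a real deployment would load the key from a secrets manager rather than hard-coding it.

```python
import hashlib
import hmac

# Hypothetical key; in practice, fetch this from a secrets manager.
SECRET_KEY = b"replace-with-a-managed-secret"


def pseudonymize(identifier: str) -> str:
    # Keyed hash (HMAC-SHA256): deterministic token, but not reversible
    # and not guessable via a dictionary attack without the key.
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"name": "Jane Doe", "diagnosis": "..."}
# The AI system sees a stable token instead of the raw identity.
safe = {"patient_token": pseudonymize(record["name"]), "diagnosis": record["diagnosis"]}
```

Pseudonymization is only one layer: the key itself, and any sensitive fields that remain, still need access controls and encryption at rest and in transit.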

The types of data considered confidential can vary from the individual level to organizations and nations. At the individual level, confidential data includes financial information, medical records, or personal identities that could be used for identity theft or fraud. For organizations, confidential data might include trade secrets, business strategy information, or client data that, if leaked, could result in competitive harm or contractual breaches. At the national level, information considered confidential often relates to national security, such as intelligence data, strategic military positions, or diplomacy that, if exposed, could threaten a country’s stability or security. Therefore, it is crucial for all entities, from individuals to governments, to implement stringent security practices to protect the data they consider confidential.

Data Authenticity: The Foundation for Accurate and Fair AI

Data authenticity is a critical aspect of AI usage, emphasizing the truthfulness and accuracy of the data utilized. In developing countries, challenges often arise concerning the quality and availability of data. Developing local capacity for data verification and validation is crucial, and it is also important to implement technology that can automatically detect and correct inaccurate data. This not only enhances AI performance but also ensures that its outputs are fair and unbiased.
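Automated detection and correction of inaccurate data usually starts with simple rule-based validation. The Python sketch below uses illustrative field names and thresholds (assumptions, not a standard): it flags out-of-range or missing values and marks them for later imputation rather than silently guessing replacements, which keeps the cleaning step auditable.

```python
def clean_record(rec: dict) -> tuple[dict, list[str]]:
    """Rule-based validity checks with conservative corrections (a sketch)."""
    cleaned = dict(rec)
    issues = []

    # Range check on a numeric field; invalid values are nulled, not guessed.
    age = cleaned.get("age")
    if not isinstance(age, (int, float)) or not 0 <= age <= 120:
        issues.append("age invalid; dropped")
        cleaned["age"] = None  # flag for later imputation

    # Normalize and check a required text field.
    name = (cleaned.get("name") or "").strip()
    if not name:
        issues.append("name missing")
    cleaned["name"] = name

    return cleaned, issues


cleaned, issues = clean_record({"age": 250, "name": " Ana "})
print(cleaned)  # {'age': None, 'name': 'Ana'}
print(issues)   # ['age invalid; dropped']
```

The same pattern extends to deduplication, cross-field consistency checks, and statistical outlier detection as a dataset grows.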

According to data from the Global Initiative Against Transnational Organized Crime, in the Asia-Pacific region, several instances of AI technology misuse have been recorded, including the use of deepfakes that raise serious concerns. In December 2023, videos featuring Lee Hsien Loong, Prime Minister of Singapore, and Lawrence Wong, Deputy Prime Minister, were widely circulated online to promote cryptocurrency and investment products. These videos turned out to be deepfakes—AI-generated videos designed to fake their identities. In early 2022, criminals in Thailand used deepfakes to impersonate police officers in extortion video calls. In February 2024, the Hong Kong office of a multinational company lost US$25.6 million due to a deepfake video conference call impersonating its chief financial officer. These cases are just a few examples from the region where AI-generated images and audio have been used for malicious purposes, including fake kidnappings, sexual abuse material, and fraudulent schemes.

Enhancing literacy and public awareness about deepfake technology is extremely important. Education on how to recognize and verify the authenticity of digital content can help the public be more vigilant against potential fraud and manipulation. Outreach through media, workshops, and online educational campaigns can be effective in raising awareness about the risks and implications of deepfake technology, helping individuals protect themselves from its negative impacts.
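One technical building block behind such verification is the cryptographic digest, which also underlies content-provenance efforts such as C2PA: if a publisher releases an official checksum (or a digital signature) alongside a video, anyone can check whether the copy they received is byte-for-byte identical to the original. A minimal Python sketch follows; the byte strings stand in for real media files.

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    # SHA-256 fingerprint: any change to the bytes changes the digest.
    return hashlib.sha256(data).hexdigest()


# The publisher computes and announces a digest for the official release.
official_video = b"...bytes of the video released by the press office..."
published_digest = sha256_digest(official_video)

# A viewer later checks a copy circulating on social media.
received_video = b"...bytes of a manipulated copy..."
is_authentic = sha256_digest(received_video) == published_digest
print(is_authentic)  # False: the circulating copy does not match
```

A bare digest only proves integrity against the published value; binding the digest to the publisher's identity additionally requires a digital signature or a provenance manifest.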

Conclusion

Sustainable AI development in developing countries requires a comprehensive approach that includes not only strong regulation and investment in security but also the strengthening of digital literacy. A deep understanding of how personal data is collected, processed, and used within AI systems is key to helping the public make informed decisions regarding privacy and security. This education is crucial for enhancing engagement and awareness about AI.

Training programs and public awareness campaigns need to be enhanced to build technical expertise and understanding of AI among the public. Collaboration between governments, educational institutions, and the private sector will help create relevant and effective training modules. This comprehensive approach not only strengthens public trust in AI but also positions developing countries at the forefront of global innovation, ensuring inclusive and sustainable growth.

*Annanias Shinta D is a passionate professional with a strong background in research, communication, and business management, experienced in collaborating with public and private companies as well as NGOs to drive positive change and create a better future.

Tuhu Nugraha
Digital Business & Metaverse Expert; Principal of Indonesia Applied Economy & Regulatory Network (IADERN)