Adapting AI Regulation for the Global South: Lessons from UNESCO’s Consultation Paper

The rapid advancement of artificial intelligence (AI) has brought transformative benefits as well as complex challenges to societies worldwide. Recognizing the need for a comprehensive and inclusive approach to AI regulation, UNESCO has released a Consultation Paper that explores various regulatory approaches to AI. The paper, developed through an extensive literature review and consultations with parliamentarians, legal experts, and AI governance specialists, aims to guide legislators globally in designing policies that balance innovation with ethical considerations, human rights, and social impact.

Purpose and Summary of the Consultation Paper

The primary objective of UNESCO’s Consultation Paper is to inform and assist lawmakers in understanding the global landscape of AI regulation and to explore emerging approaches to AI governance. The paper, authored by Juan David Gutiérrez, PhD, an Associate Professor at the University of Los Andes, Colombia, emphasizes the importance of creating regulatory frameworks that not only promote technological innovation but also protect fundamental human rights, ensure transparency, and address the specific challenges posed by AI. The paper is set to be presented at the Inter-Parliamentary Union (IPU) session at UNESCO, to be held in Geneva from October 13 to 17, 2024.

The paper outlines nine regulatory approaches that vary in their degree of intervention and coerciveness. These nine approaches are:

  1. Principles-Based Approach
  2. Standards-Based Approach
  3. Agile and Experimental Approach
  4. Facilitating and Enabling Approach
  5. Adapting Existing Laws Approach
  6. Access to Information and Transparency Mandates Approach
  7. Risk-Based Approach
  8. Rights-Based Approach
  9. Liability Approach

The Need for AI Regulation

AI regulation is not just a policy option; it is a necessity. As AI systems increasingly influence critical aspects of society—ranging from healthcare and finance to justice and public administration—the potential risks associated with misuse, bias, and lack of transparency have become more evident. Market failures, state failures, and the unacceptable risks posed by unregulated AI systems can lead to significant social, economic, and political consequences.

Regulation is crucial to safeguarding fundamental rights, ensuring fair competition, and preventing the exploitation of vulnerable populations. Furthermore, in developing countries where the capacity to manage the unintended consequences of AI may be limited, regulation plays a vital role in mitigating risks and ensuring that AI technologies contribute positively to societal development.

Relevance of Regulatory Approaches for Developing Countries

While the Consultation Paper presents a broad spectrum of regulatory approaches, not all are equally applicable in the context of developing countries. These nations often face unique challenges such as limited resources, inadequate technological infrastructure, and constrained regulatory capacities. Additionally, there is often a digital literacy gap among the population and a lack of strong legal frameworks to govern emerging technologies like AI. These challenges make some regulatory approaches more relevant and practical to implement than others.

Principles-Based Approach: This approach is highly adaptive and can serve as a foundational framework for AI regulation in developing countries. By establishing ethical principles and general guidelines, it allows flexibility in implementation, enabling countries with varying regulatory infrastructures to adopt AI regulations that align with their societal values and developmental goals. This approach is particularly beneficial in contexts where rigid regulations might stifle innovation or be difficult to enforce due to resource constraints. Some developing countries have begun to adopt a principles-based approach in their AI regulations, either explicitly or implicitly. For example, Peru’s Law No. 31814 of 2023 explicitly includes AI principles such as risk-based security standards, a multi-stakeholder approach, ethical development for responsible AI, and privacy. Brazil’s Bill No. 2338/2023 establishes fundamental principles and guidelines for AI development and implementation. Colombia’s Bill No. 059/2023 integrates principles like inclusive growth, sustainable development, human-centered values, transparency, security, and responsibility. Similarly, Costa Rica’s proposed “Law for the Regulation of Artificial Intelligence” includes principles such as equity, responsibility, transparency, privacy and data protection, and security.

Adapting Existing Laws Approach: Developing countries can benefit from this approach by leveraging and modifying existing legal frameworks to include specific provisions related to AI. This method is efficient and pragmatic, allowing countries to improve existing laws, such as those on data protection and consumer rights, without the need to create entirely new regulatory systems. Countries implementing or developing AI regulation on the basis of existing legal frameworks include India, whose 2018 draft Personal Data Protection Bill evolved into the Digital Personal Data Protection Act, 2023, which covers automated processing of personal data, including by AI systems, and Kenya, which developed a National AI Strategy focused on modifying existing legal frameworks. Brazil’s Lei Geral de Proteção de Dados (LGPD) already governs automated decision-making, while South Africa is developing a comprehensive AI regulatory framework based on existing laws. Although not a developing country, Singapore is often cited as an example with its principles-based approach focusing on accountability, transparency, and fairness, which can serve as a valuable reference for developing countries.

Facilitating and Enabling Approach: This approach encourages the creation of supportive infrastructure, such as research centers, educational programs, and public-private partnerships, to foster an environment for responsible AI development. In developing countries, where resources are often limited, this approach is crucial for building the capacity needed to harness AI’s potential. It also aligns with broader developmental goals, such as enhancing digital literacy, strengthening technological infrastructure, and promoting innovation in AI-related fields. Developing countries building supportive AI infrastructure include Rwanda, which has launched an AI Innovation Center and partnered with global technology companies; Ghana, which established the Ghana Artificial Intelligence Center for AI research and development; Tunisia, which launched a National AI Strategy focusing on skill development and innovation; Malaysia, which launched a National AI Initiative to become a hub for AI innovation in Southeast Asia; and Mexico, which developed a National AI Strategy with an emphasis on skill development and innovation in AI.

Access to Information and Transparency Mandates Approach: Transparency is critical for ensuring public trust and accountability in AI systems, especially in developing countries where trust in technology may be low. Some countries have implemented regulations emphasizing transparency in AI operations, particularly in the public sector. India, for example, launched the “Responsible AI for India” principles, which promote the use of explainable AI, with states like Karnataka and Telangana adopting AI ethics frameworks. Brazil regulates transparency in automated decision-making through its General Data Protection Law (LGPD) and has established a special committee to study the ethical impacts of AI. South Africa’s national AI strategy emphasizes transparency, supported by AI literacy initiatives and public participation. Although Singapore is not a developing country, its AI governance model emphasizes transparency and explainability in AI operations and is often cited as a model for others. This approach helps democratize AI technology and empowers citizens to understand and challenge AI-driven decisions.

Conclusion

As AI continues to reshape economies and societies, the need for thoughtful regulation becomes increasingly urgent. UNESCO’s Consultation Paper provides a relevant framework for developing countries to adopt flexible and adaptive regulatory strategies. By focusing on principles-based approaches, adapting existing laws, facilitating and enabling infrastructure, and ensuring transparency, these countries can mitigate AI-related risks and leverage this technology for positive change and inclusive development.

Tuhu Nugraha
Digital Business & Metaverse Expert; Principal of Indonesia Applied Digital Economy & Regulatory Network (IADERN)