Beyond the Illusion of Sovereignty
Across the Global South, governments find themselves pulled into the race to build national large language models (LLMs). These efforts often stem from the belief that technological sovereignty can be measured by computing capacity or model size. Over the past two years, a dominant global narrative has taken hold—echoed through multilateral forums, tech summits, and policy briefs—that every nation must build its own LLM if it wishes to survive the AI era. This framing implicitly equates sovereignty with sheer scale, positioning foundational models as the new litmus test of geopolitical maturity.
Yet this impulse risks falling into what we call algorithmic Malthusianism—a worldview shaped by fear rather than imagination. The term draws from Thomas Malthus’s famously pessimistic prediction that population growth would inevitably outstrip food production. He underestimated innovation, creativity, and societal resilience. In the same way, today’s narrative assumes that nations lacking massive compute or trillion-parameter models will inevitably be crushed by technological Darwinism. It overlooks a central truth: the Global South has always advanced through adaptation, contextual ingenuity, and strategic leapfrogging—not by copying the developmental paths of the wealthiest nations.
Algorithmic Malthusianism, therefore, reflects a belief that Global South nations must mimic the North or be left behind—ignoring their own capacity to innovate differently, more efficiently, and more sustainably. It is a worldview reinforced by market incentives and geopolitical pressure: governments fear that without a national LLM, they will lose digital bargaining power, fail to attract investment, or appear technologically inferior.
Echoing concerns raised by Chatham House about “digital asymmetries” within emerging economies, the pursuit of LLMs without a contextual strategy can deepen dependency rather than reduce it. The infrastructure required—energy-hungry data centers, elite AI talent, and costly procurement pipelines—may drain national budgets that could instead support more inclusive, sustainable innovation.
The question becomes unavoidable: Is sovereignty found in scale or in relevance?
When Large Models Cannot See the Local World
Training LLMs on predominantly Western datasets introduces systemic blind spots. UNESCO’s 2023 Guidance on AI Ethics notes that when languages, cultures, and ways of living are underrepresented, AI systems begin to misread the very communities they are meant to support. In many parts of the Global South, daily life is shaped by informal economies, local customs, and unwritten norms—subtleties that global models, trained far from these realities, often fail to grasp.
This disconnect does more than reduce accuracy; it quietly produces a growing Digital Stigma.
Insights from the World Bank’s World Development Report (2021) on Data for Better Lives show how gaps in data quality can harden into “development inequalities.” When predictive models start labeling countries as high‑risk or low‑growth based on incomplete or biased data, those labels ripple outward—shaping investor sentiment, influencing government priorities, and gradually affecting how societies view their own potential. The OECD describes this dynamic as “algorithmic discrimination in financial systems,” where flawed inputs lead to distorted judgments about entire nations.
Over time, this creates what can be called an ethics of probability: a sense that the future is already fixed by yesterday’s imperfect data, rather than open to human agency, innovation, or collective imagination.
SLMs: A More Sovereign and Sustainable Path
True epistemic sovereignty—the ability to define one’s own knowledge, narrative, and technological priorities—requires shifting from LLMs to Small Language Models (SLMs).
This approach aligns with recommendations from the UNDP Digital Strategy and ITU’s AI for Good guidelines, which emphasize localization, modularity, and resource efficiency as cornerstones of sustainable digital transformation.
SLMs offer three strategic advantages:
1. Localized Relevance
SLMs can be trained on:
- regional dialects and indigenous languages (aligned with UNESCO’s linguistic preservation agenda),
- national legal frameworks,
- public service workflows,
- climate adaptation practices.
2. Sustainability
Consistent with UNEP’s 2024 report on digital environmental footprints, SLMs drastically reduce computational energy and water usage—an essential factor for climate-vulnerable countries. In a world already struggling to meet the targets of the Paris Agreement, the unchecked race to build ever-larger AI models risks intensifying environmental degradation. Training frontier‑scale LLMs requires enormous electricity consumption, substantial water use for data‑center cooling, and expanded land use for compute infrastructure—impacts that are often externalized to regions least equipped to absorb them. Without a strategic pivot, the global competition for AI dominance may inadvertently accelerate the very climate crisis it seeks to manage.
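The scale gap behind this sustainability argument can be made concrete with a rough back-of-envelope comparison. A widely used heuristic from the scaling-law literature estimates training compute at roughly 6 × parameters × training tokens in FLOPs; the model sizes and token counts below are illustrative assumptions for a frontier-scale LLM and a national SLM, not figures drawn from the reports cited above.

```python
# Rough comparison of training compute for a frontier-scale LLM versus a
# small language model (SLM), using the common 6 * N * D heuristic
# (FLOPs ~ 6 * parameters * training tokens). All numbers are illustrative.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute in FLOPs via the 6*N*D heuristic."""
    return 6 * params * tokens

# Assumed sizes: a 70B-parameter model trained on 2T tokens,
# versus a 1B-parameter local model trained on 30B tokens.
llm_flops = training_flops(params=70e9, tokens=2e12)
slm_flops = training_flops(params=1e9, tokens=30e9)

print(f"LLM: {llm_flops:.2e} FLOPs")
print(f"SLM: {slm_flops:.2e} FLOPs")
print(f"Ratio: {llm_flops / slm_flops:.0f}x")
```

Under these assumptions the frontier model demands several thousand times the training compute of the small model, and energy, water, and cooling demands scale with that compute—illustrating why the SLM path matters for climate-vulnerable budgets.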
3. Decolonizing Knowledge Systems
SLMs support the shift from external dependency to internal capability, enabling countries to build context-first AI systems. This is especially critical given that most existing LLMs are trained on what psychologists and anthropologists call WEIRD data—originating from societies that are Western, Educated, Industrialized, Rich, and Democratic. In practice, this means the dominant models reflect the values, linguistic patterns, social assumptions, and epistemic norms of what we call the Local North.
Such models, while powerful, are poorly aligned with the lived realities of the Global South. They often misinterpret cultural nuance, flatten indigenous knowledge, and reproduce normative biases that were never designed with Southern societies in mind. This aligns with CSIS’s 2024 technology governance brief, which argues that smaller, customized models are more likely to produce equitable and locally legitimate outcomes compared to monolithic global architectures.
Crucially, SLMs operationalize the Ethics of Possibility, enabling governments to imagine futures not restricted by historical limitations—or by the cultural assumptions coded into global datasets.
Third Space Statecraft: A Diplomatic Strategy for Fragmented Geopolitics
Choosing SLMs is not merely a technical preference; it is a geopolitical act.
The world is increasingly polarized, and the UN Secretary-General’s 2023 Policy Brief on Global Digital Cooperation warns of a growing “AI governance divide” that risks marginalizing developing nations. To resist this, Global South countries must cultivate a new diplomatic posture: Third Space Statecraft.
This involves creating collaborative spaces outside great-power rivalry—coalitions driven by shared needs such as:
- AI safety that reflects local realities,
- sustainable compute infrastructure,
- interoperable public digital goods,
- and ethical, community-owned data ecosystems (aligned with the Digital Public Goods Alliance framework).
Before examining these systemic risks, it is helpful to imagine a near-future scenario—one often raised in Europol and UNOCT strategic threat exercises—not as a sensational prediction, but as a quiet reminder of how quickly fragility can emerge in a hyperconnected world.
It begins with a single deepfake livestream. A fabricated video of a senior minister in a small coastal nation surfaces online at 2:13 AM, subtly edited to appear urgent but authentic. Within minutes, automated trading systems—unable to distinguish truth from synthetic manipulation—react instinctively, triggering a cascade of currency sell-offs. As markets wobble, an AI-generated voice clone contacts the country’s largest port authority at dawn, issuing a calm, precise threat that forces precautionary shutdowns. Meanwhile, a small malware package—assembled by an unaffiliated actor using publicly available AI-assisted code tools—quietly nests inside the national electricity grid. Over the next 72 hours, uncertainty spreads faster than any confirmed fact. Intelligence agencies cannot attribute the source; regional partners cannot coordinate a response.
This is not a scene from dystopian fiction. It is a glimpse of a world where AI-enabled micro-actors, scattered across borders and driven by grievance rather than ideology, can momentarily unanchor entire societies.
The consequences of ignoring this path are global. Reports from Europol and the UN Office of Counter-Terrorism (UNOCT) highlight that algorithmic disenfranchisement, economic pessimism, and identity alienation are fertile ground for:
- mass migration pressures,
- digitally enabled extremism,
- exploitation through deepfakes and synthetic manipulation.
Supporting SLM-led development in the Global South is, therefore, not charity. It is an investment in global stability.
But the stakes for the Global North are far greater than is often acknowledged. AI-enabled extremism represents a potential Black Swan threat whose contours we cannot fully imagine. As highlighted in analyses by UNOCT, Europol, and recent OECD risk assessments, the next generation of transnational terrorism will not rely on territorial control or traditional weapons. Instead, it will exploit the open accessibility of AI systems—tools available to anyone, anywhere.
The danger lies not only in the speed and scale of AI misuse but in the fact that the actors are no longer exclusively states or organized groups. Individuals or small networks—distributed across continents, invisible to conventional intelligence frameworks—can now weaponize AI in ways previously unimaginable:
- Targeting critical infrastructure through AI-assisted cyber intrusion or ransomware strategies.
- Deploying hyper-realistic deepfakes to destabilize elections, ignite ethnic conflict, or erode trust in democratic institutions.
- Generating biological threat models or automated experimentation pathways using open scientific datasets, raising concerns echoed in UN biosecurity briefings.
- Engineering autonomous weapon prototypes, enabled by open‑source robotics and AI simulators.
Because AI systems are widely accessible and increasingly powerful, these threats transcend borders. They cannot be contained within one geography. A climate of sustained instability—fuelled by inequality, disenfranchisement, and the absence of technological inclusion—creates precisely the conditions in which such actors emerge.
This is why global stability is inseparable from the technological empowerment of the Global South. When vast regions feel economically excluded, digitally marginalized, or misrepresented by global AI systems, the resulting alienation becomes fertile ground for radicalization. In this sense, the North’s security is tied directly to the South’s sense of possibility.
Mitigating AI-enabled Black Swan risks requires a world in which all nations—not only the wealthiest—have the tools, skills, and governance capacity to build safe, context-specific AI systems. Supporting SLM-centered strategies ensures that the benefits of AI are distributed, while the risks are collectively managed rather than globally amplified.
Conclusion: Sovereignty in What Works
Sovereignty is not built through the brute force of computation but through clarity of purpose.
By adopting SLMs—efficient, contextual, and deeply rooted in local wisdom—the Global South can escape the Algorithmic Malthusian Trap and craft its own digital trajectory. For partners in the Global North, supporting this shift is a strategic imperative aligned with global security, sustainable development, and equitable technological governance.
In the end, the nations that thrive will not be those that build the largest models, but those that build the truest ones—models that see their people, hear their languages, and honor their realities.
As the Brazilian educator Paulo Freire once wrote, “We make the road by walking.” In the age of AI, the Global South must make its road by coding—crafting technologies that do not mimic someone else’s future but illuminate its own.
Sovereignty, after all, is not measured in teraflops or parameters, but in the courage to imagine differently.

