Claude Mythos Preview is an experimental AI model built by Anthropic, the company behind the Claude family of models. In early April 2026, Anthropic chose not to release Mythos publicly after internal testing suggested that its cyber capabilities were unusually strong, including the ability to identify and exploit vulnerabilities at a level the company considered too risky for open deployment. Instead, it restricted access to selected organizations under a defensive initiative called Project Glasswing.
That is what made the episode consequential. This was not merely another model update. It was a moment when a leading AI lab effectively signaled that some advanced systems may now be too operationally dangerous for broad release. Once that happens, AI governance stops being only a question of ethics, transparency, or principle. It becomes a question of who gets early access, who gets time to prepare, and who is left to manage the consequences after the capability frontier has already moved.
For the Global South, the implication is straightforward: when the most capable models are controlled through selective access and closed safety perimeters, AI governance is no longer only about universal principles. It is also about unequal readiness.
Safety is no longer the whole story.
For several years, international debate on AI governance has revolved around fairness, transparency, accountability, and safety. Those concerns remain essential. UNESCO’s Readiness Assessment Methodology, for example, recognizes that trustworthy AI depends not only on principles but also on institutional readiness and the practical ability of states to absorb and govern these systems.
The Mythos case shows that frontier AI governance is becoming more strategic. Once a leading lab decides a model is too sensitive for general release, the issue is no longer just norms, but control. The real divide is not simply between countries with rules and those without them, but between those who can evaluate frontier capability before it spreads and those who can only react after it enters products and markets. For the Global South, the challenge is simple: not just how to regulate AI after diffusion, but how to govern technologies whose first layer of evaluation and control was built elsewhere.
The Global South faces a double asymmetry.
Much of the Global South does not enter the frontier AI era from a position of authorship. Most countries are not building the largest models, setting the default testing standards, or participating in the earliest closed-circle exercises around capability containment. The resulting asymmetry is not only technological; it is also temporal and normative.
The first asymmetry is preparedness. Frontier AI risks may spread globally, but early warning, testing access, and evaluation capacity remain concentrated in a few hands. Others must prepare from the outside, with less information and less time.
The second asymmetry is representation. Many countries in the Global South still enter as users or markets, not as co-designers of risk. They do not meaningfully shape what counts as harm, where thresholds are set, or how trade-offs are judged. As a result, systems built elsewhere arrive carrying assumptions about acceptable risk, regulatory capacity, and social context that may not fit them.
Why this matters more in the Global South
The Global South is often described as if it were simply behind in AI adoption. That framing is too shallow. The World Bank has stressed that low- and middle-income countries face steep challenges in deploying AI effectively because foundational conditions such as connectivity, infrastructure, skills, and institutional capacity remain uneven.
The more serious issue is that many countries are being asked to govern technologies they did not shape, under conditions they did not define, and on timelines they do not control. They may be asked to regulate systems whose most important safety evaluations took place elsewhere. They may also be urged to adopt “best practice” frameworks built on assumptions formed in stronger institutional settings. And they may become early bearers of risk precisely because their public institutions, digital infrastructures, and sectoral safeguards are less resilient to rapid capability diffusion. Too often, then, the Global South enters frontier AI not as a co-designer of risk, but as an early bearer of consequences.
This is why recent moves such as India’s AI Impact Summit deserve closer attention. Held in February 2026 and drawing delegations from more than 100 countries, the summit mattered not only symbolically but also as a sign that the Global South may begin to claim a stronger collective voice in shaping AI governance. If frontier AI risks are increasingly governed through selective access and concentrated readiness, then countries in the South cannot remain merely downstream managers of standards, harms, and timelines set elsewhere. They need stronger political coordination over digital systems that are rapidly becoming the backbone of economic life, public services, and strategic security.
The problem, after all, is not only that frontier AI risks are growing. It is that too many governance frameworks still assume a linear future: labs release models, regulators respond, and stability follows. For much of the Global South, that assumption is no longer credible.
What universal governance language still leaves open
None of this makes global governance principles irrelevant. Shared baselines remain essential. But a baseline is not a blueprint. In frontier AI, a universal vocabulary can still conceal unequal authorship.
This gap is becoming more visible in wider policy discussions. Brookings has argued that much still gets lost in the global AI summit circuit, even as middle powers and Global Majority countries demand a stronger voice in shaping the AI agenda. Chatham House, meanwhile, has warned that AI governance is already constrained by geopolitical fragmentation and asymmetries between public and private power.
The Mythos case makes the problem harder to ignore. AI governance is not only about principles after capability emerges. It is also about who gets early warning, who has access to evaluation, and who can prepare before risks spread. Without those pathways, many countries are left managing downstream effects shaped upstream by labs, vendors, and security coalitions they never joined. For the Global South, the question is whether governance can work under unequal conditions without hardening them further.
In frontier AI, the governance gap no longer lies only in regulation. It increasingly lies in who has time to prepare, who gets to participate early, and who is left responding after the fact. If that gap reflects unequal authorship and unequal readiness as much as missing principles, then the answer cannot be regulation alone. It must also be structural preparation.
Governance must include preparedness.
For much of the Global South, AI governance cannot stop at regulatory drafting or imported best practices. It also requires preparedness: early warning, pathways to evaluation, and practical mechanisms such as structured assessment, intelligence sharing, limited testing partnerships, and sector-focused readiness exercises, even in countries that cannot build frontier labs of their own.
Preparedness also requires prioritization. Finance, energy, telecommunications, public administration, and digital identity systems should already be treated as frontline governance zones. In such sectors, the challenge is not merely adoption. It is whether institutions can identify emerging risk through stress testing, sector-specific readiness exercises, and localized auditing capacity before they are forced into reactive management.
Preparedness cannot remain purely national. As AI and digital systems become core infrastructure, the Global South will need coalition-based readiness. That makes a Digital Non-Aligned Movement—a proposed form of Global South coordination on digital governance and strategic autonomy—less a diplomatic idea than a practical necessity: a way to bargain collectively over preparedness, intelligence sharing, capacity transfer, and the governance terms of systems shaped elsewhere. Brookings has suggested that India’s summit pointed to a more sovereignty-conscious and less Western-centric direction in AI diplomacy, even if much remains unresolved.
If Digital NAM is to matter in the AI era, it cannot remain a slogan. It must evolve into a shared early-warning system for sovereign digital risk across critical sectors. Countries in the South should not only exchange political statements but also pool scenario mapping, compare cross-sector vulnerabilities, and build a shared awareness of how frontier shocks could cascade across banking, energy, telecommunications, or public administration.
But the Global South cannot stop at mapping worst-case scenarios. It also needs a clear horizon. Preparedness should do more than reduce exposure; it should widen strategic choice. Southern states can build regional regulatory sandboxes (controlled environments for testing new technologies under regulatory supervision), open safer pathways for open-source collaboration, share secure datasets, and design institutional arrangements that reduce dependence on external closed systems. The goal is not autarky, but greater agency.
This is not merely a cyber story.
It would be easy to read Claude Mythos as a cybersecurity story and leave it there. That would be too narrow. What is being restricted is not just a model. It is also participation in the first layer of preparedness. As frontier capabilities become more dangerous, governance itself may become increasingly stratified. Access, testing, containment, and readiness may all become more selective, even as the consequences of capability diffusion become more widely distributed.
The deepest divide in AI governance may not be between innovators and regulators. It may be between those who shape the first perimeter of risk and those who inherit its consequences.
Frontier AI is unlikely to slow down simply because governance remains uneven. The systems will continue to spread, interdependencies will deepen, and digital infrastructures will become even more central to economies, public services, and strategic resilience. The question is not whether the Global South can stand outside this transition, because it cannot. The real question is whether it remains connected on terms set elsewhere, managing risks after they arrive, or whether it builds the coalitions, preparedness, and bargaining capacity needed to mitigate those risks before they harden into permanent dependency. In the AI age, preparedness is no longer just a technical matter. It is increasingly a test of who gets to shape the terms of interdependence.