The most consequential security threats of the twenty-first century no longer announce themselves through troop movements or missile tests. They arrive quietly, as algorithms, datasets, and source code that promise improvement but carry the potential for harm. The intersection of artificial intelligence and the life sciences is precisely such a risk. Hailed as a breakthrough for medicine and global health, it has also exposed a core failure of international governance: the world still manages biological threats as though dangerous capability were slow to develop, scarce, and tied to the state.
At its core, this is a quintessential dual-use problem, amplified in scale and speed. With AI, biology is becoming a predictive science rather than a purely experimental one. Analyses that once took years of laboratory work, understanding how proteins behave, modeling how pathogens evolve, finding molecular weak points, can now be performed computationally. This transformation has saved lives and accelerated innovation. But it has also lowered the barriers that once constrained abuse. Capability has become cheaper, more mobile, and harder to trace.
The United Nations apparatus is not blind to biological danger. The Biological Weapons Convention (BWC) remains the centerpiece of the global effort to prevent biological warfare. But the BWC was designed for an environment in which threats were tangible: laboratories, stockpiles, and state programs. It prohibits outcomes, not enablers. Algorithms do not breach treaties. Open-source models are not subject to inspection. Privately owned research platforms fall outside conventional verification logic. As a result, AI-enabled biological capability sits in a regulatory shadow: technically lawful, strategically real, and institutionally ungoverned.
This is not merely a legal gap but a conceptual one. The BWC assumes that danger begins when intent turns malicious. AI undermines that assumption by making capability itself a strategic variable. When the tools to design or optimize biological systems are readily available, intent becomes harder to discern and easier to act upon. Deterrence, which has always rested on attribution and retaliation, begins to weaken.
Other UN-affiliated structures attempt to fill the vacuum, though only partially. UNESCO's ethical guidelines on AI emphasize transparency, human oversight, and fairness, values crucial to social trust but not to biosecurity. They treat AI as a social risk, not a strategic one. Meanwhile, the World Health Organization concentrates on surveillance, preparedness, and response. These measures are essential, but fundamentally reactive. They anticipate harm and seek to reduce its impact, rather than addressing how emerging technologies reshape the risk environment further upstream.
One vivid example of this governance lag is the international community's reaction to advances in computational biology. The success of AI systems in accurately predicting protein structures was rightly declared a transformative achievement. The data and the tools were published freely and integrated into global research infrastructure practically overnight. That openness equals progress seemed self-evident. The question rarely raised was whether openness without guardrails also amounts to exposure. No UN mechanism required a shared assessment of whether such capabilities significantly altered the biological threat environment. Celebration prevailed over caution, because no institution was tasked with exercising caution.
This pattern reveals a deeper structural problem: global governance is organized by sector, but risk now lives at the intersections. Arms control institutions govern weapons. AI regulators address bias and accountability. Health institutions track outbreaks. Dual-use AI in biology belongs to all of them and, at the same time, to none. The result is fragmentation: many actors work on pieces of the problem, but no single forum has the authority to resolve it whole.
Yet even here, opportunity hides within the fragmentation.
One opportunity lies in the growing international acceptance of risk-based governance. Rather than applying identical scrutiny to all AI or all biological research, risk-based approaches ask how specific capabilities alter threat dynamics. Applied well, this would let the international community distinguish routine biomedical uses of AI from high-impact tools that sharply lower the barriers to biological misuse. Such distinctions align with the UN principle of proportionality and avoid the false dilemma between innovation and security.
A second opportunity is norm creation, where the UN's influence has been underappreciated. Norms against chemical and biological weapons shaped behavior long before verification regimes functioned, because they defined what was intolerable. Comparable norms could emerge for responsible AI use in the life sciences: discouraging publication of step-by-step enabling detail, requiring institutional review of potentially high-risk models, and embedding biosecurity in research culture. The question here is not one of legitimacy but of urgency.
A third, still untapped opportunity is to reframe the problem around collective resilience rather than control. Tying AI governance to global health security, pandemic preparedness, early warning, and capacity-building, would shift the conversation from restriction to collective protection. For most states, especially in the Global South, this framing would resonate far more than abstract debates over technological restraint.
The most significant gap, however, is institutional. No UN body holds a clear mandate to govern the intersection of AI, biology, and security as a whole. Accountability is unclear, coordination is ad hoc, and responsibility is dispersed. As AI capabilities continue to evolve, this deficiency will grow more costly, not necessarily because disaster is imminent, but because avoidable risks will remain unaddressed.
AI-enabled biological risk is unlikely to declare itself dramatically. It will accumulate quietly, through eroded safeguards and diffused expertise. Once thresholds are crossed, restoring them will be far harder than preserving them is today. The international community still has a window to adapt its structures, but that window is closing.
The central question is not whether AI must be governed; on that there is little disagreement. The question is whether global institutions can evolve fast enough to govern what AI makes possible, rather than only what past generations feared. In a world where intelligence itself is a strategic asset, governance cannot afford to remain blind to the invisible.