Global Insights on AI Safety: A Citizen’s Perspective on the Landmark International Report

As I prepare for the inaugural AI Action Summit 2025 in Paris, I reflect on my recent in-depth review of the groundbreaking International AI Safety Report.[1]

This document, set to be presented to global leaders at the Summit, is the result of collaboration among 30 nations, the OECD, UN, and EU, bringing together insights from 100 distinguished experts, including Nobel laureates, Turing Award winners, and other leading figures in artificial intelligence.

The report’s development began at the 2023 AI Safety Summit at Bletchley Park, where a framework for understanding AI safety was established. Its goal is to provide a rigorous scientific foundation for AI policy discussions. The interim version, unveiled at the AI Seoul Summit in May 2024, marked the first major international effort to address the risks of advanced AI. The establishment of the world’s first national AI Safety Institutes around the time of the Bletchley Summit reinforced this initiative, culminating in the formation of the 10-member International Network of AI Safety Institutes. This network plays a crucial role in ensuring that governments take responsibility for testing and evaluating new AI models, particularly as AI safety increasingly intersects with national security concerns.

Rather than offering direct policy recommendations, the International AI Safety Report consolidates scientific findings on the safety of general-purpose AI. Its overarching goal is to foster a shared global understanding of AI risks and explore potential mitigation strategies. The rapid advance of general-purpose AI – systems capable of performing diverse tasks across consumer and business sectors – makes such a consolidated scientific view necessary.

The report examines three core questions:

  • What are the capabilities of general-purpose AI?
  • What risks does it pose?
  • What strategies can mitigate these risks?

Its key objectives are as follows:

  • providing scientific insights to support evidence-based policymaking without advocating specific policy stances;
  • encouraging informed discussions on the uncertainties surrounding general-purpose AI and its potential societal impacts; and
  • contributing to a globally recognized scientific framework for understanding AI safety.

Commissioned by the UK as host of the AI Safety Summit, the report was developed under the leadership of Yoshua Bengio, supported by a secretariat within the UK AI Safety Institute. The UK, in collaboration with participating nations, has committed to maintaining this secretariat until a permanent international institution is established.

Set to be formally presented at the AI Action Summit in Paris in February 2025, the report follows a preliminary version released in May 2024, which sparked significant discussions at the AI Seoul Summit.

The Rapid Advancement of AI

The report underscores the extraordinary pace at which AI is evolving. As noted by Daniel Privitera, Lead Writer of the report, AI models have progressed remarkably within just 18 months – advancing from “random guessing” performance to “PhD-level expertise” in programming, scientific reasoning, and other complex domains. This rapid evolution is transforming industries such as software development, scientific research, and education.

A key driver of this advancement is inference scaling – a technique that enhances AI performance by allocating increasing computational resources at runtime. While this approach boosts AI capabilities, it raises concerns about sustainability, energy consumption, and accessibility, as not all nations or organizations can afford such computationally expensive methods.
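To make the idea concrete, below is a minimal sketch of one common form of inference scaling: sampling a model several times at runtime and taking a majority vote (sometimes called best-of-n or self-consistency). The answer_once stub is hypothetical, standing in for a real language-model call; the report itself does not prescribe any particular implementation.

    import random
    from collections import Counter

    def answer_once(question: str) -> str:
        # Hypothetical stand-in for a single stochastic model call.
        # A real system would query a language model here; this toy
        # stub returns the correct answer 75% of the time.
        return random.choice(["42", "42", "42", "41"])

    def answer_with_inference_scaling(question: str, n_samples: int) -> str:
        # Spend more compute at runtime: sample the model n_samples times
        # and return the most common answer. A larger n_samples costs more
        # but typically yields a more reliable result.
        votes = Counter(answer_once(question) for _ in range(n_samples))
        return votes.most_common(1)[0][0]

    print(answer_with_inference_scaling("What is 6 x 7?", n_samples=32))

Even in this toy setting, the trade-off the report flags is visible: reliability improves as n_samples grows, but so does the compute bill, which is precisely why energy use and unequal access become policy questions.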

Existing and Emerging AI Risks

The report highlights both existing and emerging risks of AI, including:

  • Economic Disruption: Large-scale job displacement and widening inequality due to AI automation.
  • Cybersecurity Threats: AI-powered hacking techniques that increase vulnerabilities.
  • Biological and Geopolitical Risks: AI-assisted development of biological threats and its potential to destabilize global security.
  • Loss of Control: Concerns that AI systems may become too autonomous or powerful for society to regulate effectively.

These risks underscore the urgency of proactive governance, risk assessment, and international cooperation to prevent unintended consequences.

The ‘Evidence Dilemma’ in AI Policymaking

Policymakers must regulate AI while facing an inherent evidence dilemma – AI capabilities are advancing faster than our ability to fully understand their risks and benefits. With limited long-term data available, they must strike a delicate balance between innovation and caution. Overregulation could stifle progress, while underregulation could expose society to unforeseen dangers.

This brings to mind Lewis Carroll’s famous line from Alice in Wonderland: “Sentence first, verdict afterwards.” The global rush to adopt AI technologies often mirrors this sentiment – nations prioritize AI dominance without fully comprehending its long-term societal, geopolitical, and ethical implications. Policymakers must pause and engage in thoughtful deliberation, as their decisions will shape the world future generations inherit.

The Democratization and Localization of AI

AI’s evolution is not just about technological prowess – it is about inclusivity and accessibility. The rise of capable models from organizations such as OpenAI and DeepSeek, together with indigenous efforts by emerging players in India, reflects a shift toward democratizing AI. Locally developed, lower-cost models could bridge the digital divide, empowering small businesses, educators, and individuals in developing regions.

However, accessibility alone is insufficient. AI must not be reduced to a tool for geopolitical rivalry. Instead, it should be harnessed to address global challenges, such as enhancing education, workforce training, democratic institutions, and human rights protections. The key question is not which nation will dominate AI but rather how AI can be used to elevate humanity as a whole.

The Role of Governments in AI’s Future

Governments must play a central role in shaping AI’s trajectory responsibly. Their responsibilities include:

  1. Enhancing Public Services: Leveraging AI to improve governance, healthcare, education, and infrastructure.
  2. Investing in Ethical Innovation: Funding research that prioritizes societal well-being.
  3. Regulating AI for Public Welfare: Ensuring safety, privacy, and fairness in AI deployment.
  4. Safeguarding National and Global Interests: Protecting citizens from predatory corporate and geopolitical practices.

Achieving these goals requires governments to deepen their understanding of AI, engage top experts in policymaking, and foster international collaboration. The objective is to create an environment where AI innovation thrives while upholding ethical and societal safeguards.

The Urgent Need for AI Literacy

A significant challenge in AI governance is the lack of AI literacy among the public. Just as we teach people how to drive cars, we must educate them about AI’s societal impact. Without foundational AI knowledge, individuals remain vulnerable to manipulation, misinformation, and loss of autonomy.

AI education must be prioritized in schools, universities, and public discourse. A well-informed public can demand transparency, accountability, and responsible AI practices from governments and corporations alike.

Conclusion: Sustaining Momentum Beyond Paris

While the Summit Series has laid a strong foundation, sustaining its progress requires addressing structural and institutional challenges. The Summit must carve out a clear scope that complements, rather than competes with, other AI governance initiatives. Advanced AI trust and safety is a natural focus, as no other international forum exclusively addresses this domain.

Returning from Paris with a clear roadmap for future summits and defined thematic priorities could inspire renewed optimism about international AI cooperation. The stakes are high, but with collaboration, ethical leadership, and public engagement, AI can be harnessed as a force for global good. The choices we make today will determine whether AI uplifts humanity or exacerbates existing divisions. Let us choose wisely – for the sake of future generations.


[1] International AI Safety Report 2025 (independent report), available at https://www.gov.uk/government/publications/international-ai-safety-report-2025

Dr Cristina Vanberghen is a Senior Expert at the European Commission and is affiliated with the EUI, WICCI’s India-EU Business Council, and the Indian Society of Artificial Intelligence and Law.