In today's rapidly evolving technological landscape, General Purpose Technologies (GPTs) such as artificial intelligence (AI), synthetic biology, and quantum computing are transforming every aspect of human life. Nick Bostrom predicts that Artificial General Intelligence (AGI) could be achieved by mid-century, while others, like Ray Kurzweil, believe it may happen by the end of this decade. In parallel, genetic engineering has seen tremendous advances, with scientists such as biologist Paul Knoepfler suggesting that human genetic engineering could be a reality within 10 to 20 years. This raises critical questions: Is it ethical to pursue these technologies? Can we manage their development responsibly? What strategy should the EU adopt to navigate these challenges?
In a world being transformed by technologies in unpredictable ways, it is time to demand that governments invest significant resources in understanding and governing them. Consider that the international Biological Weapons Convention operates on an annual budget of just $2.1 million and a staff of four, smaller than that of a typical cafeteria. Whether or not we change our relationship with the future and allocate state budgets more rationally, GPTs are expected to offer unprecedented opportunities for progress in all areas of life, while at the same time introducing significant risks.
General Purpose Technologies
GPTs are characterized by their wide applicability and potential to cause significant transformations across economic and social structures. Historical examples include the steam engine, electricity, the printing press, and the internet. These technologies have not only driven productivity and efficiency but have also created new markets and industries, fundamentally reshaping the global economy. Their defining feature is transformative power that extends beyond a single sector to the entire economy, globally.
Today, technologies such as AI and synthetic biology hold similar transformative potential. AI's ability to perform tasks that require human intelligence can enhance productivity but also poses risks such as job displacement. Synthetic biology, with advances like CRISPR, is revolutionizing genetic engineering, but its ability to edit DNA precisely, at lower cost and higher speed, also raises ethical and safety concerns. Although DNA synthesis was once confined to costly laboratories, the price of a DNA synthesizer is falling rapidly, currently around €35,000, and it won't be long before biohackers or amateur scientists can synthesize DNA and experiment with creating new viruses, potentially leading to new pandemics.
The rapid advances in several GPTs therefore bring both unprecedented opportunities and significant risks. Regulation alone, although the most preferable (and probably the cheapest) tool, is not sufficient to address these emerging and uncertain challenges. So what approach is required for an effective governance strategy? An integrated framework for AI and the other GPTs should include the following aspects:
- Integrated regulatory frameworks. Developing integrated and dynamic regulatory frameworks is crucial. The EU AI Act is an example of an attempt to categorise AI systems according to their risk levels and regulate them accordingly. Nevertheless, the dynamic nature of AI development requires that such regulations have a global reach and are constantly updated and improved.
- Global cooperation and standards[1]. The international nature of technological development makes cooperation on global standards and regulations essential. The Montreal Protocol, an international treaty designed to protect the ozone layer by phasing out the production of ozone-depleting substances, is a successful example of global cooperation on a technology-related issue; however, that treaty had a much narrower focus.
- Public participation and education. Public understanding of and participation in technology issues are vital for democratic governance. It is not enough to have TED talks, books, and conferences in Davos that engage only a small part of society. Public consultations on AI regulation, like those initiated by the EU, allow citizens to voice their concerns and help shape policies, ensuring that technological developments align with societal values and ethical standards. Active engagement of the broader society should be organically integrated into any technology governance process.
- Innovation in governance mechanisms. Adaptive governance approaches, such as flexible policymaking and regulatory sandboxes, allow new technologies and policies to be tested in controlled environments. Singapore's approach[2] to regulating autonomous vehicles through incremental testing is a good practice for other countries. In parallel, the corporate world needs new accountability models that promote ethical values and safety. We need to experiment with models that combine profit with social purpose; an interesting example is the B Corps[3] initiative, which supports social purpose and positive change through a legally binding process.
- Ethical and safety standards. High ethical and safety standards are of paramount importance for the development and deployment of new technologies. The legacy of the Asilomar conference (1975) on recombinant DNA, which established guidelines for biotechnology research, underlines the importance of setting precautionary safety standards before a technology is widely adopted. Establishing ethical standards for industry sectors at a very early stage is essential. In this context, both the private and the public sector should invest suitable resources in this direction: at the moment there are only a few hundred AI safety engineers compared to the 30,000-40,000 AI engineers, a disparity that limits our ability to address potential emerging challenges effectively[4].
Anticipatory Governance
Developing the expertise and tools to assess and manage the risks associated with new technologies is essential. The creation of organisations such as the International Atomic Energy Agency, which oversees nuclear technology, could be mirrored in other technological areas to monitor developments and enforce safety protocols. Nevertheless, as the landscape becomes more ramplex (rapidly changing and complex), it becomes urgent to establish a more integrated framework for anticipatory governance.
This new polycrisis[5] reality introduces strategic risks and “wicked” systemic challenges—issues that are complex, interconnected, and difficult to solve, such as climate change, automation, artificial intelligence, emerging diseases, and social pathologies. Traditionally, governments have been adept at addressing static social issues within isolated “silos.” However, the dynamic and intertwined nature of contemporary challenges often overwhelms conventional planning and governance approaches, necessitating a paradigm shift towards more proactive and integrated solutions. This approach involves foresight to identify emerging issues, agility to respond swiftly to new information, and adaptability to evolve in response to changing circumstances.
In order to establish anticipatory governance at the national or supranational level, governments need to develop several key abilities. These include fostering a culture of anticipating change within their ranks. Such a foresight culture would allow them to spot early signs of emerging trends, engage actively with citizens, think in terms of interconnected systems (systems thinking), and experiment with new approaches.
Today, as AI, synthetic biology, and other new general-purpose technologies continue to reshape our world, the need for strong, adaptive governance frameworks is clear. By learning from past experiences and anticipating future challenges, policymakers, scientists, and society can work together to ensure that the benefits of new technologies are maximised for all.
[1] Christophilopoulos E., Modern Diplomacy, https://moderndiplomacy.eu/2023/11/14/what-can-the-current-eu-ai-approach-do-to-overcome-the-challenges-at-its-core/
[2] Si Ying Tan & Araz Taeihagh, Adaptive and experimental governance in the implementation of autonomous vehicles: The case of Singapore, https://www.ippapublicpolicy.org/file/paper/5cea683b9a45b.pdf
[3] https://www.bcorporation.net/en-us/
[4] Suleyman, M. (2023). The Coming Wave. Crown.
[5] European Commission, Directorate-General for Research and Innovation, Dixson-Declève, S., Dunlop, K., Renda, A. et al., Research and innovation to thrive in the poly-crisis age, Publications Office of the European Union, 2023, https://data.europa.eu/doi/10.2777/92915