2025. The air in the hall hummed with an unusual kind of anticipation—not the applause-and-lights kind, but the quiet, electric awareness that something significant was unfolding. The Queen Elizabeth Prize for Engineering had brought together six of the world’s leading minds in artificial intelligence: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Fei-Fei Li, Bill Dally, and Jensen Huang. For once, they were not hunched over code or standing at a podium. They were simply there together, reflecting on the worlds they’d already transformed and imagining the ones still to come. Will the AI bubble burst? Are today’s valuations reality or hype? How long before machines reach human-level intelligence?
It is clear that the world needs thinkers who weigh ambition against responsibility and vision against consequence. Innovation without reflection is a machine without a compass. True humanism demands courage—the courage to ask the questions that truly matter: Where are we going? Who do we serve? Are we building a better world, or merely a faster one?
And I write this not only as a speaker at the FT AI Summit in London, where I met some of the very architects whose “aha” moments shaped modern AI—Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Fei-Fei Li, and Jensen Huang—but also as someone who has spent years at the intersection of technology, law, and society, through my PhD research in cyberspace law and the books that I have authored since. I have long believed that scientists have lessons to learn from lawyers, and lawyers from scientists.

Ideas cannot remain confined to coffee chats or conference rooms; they must ripple outward, shaping ethics, policy, and innovation alike. AI is a matter not of speed, nor of scale, nor of raw intelligence alone; it is a matter of purpose. And securing the primacy of humanity in this technological revolution requires dialogue, not isolation, and reflection rather than mere reaction. The future will be determined not by those who build the fastest machines but by those who have the courage to ask—and to answer—the toughest of questions: What is intelligence? And what should it become?
It is this relentless pursuit of meaning—of purpose aligned with responsibility—that gives rise to the defining “Aha” moments of AI pioneers, moments that light up the path of discovery, the frontiers of knowledge, and the ethical responsibilities attending transformative innovation.
For Yoshua Bengio, there are two defining moments. The first traces back to his graduate-school days, when he came across Geoffrey Hinton’s pioneering work. Reading those early papers evoked a feeling of wonder: could there be fundamental principles—simple, elegant rules, almost like the laws of physics—that describe human intelligence and guide the construction of truly intelligent machines? That curiosity set him on an exploratory path in machine learning.
The second turning point came much later, roughly two and a half years ago, in the wake of ChatGPT’s release. Confronted with machines that could understand language and pursue goals beyond our direct control, he faced a stark realization: what if these systems one day surpassed human intelligence, or fell into the wrong hands? The moment was a wake-up call; he pivoted his research agenda entirely, dedicating himself to the profound ethical and societal challenges of advanced AI. Taken together, these vignettes capture both the exhilaration of discovery and the weight of responsibility that define Bengio’s journey: visionary curiosity fused with urgent pragmatism.
Bill Dally remembers two moments in his trajectory. At Stanford in the 1990s, he hit the “memory wall”: the costly and widening gap between how fast processors could compute and how fast they could fetch data. The problem spurred him to reorganize computation into kernels loosely connected by streams. This was stream processing, the forerunner of today’s GPU computing, first championed as a way to bring scientific computing to hardware built for graphics.
It wasn’t until a casual breakfast in 2010 with his colleague Andrew Ng that the second epiphany struck. Hearing how Ng was running neural networks on 16,000 CPUs to recognize cats online, Dally realized that GPUs could revolutionize deep learning.
Geoffrey Hinton reflects on one of his key breakthroughs from the 1980s: applying backpropagation to predict the next word in a sequence. This small language model, trained on just 100 examples, showed that machines could learn meaningful features of language, making it one of the earliest precursors of today’s large language models. The principles were sound, but it took 40 years for computing power, data, and infrastructure to catch up.
Jensen Huang recalls a parallel journey in hardware: as a first-generation chip designer, he observed back in 2010 that deep-learning frameworks bore striking parallels to structured chip design. That insight, which led to scaling Nvidia’s GPUs across many processors and data centers, provided the raw computational muscle for modern AI applications, including LLMs. For Huang, that was the spark of deep learning’s promise; the rest was engineering at scale: optimize, extrapolate, and push the limits of what these systems are capable of.
Fei-Fei Li remembers two key turning points in her career. The first came around 2006–2007, as she made the transition from graduate student to assistant professor working on visual recognition for machines. She realized that algorithms weren’t the bottleneck—data was. Her second defining moment came in 2018, when she was named Google Cloud’s first AI chief scientist. As the technology began to transform lives—from healthcare breakthroughs to financial services—she witnessed firsthand both its promise and its responsibility. Inspired by milestones such as AlphaGo, Fei-Fei reached one inescapable conclusion: this powerful technology needed a guiding framework to ensure it truly served humanity. That conviction led her full circle back to Stanford, where she co-founded the Human-Centered AI Institute, driven by one simple yet profound belief: technology should always place people, and their capacity for kindness and ethical judgment, at its heart.
Yann LeCun got interested in artificial intelligence as an undergraduate because the idea that machines could learn without being programmed appealed to him. He soon came to see intelligence as self-organizing, and that curiosity led him to Geoffrey Hinton’s work in the early 1980s. Their shared obsession with training multi-layer neural networks—then thought impossible—set the stage for decades of breakthroughs in deep learning. Initially, Yann championed supervised learning, teaching machines through labeled examples; in later years, he has come to favor self-supervised and unsupervised methods, which allow AI systems to find patterns on their own. Today, this is the foundational approach behind large language models, although challenges remain in applying such methods to harder data, such as video or sensor input. For Yann, this rapid rise is not only a technical revolution; it is reshaping business, society, and geopolitics. Yet he reminded us that the technology is still evolving, and that its full potential will only be realized through breakthroughs still to come.
When asked if AI was in a bubble, Jensen Huang argued that this is not like the dot-com era: today, almost every GPU is fully utilized powering real applications, not speculative hype. Two factors drive AI’s exponential growth: rapidly rising demand for computation, and the explosive adoption of models across diverse fields. Unlike traditional software, Jensen noted, AI generates intelligence in real time and requires massive infrastructure to do so; he likened it to factories producing value on a scale never seen before.
Fei-Fei Li added that AI is still young—less than 70 years old compared to centuries of physics—with huge frontiers ahead. Current models excel at language but lag in spatial reasoning and broader cognitive tasks, leaving huge opportunities for innovation. Bill emphasized that AI should enhance, not replace, people—supercharging our unique capabilities while doing things at scales impossible for humans.
Estimates of timing vary: some capabilities already sit well beyond human-level performance in narrow domains, such as recognizing thousands of objects or translating between many languages, but timescales for full AGI remain uncertain. On this gradual and uneven trajectory, the experts suggest, systems capable of rivaling human reasoning could emerge within the next 5 to 20 years. On one thing there is consensus: AI is not a bubble in the classical sense.
We’re standing at the beginning of a huge buildout in intelligence, with LLMs and other models just one piece of a greater, living ecosystem. The markets may, and probably will, ebb and flow, but sustained technological growth, expanding applications, and ongoing infrastructure development point toward a transformative trajectory rather than an abrupt collapse.

