AI Leaders Debate Progress, Safety, and Global Impact at TIME100 Summit

Harry Booth

Is the pursuit of human-like AI unlocking scientific breakthroughs—or diverting attention from the technology’s real-world risks? This central question framed a discussion among AI visionaries and researchers during a panel at the TIME100 Summit in New York City on April 23. The panel featured Demis Hassabis, co-founder and CEO of Google DeepMind; Kate Crawford, an AI scholar, author, and research professor at USC Annenberg; and Glenn Fogel, CEO and president of Booking Holdings. (Booking.com, a subsidiary of Booking Holdings, sponsored the TIME100 Summit.) Their conversation, moderated by TIME executive editor Nikhil Kumar, explored the practical and ethical dimensions of AI’s real-world rollout.

Hassabis shared his vision for AI’s immense potential for good. “I’m very excited over the next decade of more scientific abundances being enabled and facilitated by AI,” he said. In October, Hassabis and his colleague John Jumper were awarded the Nobel Prize in chemistry for their work on AlphaFold, an AI algorithm that can predict the 3D shape of proteins with astonishing accuracy. Not only did AlphaFold solve a 50-year-old problem in biology, but it’s now being used to aid research into new drugs and materials and to advance understanding of the human body. “I think that's just a taste of what's to come,” Hassabis told the audience, saying that he hopes that in a decade AlphaFold will be viewed not as an isolated success but as the beginning of a “new golden era of discovery.”

But the optimism surrounding AI's potential was tempered by stark warnings about its real-world risks. In the U.S., for example, there have been reports of technology that could be used to aid in tracking and detaining immigrants, Crawford said. Both the materials and energy needed to build and maintain data centers also entail environmental costs, she added. The Silicon Valley mantra coined more than a decade ago, “move fast and break things,” may not be appropriate when considering the transformative power of AI, Fogel said. “That may have been okay back then, but now we're playing with much more dangerous things,” he said.

Even as Hassabis championed AI's potential to conquer disease and climate change, he acknowledged its double-edged nature, saying that AI itself ranks among “the huge challenges in the world” humanity must confront. However, the panelists were split over the pursuit of artificial general intelligence (AGI). Hassabis estimated a 50% chance of its arrival within 5 to 10 years, a relatively conservative timeline compared to predictions from peers like OpenAI's Sam Altman or Anthropic's Dario Amodei. Underlying the differing forecasts, however, is a fundamental disagreement over what AGI truly means. While OpenAI focuses on systems outperforming humans at economically valuable tasks, for example, Hassabis defines it more broadly as matching all human capabilities, “from creativity to reasoning.”

That lack of a shared definition for AGI is part of the problem, Crawford argued. “It’s become a marketing term. There is no benchmark for when we reach AGI,” she said. “Frankly, we should be thinking about a different question: how are [we] creating systems that actually benefit everyone.” Fogel later noted, however, that “we live in market economies where the incentives are, unfortunately, tilted much more towards progress on the bottom line than society.”

Amid the debate, the panelists converged on the need for international cooperation, yet acknowledged the hurdles posed by AI's sheer speed and complexity, which often leave governments struggling to understand the issues, let alone regulate them effectively. “We’re at this pivotal moment,” Crawford said, noting AI’s recent rapid progress. Yet, the most recent AI Summit, where world leaders and tech CEOs met in Paris in February, “didn’t have that vision or a way to address real risk,” Crawford said.

While Hassabis advocated for technical solutions to maintain control over increasingly powerful AI systems, such as interpretability—a nascent field of research that seeks to decode the inner workings of AI algorithms—he stressed that fragmented efforts are insufficient against such a pervasive force. “Even things like regulation are not really of much use if it’s only one small part of the world,” Hassabis cautioned. “These technologies are international, they’re digital, they’re going to affect everyone,” he said.

---

The TIME100 Summit convenes leaders from the global TIME100 community to spotlight solutions and encourage action toward a better world. This year’s summit features speakers from a diverse range of sectors, including business, health and science, AI, culture, and more.

Speakers for the 2025 TIME100 Summit include human rights advocate Yulia Navalnaya; Meghan, Duchess of Sussex; comedian Nikki Glaser; climate justice activist Catherine Coleman Flowers; Netflix CEO Ted Sarandos; and many more, plus a performance by Nicole Scherzinger.

The 2025 TIME100 Summit was presented by Booking.com, Circle, Diriyah Company, Prudential Financial, Toyota, Amazon, Absolut, Pfizer, and XPRIZE.