Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
This Moonshot podcast episode dives deep into the transformative discussions from Davos 2026, where AI dominated the global conversation like never before. The hosts, Peter Diamandis, Dave Blundin, Salim Ismail, and Dr. Alexander Wissner-Gross, share their firsthand experiences from the World Economic Forum, revealing how artificial intelligence has become the central focus for world leaders, billionaires, and tech executives. (03:00)
Peter is the founder and executive chairman of the XPRIZE Foundation, which leads the world in designing and operating large-scale incentive competitions. He is also the co-founder of Singularity University and has founded over 20 companies in the areas of longevity, space, venture capital, and education.
Dave is the founder and General Partner of Link Ventures, focusing on early-stage technology investments. He's a repeat entrepreneur with extensive experience in scaling technology companies and has been a regular participant at Davos for multiple years.
Salim is the founder of OpenExO and a renowned expert on exponential organizations. He's the bestselling author of "Exponential Organizations" and has been at the forefront of understanding how technology transforms business models at unprecedented scale.
Dr. Wissner-Gross is a computer scientist and founder of Reified. He holds advanced degrees from MIT and Harvard, and his work focuses on the intersection of artificial intelligence, physics, and complex systems with applications to technology and finance.
The most striking revelation from Davos was the convergence of AI leaders on AGI timelines. Both Dario Amodei of Anthropic and Demis Hassabis of DeepMind discussed artificial general intelligence arriving within 5-10 years, with that range representing their outer-bound estimates. (14:27) What's particularly significant is that these leaders, who previously offered different timeline predictions, now essentially agree that the difference between 1-2 years and 8-10 years doesn't matter from a policy-preparation standpoint. The key insight is that global leaders need to start preparing now for massive disruption that's guaranteed to arrive this decade.
Dario Amodei offered a staggering economic framework: global labor represents roughly $50 trillion annually, and if AI captures even 10% of that market, that amounts to $5 trillion in annual revenue for the AI industry. (07:43) Jensen Huang reinforced this with his observation about the trillions of dollars in infrastructure buildout happening now. This isn't just another tech boom; it's a fundamental restructuring of the global economy in which intelligence becomes the primary commodity, potentially producing the first $100 trillion company valuations by 2030.
The podcast revealed a fascinating tension around energy solutions for AI's massive computational needs. While traditional voices advocate natural gas as the only viable option for such energy-dense applications, forward-thinking leaders like Elon Musk are pushing for space-based solar. (40:21) The key insight is that whoever solves the energy equation for AI will control the future, whether through SpaceX's solar-powered AI satellites or massive terrestrial renewable buildouts. Energy represents both the biggest constraint and the biggest opportunity in the AI race.
A paradigm shift is emerging where AI agents will conduct economic transactions primarily through cryptocurrency and stablecoins rather than traditional banking systems. (51:02) CZ from Binance and Jeremy Allaire from Circle both emphasized that when billions of AI agents begin conducting continuous economic activity, they need a financial system that operates at the speed of internet rails, not the slow, bureaucratic traditional banking system. This isn't just about payments—it's about creating an entirely new economic infrastructure designed for autonomous agents.
Anthropic's release of Claude's co-authored constitution marks a historic moment in AI development: the first instance of recursive self-improvement in ethics. (69:02) Rather than simply following human-written rules, Claude participated in drafting its own ethical framework, moving toward what Anthropic calls "reflective equilibrium." This development suggests we're witnessing the early stages of AI personhood and self-determination, which could fundamentally change how we think about AI rights, responsibilities, and governance.