
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this compelling conversation, Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, shares his views on the future of artificial intelligence with Peter Diamandis, Dave Blundin, and Alex Wissner-Gross. (02:45) Suleyman discusses how we're transitioning from a world of operating systems and apps to one dominated by AI agents and companions. The episode explores Microsoft's ambitious mission to build the world's safest superintelligence, the coming challenge of AI containment versus alignment, and predictions for when AI will pass his proposed "modern Turing test" - turning $100,000 into $1 million through autonomous economic activity. (22:18)
CEO of Microsoft AI and co-founder of DeepMind, Suleyman spent over a decade at the forefront of AI development before the field became mainstream. He previously founded Inflection AI and led groundbreaking work on conversational AI, including early contributions to Google's LaMDA model. Now leading Microsoft's 10,000-person AI division, he's responsible for building the company's frontier superintelligence capabilities and the Copilot product suite.
Peter Diamandis is an entrepreneur and futurist focused on exponential technologies, and the founder of multiple companies including the XPRIZE Foundation. He is known for his expertise in identifying and analyzing the technology metatrends that will transform industries over the coming decade.
Dave Blundin is the founder and General Partner of Link Ventures, an experienced entrepreneur who has previously built and sold companies, with deep expertise in technology investing and startup ecosystem dynamics.
Alex Wissner-Gross is a computer scientist and founder of Reified, with an extensive background in AI research and development. He has long-standing connections in the AI community, including relationships dating back to early AI safety conferences.
Suleyman predicts that by 2027, AI agents will successfully turn $100,000 into $1 million, passing what he calls the "modern Turing test" for economic capability. (22:18) Unlike academic benchmarks, this test measures real-world economic performance - the ability to 10x an investment through autonomous decision-making. It represents a fundamental shift from recognition and generation capabilities to true agentic action in complex economic environments. The implications are profound: when AI can reliably create economic value at this scale, it signals the arrival of artificial general intelligence with practical economic impact.
Suleyman emphasizes a critical distinction between containment and alignment, arguing we must solve containment first. (61:05) Containment involves formally limiting AI agency and putting boundaries around capabilities, while alignment focuses on ensuring AI shares human values. He warns that without proper containment mechanisms - including technical safety measures, global regulations, and hardware supply chain controls - even aligned AI could pose existential risks if misused by bad actors. This will require unprecedented global cooperation and surveillance capabilities.
The future of AI for science lies in creating autonomous systems that can generate hypotheses and validate them in real-world experiments without human intervention. (83:19) While AI excels at hypothesis generation, the limiting factor remains physical validation. Companies like Lila are building "dark cycle" laboratories where AI runs experiments 24/7, mining nature for data. This represents a shift from AI as a tool to AI as an autonomous explorer, potentially accelerating scientific discovery by orders of magnitude across medicine, materials science, and energy.
Suleyman notes that inference costs have dropped by 100x in just two years, with some estimates showing 1000x improvements for certain model classes. (27:00) This dramatic cost reduction, combined with open-source model releases, has fundamentally changed the AI landscape. What once required billions in capital can now be accessed at near-zero marginal cost, creating both opportunities for democratization and challenges for startups trying to maintain competitive advantages. This deflationary trend in intelligence-as-a-service will have profound economic implications.
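To put those figures in perspective, a back-of-the-envelope sketch can convert the cumulative drops cited in the episode into an implied per-year decline rate, assuming (purely for illustration) a smooth exponential trend:

```python
# Back-of-the-envelope on inference-cost deflation. The 100x-over-2-years
# and 1000x figures come from the episode; the smooth exponential decline
# is an illustrative assumption, not a claim about actual pricing curves.

def annual_decline_factor(total_factor: float, years: float) -> float:
    """Per-year cost-reduction factor implied by a total drop over `years`."""
    return total_factor ** (1 / years)

# 100x cheaper over two years implies roughly 10x cheaper each year.
print(annual_decline_factor(100, 2))            # 10.0

# 1000x over two years (certain model classes) implies ~31.6x per year.
print(round(annual_decline_factor(1000, 2), 1))  # 31.6
```

Even the conservative figure implies costs falling an order of magnitude per year, which is the deflationary dynamic the conversation keeps returning to.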
The moment AI systems can improve themselves without human intervention marks a potential inflection point toward an uncontrollable intelligence explosion. (56:58) Currently, human software engineers remain in the loop for model improvement, but closing this loop through AI-generated training data, automated evaluation, and self-directed optimization could lead to rapid capability gains. Suleyman warns that this threshold moment requires maximum international cooperation on safety, as unbounded recursive improvement could quickly exceed human ability to maintain control or oversight.