
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full episode for complete context.
In this dynamic episode of BG2, Brad Gerstner sits down with Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to unpack the restructured AI partnership that's reshaping the technology landscape. The conversation dives deep into the new deal terms, revealing how Microsoft's cumulative investment of approximately $13-14 billion converted into roughly 27% ownership of OpenAI, alongside the creation of a nonprofit foundation holding equity valued at about $130 billion. (02:28)
CEO of Microsoft since 2014, Nadella has transformed the company into a cloud-first, AI-driven powerhouse with a market capitalization exceeding $3 trillion. Under his leadership, Microsoft has become one of the world's largest cloud providers and a dominant force in enterprise software, making strategic AI investments including the pivotal OpenAI partnership that began in 2019.
CEO and co-founder of OpenAI, Altman has led the company from its nonprofit origins to becoming one of the fastest-growing companies in history with $13+ billion in annual revenue. He previously served as president of Y Combinator and has been instrumental in democratizing AI through products like ChatGPT, which sparked the current AI revolution.
Founder and CEO of Altimeter Capital, a technology-focused investment firm, and host of the BG2 podcast. Gerstner is known for his deep analysis of technology trends and his ability to facilitate insightful conversations with industry leaders about the future of technology and its economic impact.
The Microsoft-OpenAI partnership demonstrates how strategic alliances can unlock massive value through complementary strengths. Microsoft provides the compute infrastructure and global distribution platform while OpenAI delivers cutting-edge AI models. (04:13) This partnership has enabled both companies to achieve scale and capabilities neither could have reached independently. The exclusive API distribution on Azure through 2032 and revenue-sharing agreements create aligned incentives that benefit both parties while maintaining competitive positioning in the market.
Both companies emphasize that compute scarcity is actually accelerating innovation rather than hindering it. Sam Altman notes that if they had 10x more compute, revenue would increase substantially but perhaps not proportionally. (15:13) This constraint forces teams to optimize algorithms, improve inference efficiency, and make strategic decisions about resource allocation. Satya highlights how software improvements often compound into larger gains than hardware advances alone, creating a competitive advantage for teams that can maximize efficiency.
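Altman's sublinear-returns point and Nadella's software-efficiency point can be captured in a toy model. The power-law form, the exponent, and the 3x software multiplier below are illustrative assumptions for the sketch, not figures from the episode:

```python
# Toy model of the diminishing-returns point: revenue grows with compute,
# but sublinearly, while software optimizations multiply what a fixed
# hardware fleet can effectively serve. All constants are illustrative.

def effective_compute(raw_flops: float, efficiency_gain: float) -> float:
    """Software improvements (better kernels, batching, caching)
    multiply the useful output of the same hardware."""
    return raw_flops * efficiency_gain

def revenue(compute: float, alpha: float = 0.6, k: float = 1.0) -> float:
    """Assume revenue scales as a power law in compute with alpha < 1,
    i.e. 10x the compute yields well under 10x the revenue."""
    return k * compute ** alpha

base = revenue(1.0)
print(f"10x hardware alone:   {revenue(10.0) / base:.1f}x revenue")
print(f"10x hardware + 3x sw: {revenue(effective_compute(10.0, 3.0)) / base:.1f}x revenue")
```

Under these made-up constants, 10x hardware alone yields about 4x revenue, while pairing it with a 3x software efficiency gain pushes that to roughly 7.7x, which is the shape of the argument: efficiency compounds on top of raw capacity.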
The traditional SaaS model of tightly coupled data, logic, and UI layers is being replaced by an agent-centric architecture where AI handles business logic. (53:58) Nadella explains that successful SaaS companies will need to transition from "high ARPU, low usage" to "low ARPU, high usage" models that generate rich data for AI grounding. This fundamental shift means companies must rethink their entire technology stack and business model to remain competitive in the AI era.
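As a rough sketch of what "AI handles business logic" can mean in practice: the SaaS backend shrinks to data-access tools, and a model-driven loop decides which tool to call. Everything below (the invoice-chasing example, the tool names, the call_model stub) is hypothetical and does not describe any Microsoft or OpenAI product:

```python
# Minimal sketch of the agent-centric pattern: the data layer is exposed
# as plain tools, and the business logic SaaS apps used to hard-code is
# delegated to a model-driven agent loop.
from typing import Callable

def lookup_invoice(customer_id: str) -> dict:
    """Data-layer access: the backend reduces to CRUD over records."""
    return {"customer_id": customer_id, "amount_due": 420.0, "days_late": 12}

def send_reminder(customer_id: str, message: str) -> None:
    print(f"reminder to {customer_id}: {message}")

TOOLS: dict[str, Callable] = {"lookup_invoice": lookup_invoice,
                              "send_reminder": send_reminder}

def call_model(task: str, context: dict) -> tuple[str, dict]:
    """Stub standing in for an LLM choosing the next tool call.
    A real agent would pass tool schemas to a model API here."""
    if "invoice" not in context:
        return "lookup_invoice", {"customer_id": "c-42"}
    msg = f"Invoice of {context['invoice']['amount_due']:.2f} is overdue."
    return "send_reminder", {"customer_id": "c-42", "message": msg}

def run_agent(task: str, max_steps: int = 4) -> None:
    context: dict = {}
    for _ in range(max_steps):
        tool, args = call_model(task, context)
        result = TOOLS[tool](**args)
        if tool == "send_reminder":
            return  # terminal action for this toy task
        context["invoice"] = result

run_agent("chase overdue invoices")
```

The design point is that the decision of *what to do next* lives in the model loop rather than in hand-written application code; the UI and data layers become thin, swappable surfaces around it.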
The true productivity gains from AI come not from simply adding AI tools to existing processes, but from fundamentally rethinking how work gets done. (1:07:47) Nadella shares examples of Microsoft employees building agent-powered solutions to automate complex operational tasks that would have required massive headcount increases. The key insight is that teams must "unlearn and relearn" their workflows to maximize AI leverage, similar to how businesses adapted to Excel and email in previous technological shifts.
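One way to make the "unlearn and relearn" point concrete is to contrast bolting AI onto an unchanged process with redesigning the process around an agent. The ticket-triage example below is invented for illustration, with stub functions standing in for model calls:

```python
# Hypothetical contrast: "bolt-on AI" keeps the old queue and drafts a
# reply for every ticket, so humans still touch everything; the rethought
# workflow lets an agent resolve routine tickets end-to-end and routes
# only exceptions to people, so headcount no longer scales with volume.

def classify(ticket: str) -> str:
    return "routine" if "password" in ticket else "exception"

def auto_resolve(ticket: str) -> str:
    return f"resolved automatically: {ticket}"

def old_workflow(tickets: list[str]) -> list[str]:
    # AI assists, but every ticket still lands on a human's desk.
    return [f"draft reply for human review: {t}" for t in tickets]

def rethought_workflow(tickets: list[str]) -> list[str]:
    # Humans see only the exceptions.
    return [auto_resolve(t) if classify(t) == "routine"
            else f"escalate to human: {t}" for t in tickets]

tickets = ["password reset", "refund dispute", "password locked out"]
print(old_workflow(tickets))        # three items, all waiting on a person
print(rethought_workflow(tickets))  # only the refund dispute is escalated
```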
As Nadella emphasizes, "nothing is a commodity at scale" - the hyperscale cloud providers who can achieve maximum utilization and efficiency in their token factories will maintain significant competitive advantages. (48:16) This scale advantage extends beyond just having more compute to include software optimizations, supply chain efficiencies, and the ability to run diverse, fungible workloads across training, inference, and various AI pipeline components. Companies that can achieve this scale will be positioned to offer better pricing and performance to customers.
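The utilization argument can be illustrated with a toy calculation: a fleet dedicated to spiky inference traffic idles off-peak, while a fleet that treats workloads as fungible backfills idle GPUs with training or batch jobs. The hourly demand numbers and the greedy backfill policy below are assumptions made up for the sketch:

```python
# Toy sketch of why fungibility raises utilization: an inference-only
# fleet sits idle between traffic peaks, while a fungible fleet hands
# every idle GPU-hour to training or batch-eval work.

FLEET_GPUS = 1000
hourly_inference_demand = [950, 400, 150, 700, 1000, 300]

# GPUs serving user traffic each hour, capped at fleet size.
served = [min(d, FLEET_GPUS) for d in hourly_inference_demand]
# Idle GPUs each hour, reassigned to training or batch-eval jobs.
backfill = [FLEET_GPUS - s for s in served]

hours = len(served)
inference_only = sum(served) / (FLEET_GPUS * hours)
fungible = sum(s + b for s, b in zip(served, backfill)) / (FLEET_GPUS * hours)

print(f"inference-only fleet utilization: {inference_only:.0%}")  # ~58%
print(f"fungible fleet utilization:       {fungible:.0%}")        # 100%
```

Real schedulers face preemption costs, data locality, and job priorities that this ignores, but the gap between the two numbers is the scale advantage the paragraph describes: the operator who keeps every GPU-hour busy can undercut one who cannot.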