
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
This episode explores the massive infrastructure build-out powering the AI revolution with Dylan Patel, founder of SemiAnalysis. Dylan provides an unmatched, granular view of the AI ecosystem, from semiconductor supply chains to data center construction tracked through satellite imagery. (05:23) The conversation covers three core areas: the strategic dynamics between major players like OpenAI, NVIDIA, and Oracle; the evolution from pre-training to reinforcement learning and reasoning; and the economic realities of building AI at scale.
Patrick is the CEO of Positive Sum and host of Invest Like The Best. He leads one of the most respected investment podcasts, focusing on markets, ideas, and strategies that help investors better allocate their time and money.
Dylan is the founder and CEO of SemiAnalysis, where he tracks semiconductor supply chains and AI infrastructure build-out with exceptional granularity. His team uses satellite imagery to monitor data center construction and maps hundreds of billions in capital flows across the AI ecosystem, making SemiAnalysis a crucial intelligence source for understanding the physical reality of AI development.
Success in AI depends, before everything else, on having massive compute capacity in place. (06:04) As Dylan explains, "You have to have the cluster before you can run models on it for inference. You have to have the cluster to train the model." This creates a sequential dependency where compute infrastructure must precede business model validation. The magic of OpenAI's early success came from spending significantly more compute on single model runs than competitors, but now the stakes are exponentially higher. Companies need to secure gigawatts of capacity costing tens of billions annually, creating an unprecedented capital allocation challenge where the biggest tech giants are essentially in an arms race for compute resources.
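To make "gigawatts costing tens of billions annually" concrete, here is a back-of-envelope sketch. The cost-per-gigawatt figure and the amortization period are hypothetical assumptions for illustration, not numbers from the episode.

```python
# Back-of-envelope compute-capacity cost sketch.
# ASSUMPTION: all-in capex per gigawatt of AI data center capacity (chips,
# networking, building, power) -- a hypothetical illustrative figure.
CAPEX_PER_GW_USD = 35e9  # hypothetical: $35B per gigawatt

def annual_capex(gigawatts, amortization_years=4):
    """Rough annualized spend to stand up and refresh the given capacity."""
    return gigawatts * CAPEX_PER_GW_USD / amortization_years

for gw in (1, 2, 5):
    print(f"{gw} GW -> ~${annual_capex(gw) / 1e9:.0f}B per year")
```

Even under these rough assumptions, a few gigawatts of capacity implies annual spend in the tens of billions, which is why only the largest players can stay in the race.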
Traditional SaaS economics are breaking down under AI's high cost of goods sold, fundamentally changing software business models. (115:00) Unlike classical SaaS where R&D costs stay relatively flat and customer acquisition costs dominate, AI software introduces massive ongoing compute costs for every transaction. Dylan argues this creates a challenging dynamic: "You have this high customer acquisition cost and you have this high COGS, and then the cost of anyone developing it themselves or competitors in the market means you're gonna have a very fragmented SaaS market." This shift means software companies may struggle to reach the escape velocity that made SaaS so profitable, as they can't easily amortize high acquisition costs against low marginal costs.
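The unit-economics contrast Dylan describes can be sketched as a simple payback calculation. All dollar figures below are hypothetical assumptions chosen to illustrate the dynamic, not data from the episode.

```python
# Illustrative unit-economics sketch (all numbers are hypothetical).

def payback_months(cac, monthly_revenue, monthly_cogs):
    """Months of gross profit needed to recover customer acquisition cost."""
    gross_profit = monthly_revenue - monthly_cogs
    return cac / gross_profit

# Classical SaaS: low marginal cost, so CAC amortizes quickly.
saas = payback_months(cac=1200, monthly_revenue=100, monthly_cogs=10)

# AI software: per-request inference compute inflates COGS per customer.
ai = payback_months(cac=1200, monthly_revenue=100, monthly_cogs=60)

print(f"SaaS payback: {saas:.1f} months, AI payback: {ai:.1f} months")
```

With the same acquisition cost and price point, higher per-customer compute costs more than double the payback period in this sketch, which is the "escape velocity" problem the episode points to.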
While internet-scale pre-training may be reaching maturity for text, reinforcement learning and environment-based training represent the next frontier with massive untapped potential. (30:31) Dylan emphasizes, "I think we've like thrown the first ball" when it comes to post-training sophistication. The breakthrough isn't just making models bigger, but teaching them through interactive environments - from fake Amazon stores to complex data manipulation tasks. This paradigm shift mirrors human learning, where we learn through trial and error in diverse environments rather than just reading. The scalability of this approach means creating vast amounts of synthetic training data tailored to specific domains, potentially unlocking capabilities that pure internet training never could.
The AI boom is creating unprecedented demand for electrical infrastructure, but the constraint isn't total power consumption - it's the supply chain and labor to build power generation rapidly. (77:28) Data centers currently represent only about 4% of US power consumption, with AI representing roughly half of that. The real challenge lies in supply chain bottlenecks for transformers, turbines, and specialized equipment, plus labor shortages. As Dylan notes, "electrician wages have like doubled" for those willing to work on data center projects. Companies are resorting to creative solutions like using diesel truck engines in parallel for power generation because industrial turbine capacity is fully allocated. This infrastructure build-out is teaching America how to construct power systems again after decades of limited expansion.
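The power-share figures above reduce to simple arithmetic. The doubling scenario at the end is a hypothetical illustration, not a projection from the episode.

```python
# Shares quoted in the episode: data centers are about 4% of US power
# consumption, and AI is roughly half of that.
datacenter_share = 0.04
ai_share = datacenter_share * 0.5
print(f"AI share of US power today: {ai_share:.1%}")

# HYPOTHETICAL scenario: if AI power demand doubled twice while total US
# consumption stayed flat, AI alone would reach:
doublings = 2
print(f"After {doublings} doublings: {ai_share * 2**doublings:.1%}")
```

The point of the sketch is that the headline share is small today; the strain comes from how fast new generation must be built, not from the absolute percentage.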
The geopolitical AI race fundamentally comes down to compute capacity and supply chain control, with Taiwan representing a critical vulnerability for US technological leadership. (83:47) Dylan argues that without AI success, "The US probably would be behind China and no longer the world hegemon by the end of the decade." China has talent advantages - potentially half the world's AI engineers are Chinese - and superior infrastructure development speed. However, they lack access to cutting-edge chips due to export controls. The concerning scenario: if China gains control of Taiwan's semiconductor production while maintaining their talent and manufacturing advantages, they could potentially deploy larger AI clusters than the US. This makes AI development not just an economic imperative but a national security priority for maintaining global leadership.