Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this deep dive into AI infrastructure, Nathan Lands interviews Evan Conrad, CEO of SF Compute, about the massive physical build-out powering the AI revolution. (00:19) Conrad explains how the AI industry faces a potential credit risk bubble due to misaligned contract structures between GPU providers and customers. While GPU cloud providers need long-term contracts to secure financing, AI companies prefer short-term flexibility, creating a dangerous financial mismatch that could cascade through the entire ecosystem if venture funding tightens.
Nathan Lands is the host of The Next Wave podcast and co-founder of Lore.com. He studied Mandarin in Taiwan and has connections with Chinese government officials, giving him unique insight into US-China tech competition. Nathan is currently exploring opportunities in data center acquisition and operations.
Evan Conrad is the CEO of SF Compute, a company creating a spot market for AI compute that transforms supercomputers into tradable commodities. Previously, he founded June Lark, an AI audio model company similar to Suno or Udio, before pivoting to solve the compute financing crisis. Conrad has deep expertise in data center economics and GPU cluster management, positioning SF Compute as a critical infrastructure provider for the AI industry.
The AI compute industry faces a fundamental mismatch between financing needs and customer demands. (02:42) GPU cloud providers need long-term contracts (often 3+ years) to secure financing for expensive clusters, but AI customers want short-term flexibility to avoid getting locked into potentially obsolete technology. This creates a credit risk bubble where venture-backed startups with thin margins are essentially backing the entire compute infrastructure through their ability to raise capital. When venture funding tightens, this house of cards could collapse, taking down inference providers, GPU clusters, and their debt providers in sequence.
Unlike traditional CPU-based cloud services where providers enjoy 60-70% margins, GPU compute operates on razor-thin margins of around 20%. (08:57) This happens because AI companies are extremely price-sensitive and care deeply about which specific GPUs they're using, unlike traditional software companies that don't even think about CPU specifications. The thin margins mean GPU providers have little buffer for demand fluctuations, requiring much more careful financial engineering and longer-term contracts to remain viable.
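The margin buffer point can be made concrete with a toy calculation. This is a hypothetical sketch, not figures from the episode: it assumes the provider's cost base (leases, debt, power contracts) is committed at full utilization, and shows why a 20% gross margin leaves far less room for a demand shortfall than a 70% one.

```python
def profit_after_demand_drop(full_revenue: float,
                             gross_margin: float,
                             demand_drop: float) -> float:
    """Profit when demand falls but the cost base, sized for full
    utilization, is already committed and cannot shrink with revenue."""
    fixed_costs = full_revenue * (1 - gross_margin)
    revenue = full_revenue * (1 - demand_drop)
    return revenue - fixed_costs

# Hypothetical numbers: a 70%-margin CPU cloud absorbs a 30% demand
# drop and stays profitable...
cpu_profit = profit_after_demand_drop(1_000_000, 0.70, 0.30)  # +400,000
# ...while a 20%-margin GPU cloud goes underwater on the same drop.
gpu_profit = profit_after_demand_drop(1_000_000, 0.20, 0.30)  # -100,000
```

Under these assumptions the breakeven point is exactly the gross margin: a 20%-margin provider can only absorb a 20% demand drop before losing money, which is why GPU providers lean so heavily on long-term contracts.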
The biggest constraint facing AI development isn't chip manufacturing but power generation and distribution. (15:24) Current AI expansion plans would require adding 100+ gigawatts of capacity within just a few years, roughly the output of the US's approximately 100 nuclear reactors, effectively a second nuclear fleet's worth of generation. That represents a potential 10-20% increase in total US electricity load from a single industry. Countries that can rapidly deploy power infrastructure, particularly China with its centralized decision-making, may gain significant advantages in the AI race regardless of chip technology.
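As a rough sanity check on those figures, here is a back-of-envelope calculation. The US load and reactor numbers are ballpark public approximations assumed for illustration, not figures from the episode.

```python
# Back-of-envelope check of the episode's power claims.
# All constants are rough assumptions, not from the episode.
us_avg_load_gw = 470       # ~4,100 TWh/yr US consumption / 8,760 h/yr
reactor_output_gw = 1.0    # a typical large US reactor is ~1 GW
planned_ai_gw = 100        # the build-out discussed in the episode

share_of_load = planned_ai_gw / us_avg_load_gw           # ~0.21, i.e. ~21%
reactors_equivalent = planned_ai_gw / reactor_output_gw  # ~100 reactors' worth
```

With these assumptions the new load lands at the top of the 10-20% range quoted in the episode, and the "100 reactors" comparison follows directly from the ~1 GW-per-reactor rule of thumb.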
Environmental regulations like California's CEQA (California Environmental Quality Act) require extensive documentation for every possible impact of new construction projects, even hypothetical ones like identifying every bird species that might fly into a bridge. (23:54) While these laws had good intentions, they've evolved into bureaucratic obstacles that slow critical infrastructure development. This regulatory burden, combined with cultural shifts toward work-life balance in tech hubs, gives countries like China structural advantages in rapidly deploying AI infrastructure.
Successful compute infrastructure requires thinking like a real estate investor rather than a tech entrepreneur. (26:37) The key is securing long-term off-take agreements (essentially pre-signed rental contracts) before purchasing expensive GPU clusters, then using those contracts to secure favorable financing. Unlike software startups where product differentiation creates margin opportunities, GPU customers only care about hardware costs, making operational efficiency and financial engineering the primary competitive advantages rather than product innovation.
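The off-take-to-financing mechanics can be sketched with a standard annuity calculation. This is an illustrative model, not SF Compute's actual method: the coverage ratio, interest rate, and opex share are invented assumptions, and the point is simply that longer contracted revenue supports a much larger loan against the cluster.

```python
def max_loan(annual_contracted_revenue: float,
             opex_ratio: float,
             rate: float,
             years: int,
             dscr: float = 1.3) -> float:
    """Largest loan whose annual payment fits under contracted cash
    flow divided by a lender's debt-service coverage ratio (DSCR)."""
    cash_flow = annual_contracted_revenue * (1 - opex_ratio)
    max_payment = cash_flow / dscr
    # Present value of an annuity of max_payment for `years` at `rate`.
    return max_payment * (1 - (1 + rate) ** -years) / rate

# Hypothetical $10M/yr off-take agreement: a 5-year contract finances
# roughly 4x more hardware than a 1-year contract at the same terms.
loan_1yr = max_loan(10_000_000, 0.30, 0.10, 1)   # ~$4.9M
loan_5yr = max_loan(10_000_000, 0.30, 0.10, 5)   # ~$20.4M
```

Under these assumed terms, the same customer signing for five years instead of one roughly quadruples the financeable cluster size, which is why providers push so hard for long-term commitments.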