
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for context.
This engaging conversation features Liam Fedus (co-creator of ChatGPT) and Dogus Cubuk (former DeepMind physics team member) discussing their new venture, Periodic Labs - a frontier AI research company building "experiment in the loop" systems for physics and chemistry. (04:41) The duo explains how they're moving beyond traditional AI training methods that rely on digital rewards to create physically grounded reward functions through real-world experimentation.
Liam Fedus: Co-creator of ChatGPT at OpenAI, where he helped develop the revolutionary conversational AI system that transformed how we interact with language models. He has deep expertise in reinforcement learning from human feedback (RLHF) and the technical architecture behind modern conversational AI systems.
Dogus Cubuk: Former physics team leader at DeepMind, where he worked on applying machine learning to fundamental physics problems. He brings extensive experience in quantum mechanics, materials science, and the intersection of AI with physical sciences, particularly superconductivity research.
General Partner at Andreessen Horowitz (a16z), focusing on frontier technology investments. He has a background in evaluating and supporting cutting-edge AI companies and brings perspective on the commercial viability of advanced AI research.
The biggest limitation of current AI systems is their reliance on digital reward functions like math graders and code checkers. (08:40) For true scientific advancement, AI needs to be optimized against real-world experimental results. As Liam explains, early ChatGPT versions weren't mathematically strong because the reward function encoded "be a friendly assistant" rather than mathematical correctness. The same principle applies to science - you can't discover new physics by training only on existing literature and simulations.
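To make the contrast concrete, here is a minimal Python sketch - not anything described in the episode. `digital_reward`, `physical_reward`, and `run_lab_experiment` are hypothetical stand-ins showing the difference between a grader that checks a known answer and a reward read off a real measurement.

```python
from dataclasses import dataclass
import random

@dataclass
class Measurement:
    critical_temperature_kelvin: float

def run_lab_experiment(recipe: str) -> Measurement:
    # Hypothetical stand-in for an automated-lab API. In reality this step
    # is slow, expensive, and noisy - which is what makes the reward physical.
    return Measurement(critical_temperature_kelvin=random.uniform(0.0, 40.0))

def digital_reward(solution: str, expected: str) -> float:
    # A math grader / code checker: cheap and fast, but it can only score
    # answers that are already known, so it cannot reward a new discovery.
    return 1.0 if solution.strip() == expected.strip() else 0.0

def physical_reward(recipe: str) -> float:
    # Grounded in a measurement rather than an answer key: the reward is
    # whatever the experiment actually produced.
    return run_lab_experiment(recipe).critical_temperature_kelvin

print(digital_reward("42", "42"))            # 1.0
print(physical_reward("Cu-O layered #1"))    # e.g. 23.7 (simulated here)
```

The asymmetry is the point: the digital grader needs the right answer in advance, while the physical reward can score a recipe no one has tried before.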
Even the smartest humans require multiple attempts before making significant discoveries. (11:12) Current LLMs, despite their intelligence, lack the ability to iterate on scientific problems through real experimentation. As Dogus emphasizes, "if they're not iterating on science, they won't discover science." This iterative process involves simulations, theoretical calculations, experiments, getting results (often incorrect initially), and then refining the approach - something that requires actual laboratory work, not just computational modeling.
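A rough Python sketch of that loop, with `Model`, `Simulator`, and `Lab` as hypothetical stand-ins rather than Periodic's actual system: propose, filter cheaply in simulation, run the real experiment, and keep every outcome in the history that the next proposal conditions on.

```python
import random

class Model:
    def propose(self, history):
        # Stand-in for an LLM proposing the next recipe, conditioned on
        # everything tried so far, including the failures.
        return f"recipe-{len(history)}"

class Simulator:
    def score(self, candidate) -> float:
        # Cheap theoretical filter (e.g. a stability estimate) applied
        # before spending real lab time.
        return random.random()

class Lab:
    def run(self, candidate) -> float:
        # The slow, expensive, real-world measurement step.
        return random.uniform(0.0, 40.0)

def discovery_loop(model, simulator, lab, n_rounds: int = 10):
    history = []                                  # every result, good or bad
    for _ in range(n_rounds):
        candidate = model.propose(history)
        if simulator.score(candidate) < 0.5:      # theory says don't bother
            history.append((candidate, "rejected_in_simulation"))
            continue
        result = lab.run(candidate)               # often disappointing at first
        history.append((candidate, result))       # refine from real feedback
    return history

print(discovery_loop(Model(), Simulator(), Lab()))
```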
Scientific literature suffers from publication bias toward positive results, but negative results provide crucial learning signals that are often context-dependent. (12:56) What appears as a negative result for one researcher might be positive under different conditions. Traditional AI training misses this valuable data entirely, creating a fundamental gap in understanding. Periodic's lab generates both positive and negative results, providing more complete training data for AI systems.
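One hypothetical way to picture this (a sketch of a record schema, not Periodic's actual data model): if every run stores its full conditions and raw measurement, the "negative" label becomes a function of the objective rather than a fixed property of the experiment.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    composition: str
    temperature_c: float         # synthesis conditions travel with the result
    pressure_gpa: float
    measured_tc_kelvin: float    # the raw measurement is never discarded

def label(record: ExperimentRecord, target_tc: float) -> str:
    # Positive/negative is relative to the goal, not to the experiment.
    return "positive" if record.measured_tc_kelvin >= target_tc else "negative"

run = ExperimentRecord("hypothetical hydride", temperature_c=900.0,
                       pressure_gpa=150.0, measured_tc_kelvin=20.0)
print(label(run, target_tc=77.0))   # "negative" for one researcher's goal...
print(label(run, target_tc=10.0))   # ..."positive" under different criteria
```

Published papers typically keep only the runs that clear some threshold; a lab that records every run keeps the whole table.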
Modern scientific problems require knowledge spanning multiple domains that no single human can master. (34:28) As Dogus notes, even leading experts have far more left to learn in their fields than they already know. Discovering breakthrough materials like superconductors requires expertise in chemistry, physics, synthesis, and characterization - necessitating collaborative teams where everyone continuously learns from each other across disciplines.
The biggest differentiator for recruiting at frontier research labs is genuine passion for the mission rather than just technical skills. (35:52) As Liam explains, there's high overlap in technical requirements between companies, but the determining factor is whether candidates care deeply about accelerating scientific discovery. This mission-driven approach attracts researchers who view scientific advancement as their primary goal rather than just improving existing products.