a16z Podcast • September 30, 2025

Building an AI Physicist: ChatGPT Co-Creator’s Next Venture

Periodic Labs, co-founded by a ChatGPT co-creator and a former DeepMind physicist, is building an AI physicist to accelerate scientific discovery by combining large language models, simulations, and experimental data to tackle challenges like high-temperature superconductivity.
AI & Machine Learning
Indie Hackers & SaaS Builders
BioTech & HealthTech
Liam Fedus
Dogus Cubuk
Anjney Midha
OpenAI
Andreessen Horowitz

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for context.


Podcast Summary

This engaging conversation features Liam Fedus (co-creator of ChatGPT) and Dogus Cubuk (former DeepMind physics team member) discussing their new venture, Periodic Labs - a frontier AI research company building "experiment in the loop" systems for physics and chemistry. (04:41) The duo explains how they're moving beyond traditional AI training methods that rely on digital rewards, toward physically grounded reward functions built on real-world experimentation.

  • Core Focus: Building AI systems that can conduct actual scientific experiments, moving from talking about science to doing science, with initial targets in superconductivity and magnetism discovery.

Speakers

Liam Fedus

Co-creator of ChatGPT at OpenAI, where he helped develop the revolutionary conversational AI system that transformed how we interact with language models. He has deep expertise in reinforcement learning from human feedback (RLHF) and the technical architecture behind modern conversational AI systems.

Dogus Cubuk

Former physics team leader at DeepMind, where he worked on applying machine learning to fundamental physics problems. He brings extensive experience in quantum mechanics, materials science, and the intersection of AI with physical sciences, particularly in areas like superconductivity research.

Anjney Midha

General Partner at Andreessen Horowitz (a16z), focusing on frontier technology investments. He has a background in evaluating and supporting cutting-edge AI companies and brings perspective on the commercial viability of advanced AI research.

Key Takeaways

Physical Reward Functions Trump Digital Ones for Science

The biggest limitation of current AI systems is their reliance on digital reward functions like math graders and code checkers. (08:40) For true scientific advancement, AI needs to be optimized against real-world experimental results. As Liam explains, early ChatGPT versions weren't mathematically strong because the reward function encoded "be a friendly assistant" rather than mathematical correctness. The same principle applies to science - you can't discover new physics by training only on existing literature and simulations.
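
To make the distinction concrete, here is a minimal Python sketch (not from the episode) contrasting a purely digital reward, graded entirely in software, with an experiment-in-the-loop reward scored by a measured physical property; run_synthesis_experiment and the 135-Kelvin normalization are illustrative assumptions, not Periodic Labs' actual interfaces.

# Illustrative sketch: a purely digital reward versus an experiment-grounded one.
# run_synthesis_experiment is a hypothetical stand-in for a robotic lab API.

def digital_reward(model_answer: str, reference_answer: str) -> float:
    # Computable entirely in software, e.g. a math grader or code checker.
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0

def run_synthesis_experiment(candidate: dict) -> dict:
    # Placeholder: a real system would dispatch the candidate to lab equipment
    # and wait for characterization results.
    return {"critical_temperature_K": 0.0}

def physical_reward(candidate: dict) -> float:
    # Grounded in an experiment: synthesize the candidate, then score it by the
    # measured critical temperature, normalized to the ~135 K ambient-pressure benchmark.
    measurement = run_synthesis_experiment(candidate)
    return measurement["critical_temperature_K"] / 135.0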

Iteration is Essential for Scientific Discovery

Even the smartest humans require multiple attempts before making significant discoveries. (11:12) Current LLMs, despite their intelligence, lack the ability to iterate on scientific problems through real experimentation. As Dogus emphasizes, "if they're not iterating on science, they won't discover science." This iterative process involves simulations, theoretical calculations, experiments, getting results (often incorrect initially), and then refining the approach - something that requires actual laboratory work, not just computational modeling.
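
As a rough illustration of that loop (a hypothetical Python sketch, not Periodic Labs' system), the code below alternates cheap simulation with slow real-world measurement and keeps every outcome, including failures, as signal for the next proposal; propose_candidate, simulate, and run_experiment are placeholder names.

# Hypothetical discovery loop: propose, simulate, experiment, refine.
# Every function below is a placeholder, not an actual Periodic Labs interface.

def propose_candidate(history: list) -> dict:
    # In practice an LLM or generative model would condition on prior results.
    return {"formula": "placeholder"}

def simulate(candidate: dict) -> float:
    # Cheap theoretical estimate of the critical temperature (in Kelvin).
    return 0.0

def run_experiment(candidate: dict) -> float:
    # Slow, expensive real-world synthesis and measurement.
    return 0.0

def discovery_loop(target_tc_K: float = 135.0, max_steps: int = 10):
    history = []  # negative results stay in the history; they are still training signal
    for _ in range(max_steps):
        candidate = propose_candidate(history)
        predicted_tc = simulate(candidate)
        if predicted_tc < 0.5 * target_tc_K:
            history.append((candidate, predicted_tc, "rejected in simulation"))
            continue
        measured_tc = run_experiment(candidate)
        history.append((candidate, measured_tc, "measured"))
        if measured_tc >= target_tc_K:
            return candidate, history  # a result above the benchmark would be a breakthrough
    return None, history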

Negative Results Are Valuable Learning Signals

Scientific literature suffers from publication bias toward positive results, but negative results provide crucial learning signals that are often context-dependent. (12:56) What appears as a negative result for one researcher might be positive under different conditions. Traditional AI training misses this valuable data entirely, creating a fundamental gap in understanding. Periodic's lab generates both positive and negative results, providing more complete training data for AI systems.

Cross-Domain Expertise Requires Collaborative Teams

Modern scientific problems require knowledge spanning multiple domains that no single human can master. (34:28) As Dogus notes, even leading experts have far more to learn than they already know in their fields. Discovering breakthrough materials like superconductors requires expertise in chemistry, physics, synthesis, and characterization - necessitating collaborative teams where everyone continuously learns from each other across disciplines.

Mission-Driven Culture Attracts Top Talent

The biggest differentiator for recruiting at frontier research labs is genuine passion for the mission rather than just technical skills. (35:52) As Liam explains, there's high overlap in technical requirements between companies, but the determining factor is whether candidates care deeply about accelerating scientific discovery. This mission-driven approach attracts researchers who view scientific advancement as their primary goal rather than just improving existing products.

Statistics & Facts

  1. The best ambient-pressure superconductor today has a critical temperature of 135 Kelvin. (13:34) This serves as a clear benchmark for measuring Periodic's progress - any discovery above this temperature would represent a significant breakthrough.
  2. Periodic Labs currently employs approximately 30 people. (36:33) The team is split roughly evenly between machine learning scientists and physical scientists (physics/chemistry backgrounds).
  3. Formation enthalpy labels in existing literature are so noisy that machine learning models trained on this data fail to make accurate predictions for new materials synthesis. (22:00) This highlights why new experimental data generation is necessary rather than relying solely on published research.

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

  • Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity (In Good Company with Nicolai Tangen, January 14, 2026)
  • Rory Sutherland on why luck beats logic in marketing (Uncensored CMO, January 14, 2026)
  • BTC257: Bitcoin Mastermind Q1 2026 w/ Jeff Ross, Joe Carlasare, and American HODL (Bitcoin Podcast) (We Study Billionaires - The Investor’s Podcast Network, January 14, 2026)
  • How to Make Billions from Exposing Fraud | E2234 (This Week in Startups, January 13, 2026)