Dwarkesh Podcast • December 30, 2025

Adam Marblestone – AI is missing something fundamental about the brain

Adam Marblestone explores how the brain learns efficiently through complex reward functions and omnidirectional inference, discussing what neuroscience might offer AI development and why understanding the brain's learning mechanisms matters.
Topics: Creator Economy, AI & Machine Learning, Tech Policy & Ethics, Neuroscience & Brain-Computer Interfaces
People: Dwarkesh Patel, Terry Tao, Yann LeCun, Adam Marblestone

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for context.

Podcast Summary

In this fascinating episode, Adam Marblestone, CEO of Convergent Research and former Google DeepMind research scientist, explores how the brain achieves extraordinary learning efficiency compared to modern AI systems. (00:24) The conversation delves into the central mystery of human intelligence: why humans can learn complex skills with dramatically less data than large language models require. (00:56) Marblestone argues that the key difference lies not in architecture or learning algorithms, but in the sophisticated reward functions that evolution has encoded in our brains.

  • Main theme: The brain's superior learning efficiency stems from evolution-encoded reward functions and a dual-system architecture consisting of a learning subsystem (cortex) and a steering subsystem (subcortical regions) that work together to create intelligent behavior.

Speakers

Adam Marblestone

Adam Marblestone is CEO of Convergent Research, an organization focused on advancing scientific breakthroughs through focused research organizations (FROs). He previously worked as a research scientist at Google DeepMind on their neuroscience team and has extensive experience across diverse fields including brain-computer interfaces, quantum computing, nanotechnology, and formal mathematics. His work bridges neuroscience and artificial intelligence, with particular interest in understanding how biological systems achieve remarkable learning efficiency.

Dwarkesh Patel

Dwarkesh Patel hosts in-depth conversations with leading researchers, entrepreneurs, and thinkers, exploring cutting-edge developments in AI, science, and technology through detailed technical discussions.

Key Takeaways

Evolution Encoded Complex Reward Functions, Not Simple Architectures

The brain's remarkable learning efficiency doesn't come from having a superior architecture compared to neural networks, but from evolution encoding incredibly sophisticated loss functions and reward signals. (02:14) While machine learning uses mathematically simple loss functions like "predict the next token," evolution has built complex Python-like code that generates specific curricula for what different brain regions need to learn at different developmental stages. This allows the brain to bootstrap learning through very targeted reward signals that guide attention to the most relevant information for survival and social success.
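
To make this contrast concrete, here is a minimal Python sketch written for this summary rather than taken from the episode; the stages, feature names, and weights are invented for illustration. It pairs a mathematically simple next-token loss with the kind of staged, hand-written reward logic the takeaway describes.

```python
import math

# Illustrative sketch only: the stages, feature names, and weights below are
# hypothetical, contrasting a simple ML objective with staged, innate rewards.

def next_token_loss(predicted_probs, true_token):
    """The mathematically simple objective used by language models: -log p(true token)."""
    return -math.log(predicted_probs[true_token])

def innate_reward(stage, percept):
    """An evolution-style reward: different signals at different developmental
    stages, keyed to survival- and socially-relevant features of the percept."""
    reward = 0.0
    if stage == "infant":
        # Early curriculum: attend to faces and caregiver voices.
        reward += 2.0 * percept.get("face_detected", 0.0)
        reward += 1.5 * percept.get("caregiver_voice", 0.0)
    elif stage == "juvenile":
        # Later curriculum: social feedback and exploration matter more.
        reward += 1.0 * percept.get("peer_approval", 0.0)
        reward += 0.5 * percept.get("novelty", 0.0)
    # Innate penalties apply at every stage.
    reward -= 3.0 * percept.get("pain", 0.0)
    return reward

print(next_token_loss({"the": 0.6, "a": 0.4}, "the"))                     # ~0.51
print(innate_reward("infant", {"face_detected": 1.0}))                    # 2.0
print(innate_reward("juvenile", {"peer_approval": 1.0, "novelty": 1.0}))  # 1.5
```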

The Brain Uses Omnidirectional Inference Rather Than Unidirectional Prediction

Unlike large language models that predict the next token in a sequence, the cortex appears designed for omnidirectional inference - the ability to predict any subset of variables from any other subset. (03:38) This means cortical areas can fill in missing information bidirectionally, predict sensory input from motor commands, or predict motor responses from sensory input. This flexibility provides much greater generalization capability than the fixed input-output mappings used in current AI systems, enabling more efficient learning from limited data.
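
The contrast with fixed-direction prediction can be illustrated with a toy example constructed for this summary (the variables and probabilities are invented): given a joint model over a few variables, the same model can condition on any observed subset and predict any other, which is the flexibility described above.

```python
import itertools

# Toy construction for this summary (variables and weights are invented):
# with a joint model, ANY subset of variables can be inferred from ANY other
# subset, unlike a fixed next-token predictor.

VARS = ["motor_command", "touch", "sound"]

def unnormalized_joint(a):
    """Arbitrary positive weights encoding dependencies among three binary variables."""
    w = 1.0
    w *= 2.0 if a["touch"] == a["motor_command"] else 0.5  # touch tends to follow the action
    w *= 1.5 if a["sound"] == a["touch"] else 0.7          # sound tends to co-occur with touch
    return w

def infer(observed, query):
    """P(query = 1 | observed): condition on any subset, predict any other variable."""
    num = den = 0.0
    for values in itertools.product([0, 1], repeat=len(VARS)):
        a = dict(zip(VARS, values))
        if any(a[k] != v for k, v in observed.items()):
            continue  # inconsistent with what was observed
        w = unnormalized_joint(a)
        den += w
        if a[query] == 1:
            num += w
    return num / den

# The same model answers questions in every direction.
print(infer({"motor_command": 1}, "touch"))              # sensation from action: 0.80
print(infer({"sound": 1}, "motor_command"))              # action from sensation: ~0.61
print(infer({"motor_command": 1, "sound": 0}, "touch"))  # fill in a missing variable: ~0.65
```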

Learning and Steering Subsystems Work Together Through Predictive Modeling

The brain contains two major subsystems: a learning subsystem (primarily cortex) that builds world models, and a steering subsystem (subcortical regions) with innate reward functions and reflexes. (10:09) The key insight is that parts of the learning subsystem learn to predict what the steering subsystem will do, creating a bridge between learned concepts and innate drives. For example, when you hear "spider on your back," cortical neurons that predict the innate spider-flinch response get activated, allowing abstract concepts to trigger appropriate emotional and behavioral responses without evolution needing to anticipate every possible scenario.
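
A minimal sketch of that bridge, written for this summary rather than taken from the episode (all feature names and numbers are hypothetical): a learned predictor is trained on what the innate steering subsystem actually does, so a learned concept like the word "spider" can later evoke the predicted response on its own.

```python
# Toy construction: a learned predictor is trained to predict the innate
# steering subsystem's output, bridging learned concepts and innate responses.

def steering_subsystem(percept):
    """Innate, hard-wired reflex: flinch at spider-like input on the skin."""
    return 1.0 if percept.get("many_small_legs_on_skin") else 0.0

predicted_flinch = {}  # the "cortical" side: concept -> predicted steering output

def train(experiences, lr=0.5):
    for concepts, percept in experiences:
        target = steering_subsystem(percept)  # what the reflex actually did
        for concept in concepts:
            old = predicted_flinch.get(concept, 0.0)
            predicted_flinch[concept] = old + lr * (target - old)  # move toward target

# Experience: the learned concept "spider" repeatedly co-occurs with the innate flinch.
train([
    (["spider", "on_back"], {"many_small_legs_on_skin": True}),
    (["spider"],            {"many_small_legs_on_skin": True}),
    (["butterfly"],         {"many_small_legs_on_skin": False}),
])

# Later, the abstract phrase alone activates the learned prediction of the reflex,
# with no tactile input at all.
print(predicted_flinch["spider"])              # 0.75, pushed toward 1.0
print(predicted_flinch.get("butterfly", 0.0))  # stays at 0.0
```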

Biological Hardware May Have Unique Advantages for Intelligence

While biological brains face constraints like low power consumption and inability to be copied, they may have advantages that current digital systems lack. (49:12) Neurons naturally generate stochastic samples needed for probabilistic inference, co-locate memory and computation, and can perform complex temporal computations within individual cells. The brain's energy efficiency at 20 watts compared to massive GPU clusters suggests there may be algorithmic insights about efficient computation that could be applied to future AI systems.
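
One of those claimed advantages, intrinsic noise doing the work of sampling, can be shown with a toy sketch written for this summary (the neuron model and numbers are illustrative, not from the episode): averaging a noisy unit's own spikes recovers a probabilistic belief without any separate sampling machinery.

```python
import math
import random

# Toy illustration: a noisy, spiking unit naturally produces the random
# samples that probabilistic inference needs.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def stochastic_neuron(drive, rng):
    """Fires (1) with probability sigmoid(drive); intrinsic noise does the sampling."""
    return 1 if rng.random() < sigmoid(drive) else 0

rng = random.Random(0)
drive = 1.2  # net evidence for some hidden feature
spikes = [stochastic_neuron(drive, rng) for _ in range(10_000)]

# Averaging the unit's own noisy spikes recovers the underlying belief,
# with no separate random-number machinery required.
print(sum(spikes) / len(spikes))  # close to sigmoid(1.2) ~= 0.77
```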

Connectomes Could Revolutionize Neuroscience and AI Development

Mapping complete neural connections (connectomes) could provide the empirical foundation needed to test theories about brain algorithms and guide AI development. (1:08:00) While a complete human brain connectome remains expensive, mapping mouse brains and human subcortical regions could be achievable with low billions in focused investment. This biological "ground truth" data could help resolve debates about whether brains use backpropagation, energy-based models, or other learning algorithms, potentially informing the next generation of AI architectures.

Statistics & Facts

  1. The human genome contains approximately 3 gigabytes of information, with only a small fraction relevant to brain development. (27:52) This constraint helps explain how evolution can encode complex reward functions efficiently.
  2. A mouse brain connectome would cost several billion dollars using current technology, but new approaches aim to reduce this to tens of millions of dollars. (1:12:00) A human brain, being about 1000 times larger, would still cost billions even with improved technology.
  3. The brain operates on approximately 20 watts of power while achieving capabilities that require massive GPU clusters consuming orders of magnitude more energy. (49:12) This suggests significant room for improvement in computational efficiency; a rough back-of-envelope comparison follows this list.
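
The gap in item 3 can be made concrete with a back-of-envelope calculation; the cluster size and per-GPU power below are assumed round numbers chosen for illustration, not figures from the episode.

```python
# Back-of-envelope only: cluster size and per-GPU power are assumptions.
BRAIN_WATTS = 20           # figure cited in the episode
GPU_WATTS = 700            # rough power draw of one modern datacenter GPU (assumption)
GPUS_IN_CLUSTER = 10_000   # assumed size of a large training cluster

cluster_watts = GPU_WATTS * GPUS_IN_CLUSTER  # 7,000,000 W, i.e. 7 MW
ratio = cluster_watts / BRAIN_WATTS          # 350,000x the brain's budget

print(f"Cluster draw: {cluster_watts / 1e6:.1f} MW")
print(f"Roughly {ratio:,.0f}x the brain's ~{BRAIN_WATTS} W")
```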

More episodes like this

  • Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity (In Good Company with Nicolai Tangen, January 14, 2026)
  • Rory Sutherland on why luck beats logic in marketing (Uncensored CMO, January 14, 2026)
  • BTC257: Bitcoin Mastermind Q1 2026 w/ Jeff Ross, Joe Carlasare, and American HODL (Bitcoin Podcast) (We Study Billionaires - The Investor’s Podcast Network, January 14, 2026)
  • How to Make Billions from Exposing Fraud | E2234 (This Week in Startups, January 13, 2026)