
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full episode for context.
In this fascinating episode, Adam Marblestone, CEO of Convergent Research and former Google DeepMind research scientist, explores how the brain achieves extraordinary learning efficiency compared to modern AI systems. (00:24) The conversation delves into the central mystery of human intelligence: why humans can learn complex skills with dramatically less data than large language models require. (00:56) Marblestone argues that the key difference lies not in architecture or learning algorithms, but in the sophisticated reward functions that evolution has encoded in our brains.
Adam Marblestone is CEO of Convergent Research, an organization focused on advancing scientific breakthroughs through focused research organizations (FROs). He previously worked as a research scientist at Google DeepMind on their neuroscience team and has extensive experience across diverse fields including brain-computer interfaces, quantum computing, nanotechnology, and formal mathematics. His work bridges neuroscience and artificial intelligence, with particular interest in understanding how biological systems achieve remarkable learning efficiency.
Dwarkesh Patel hosts in-depth conversations with leading researchers, entrepreneurs, and thinkers, exploring cutting-edge developments in AI, science, and technology through detailed technical discussions.
The brain's remarkable learning efficiency doesn't come from having a superior architecture compared to neural networks, but from evolution encoding incredibly sophisticated loss functions and reward signals. (02:14) While machine learning uses mathematically simple loss functions like "predict the next token," evolution has built complex Python-like code that generates specific curricula for what different brain regions need to learn at different developmental stages. This allows the brain to bootstrap learning through very targeted reward signals that guide attention to the most relevant information for survival and social success.
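To make the contrast concrete, here is a minimal Python sketch (our illustration, not code from the episode): the first function is the mathematically simple next-token objective used in language modeling, while the second caricatures an evolution-style reward with hand-written, stage-dependent rules that steer attention toward survival- and socially-relevant signals. All the specific feature names and weights are hypothetical.

```python
import math

def next_token_loss(predicted_probs, target_token):
    """The 'mathematically simple' ML objective: cross-entropy on the next token."""
    return -math.log(predicted_probs[target_token])

def evolved_reward(observation, developmental_stage):
    """A caricature of an evolution-style reward: hand-written, stage-dependent
    rules that generate a curriculum by rewarding different signals over development."""
    reward = 0.0
    if developmental_stage == "infant":
        # Early curriculum: faces and caregiver voices are heavily rewarded.
        reward += 2.0 * observation.get("face_detected", 0.0)
        reward += 1.5 * observation.get("caregiver_voice", 0.0)
    elif developmental_stage == "juvenile":
        # Later curriculum: social approval and skill practice matter more.
        reward += 1.0 * observation.get("peer_approval", 0.0)
        reward += 0.5 * observation.get("novel_skill_progress", 0.0)
    # Innate penalties apply at every stage.
    reward -= 3.0 * observation.get("pain_signal", 0.0)
    return reward
```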
Unlike large language models that predict the next token in a sequence, the cortex appears designed for omnidirectional inference: the ability to predict any subset of variables from any other subset. (03:38) This means cortical areas can fill in missing information bidirectionally, predict sensory input from motor commands, or predict motor responses from sensory input. This flexibility provides much greater generalization capability than the fixed input-output mappings used in current AI systems, enabling more efficient learning from limited data.
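As a toy illustration of "any subset from any other subset" (an assumed sketch, not a model discussed in the episode), the snippet below fits a joint Gaussian over three variables and then answers queries in either direction by conditioning on whichever variables happen to be observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": three correlated variables; var 2 is the sum of vars 0 and 1 plus noise.
data = rng.normal(size=(5000, 3))
data[:, 2] = data[:, 0] + data[:, 1] + 0.05 * rng.normal(size=5000)

# Fit a joint model of all variables (the learned "world model").
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

def infer(observed):
    """Predict every unobserved variable from any observed subset,
    using the conditional mean of the fitted joint Gaussian."""
    obs_idx = np.array(sorted(observed))
    hid_idx = np.array([i for i in range(len(mu)) if i not in observed])
    obs_val = np.array([observed[i] for i in obs_idx])
    cov_ho = cov[np.ix_(hid_idx, obs_idx)]
    cov_oo = cov[np.ix_(obs_idx, obs_idx)]
    cond_mean = mu[hid_idx] + cov_ho @ np.linalg.solve(cov_oo, obs_val - mu[obs_idx])
    return dict(zip(hid_idx.tolist(), cond_mean))

print(infer({0: 1.0, 1: 2.0}))   # fills in var 2, roughly 3.0
print(infer({2: 3.0, 0: 1.0}))   # runs the other way: fills in var 1, roughly 2.0
```

The same fitted model answers queries in any direction, which is the flexibility the fixed input-to-output mapping of a next-token predictor lacks.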
The brain contains two major subsystems: a learning subsystem (primarily cortex) that builds world models, and a steering subsystem (subcortical regions) with innate reward functions and reflexes. (10:09) The key insight is that parts of the learning subsystem learn to predict what the steering subsystem will do, creating a bridge between learned concepts and innate drives. For example, when you hear "spider on your back," cortical neurons that predict the innate spider-flinch response get activated, allowing abstract concepts to trigger appropriate emotional and behavioral responses without evolution needing to anticipate every possible scenario.
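A hedged sketch of that bridge (all names and numbers here are hypothetical, not Marblestone's model): an innate steering function fires a reflex for a raw sensation, and a simple learned predictor discovers which abstract cues anticipate that reflex, so the cues alone can later drive the same response.

```python
def steering_subsystem(sensory_input):
    """Innate, hard-wired reflex: flinch when something crawls on the skin."""
    return 1.0 if sensory_input.get("crawling_on_skin") else 0.0

class CorticalPredictor:
    """Learns which abstract cues predict the innate response, so those cues
    alone can later trigger the same flinch (without the raw sensation)."""
    def __init__(self, lr=0.2):
        self.weights = {}
        self.lr = lr

    def predict(self, cues):
        return sum(self.weights.get(c, 0.0) for c in cues)

    def update(self, cues, innate_response):
        # Simple delta rule: move predictions toward the observed innate response.
        error = innate_response - self.predict(cues)
        for c in cues:
            self.weights[c] = self.weights.get(c, 0.0) + self.lr * error

cortex = CorticalPredictor()
# Repeated experience: hearing "spider on your back" co-occurs with the reflex.
for _ in range(20):
    cortex.update(cues={"heard_phrase:spider_on_back"},
                  innate_response=steering_subsystem({"crawling_on_skin": True}))

# Later, the abstract phrase alone predicts (and can help trigger) the flinch.
print(cortex.predict({"heard_phrase:spider_on_back"}))  # close to 1.0
```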
While biological brains face constraints like low power consumption and inability to be copied, they may have advantages that current digital systems lack. (49:12) Neurons naturally generate the stochastic samples needed for probabilistic inference, co-locate memory and computation, and can perform complex temporal computations within individual cells. The brain's roughly 20-watt power budget, compared to massive GPU clusters, suggests there may be algorithmic insights about efficient computation that could be applied to future AI systems.
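One way to picture "noisy neurons as free samplers" (an illustrative assumption, not the speaker's code): a two-unit Boltzmann machine whose stochastic binary units fire with probability given by a sigmoid of their input. The long-run firing statistics approximate the model's probability distribution, so intrinsic noise does the work of Monte Carlo sampling.

```python
import math, random

random.seed(0)

W12 = 2.0                 # symmetric coupling between unit 0 and unit 1
b = [-1.0, -1.0]          # biases

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gibbs_sample(n_steps=10000):
    """Stochastic binary units: each fires with probability sigmoid(its input).
    Long-run firing statistics approximate the Boltzmann distribution."""
    s = [0, 0]
    samples = []
    for _ in range(n_steps):
        for i in (0, 1):
            drive = b[i] + W12 * s[1 - i]
            s[i] = 1 if random.random() < sigmoid(drive) else 0
        samples.append(tuple(s))
    return samples

samples = gibbs_sample()
# Empirical probability that both units are active, estimated from the noisy "spikes".
print(sum(s == (1, 1) for s in samples) / len(samples))
```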
Mapping complete neural connections (connectomes) could provide the empirical foundation needed to test theories about brain algorithms and guide AI development. (1:08:00) While a complete human brain connectome remains expensive, mapping mouse brains and human subcortical regions could be achievable with focused investment in the low billions of dollars. This biological "ground truth" data could help resolve debates about whether brains use backpropagation, energy-based models, or other learning algorithms, potentially informing the next generation of AI architectures.