
Timestamps are approximate and may be slightly off. We encourage you to listen to the episode for full context.
In this fascinating episode, Dr. Andrew Huberman sits down with Dr. Poppy Crum, a neuroscientist, Stanford professor, and former chief scientist at Dolby Laboratories. The conversation explores the intersection of technology and neuroplasticity, revealing how emerging "hearable" technologies will soon monitor our cognitive states and automatically optimize our environments for better focus, relaxation, and human connection. (00:42)
Professor of neurobiology and ophthalmology at Stanford School of Medicine. Host of the Huberman Lab Podcast, where he translates neuroscience research into practical tools for everyday life. He teaches auditory physiology and has extensive experience in vision research across multiple species.
Neuroscientist and professor at Stanford, former chief scientist at Dolby Laboratories. She has absolute pitch and discovered this ability at a young age, which shaped her unique approach to understanding human-technology interfaces. Her work focuses on how technology can accelerate neuroplasticity and learning, with expertise spanning audio engineering, computer vision, and AI applications for human performance optimization.
Every technology we engage with daily shapes our neural pathways through neuroplasticity. Dr. Crum explains that our brains allocate resources to help us succeed in our environments, like how London taxi drivers developed enlarged hippocampi for spatial navigation before GPS. (04:08) The key insight is that we need to be conscious of how technologies are "architecting" our brains rather than passively consuming them. This means choosing technologies that enhance our cognitive abilities rather than simply replacing them.
There's a critical distinction between using AI to make yourself cognitively smarter versus using it to simply speed up tasks. Dr. Crum emphasizes that when we use AI for learning (like having it create tests to identify our knowledge gaps), we engage in "germane cognitive load" - the mental effort required to build lasting neural schemas. (59:54) However, when we use AI to replace cognitive processes entirely (like having it write papers for us), we miss the crucial brain work needed for true learning and understanding.
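As a concrete illustration of that first pattern, here is a minimal Python sketch of using an AI model as an examiner rather than a ghostwriter. The `ask_model` helper, the prompt wording, and the gap-tracking logic are hypothetical placeholders for illustration only; none of them come from the episode.

```python
# Minimal sketch of the "AI as examiner, not ghostwriter" pattern described above.
# ask_model() is a placeholder for whatever chat/LLM interface you use; the prompt
# wording and gap-tracking logic are illustrative assumptions.

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to your preferred AI model and return its reply."""
    raise NotImplementedError("wire this to your own model or API")

def self_test(topic: str, my_answers: dict[str, str]) -> list[str]:
    """Have the model write questions, answer them yourself, then grade for gaps."""
    questions = ask_model(f"Write 5 short-answer questions testing {topic}.")
    gaps = []
    for q in questions.splitlines():
        answer = my_answers.get(q, "")  # you supply the answer; that is the germane cognitive load
        verdict = ask_model(f"Question: {q}\nStudent answer: {answer}\nIs this correct? Reply yes or no, then explain.")
        if verdict.lower().startswith("no"):
            gaps.append(q)
    return gaps
```

The point of the sketch is the division of labor: the model generates and grades the test, but the answering, and therefore the schema-building effort, stays with the learner.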
Digital twins aren't complete replicas of ourselves, but rather digital representatives that use data from multiple sources to provide insights about our physical and cognitive states. Dr. Crum gives examples ranging from simple applications like reef tank monitoring to sophisticated systems that could track our focus patterns and environmental factors. (103:07) The goal is to gain situational intelligence by integrating data from our body, local environment, and external environment to make better decisions proactively rather than reactively.
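To make the integration idea concrete, here is a minimal Python sketch of a "digital twin" style aggregator that folds body, local-environment, and external-environment readings into proactive suggestions. All field names, thresholds, and the `recommend` logic are illustrative assumptions, not details from the episode.

```python
# Hypothetical sketch of the "digital twin" idea described above: combine readings
# from the body, the local environment, and the external environment into one
# situational picture, then act proactively rather than reactively.
from dataclasses import dataclass

@dataclass
class Snapshot:
    heart_rate_var_ms: float   # body: heart rate variability
    focus_minutes: float       # behavior: estimated time in sustained focus
    room_co2_ppm: float        # local environment: air quality proxy
    outdoor_temp_c: float      # external environment

def recommend(s: Snapshot) -> list[str]:
    """Turn the integrated snapshot into proactive suggestions (thresholds are illustrative)."""
    actions = []
    if s.room_co2_ppm > 1000:
        actions.append("increase ventilation before focus drops")
    if s.heart_rate_var_ms < 40 and s.focus_minutes > 90:
        actions.append("schedule a recovery break")
    if not actions:
        actions.append("conditions look good; keep the current setup")
    return actions

print(recommend(Snapshot(heart_rate_var_ms=35, focus_minutes=120,
                         room_co2_ppm=1150, outdoor_temp_c=18)))
```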
Drawing from research on owls adapting to prism glasses, Dr. Crum reveals that neural map changes happen much faster when survival or critical performance depends on it. (129:36) This means we can form new habits and capabilities far faster than commonly believed when the incentives are high enough. The traditional "21 days to form a habit" rule is not a hard limit; our brains adapt as quickly as the stakes demand.
Future hearable devices will be able to detect our cognitive and emotional states through signals like pupil diameter, heart rate variability, and even chemical signatures in our breath such as CO2, acetone, and isoprene. (77:20) These technologies will then automatically adjust our environment - from HVAC systems to lighting to background sounds - to optimize our performance for specific goals like focus, relaxation, or creativity, all without requiring multiple wearables on our bodies.
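A rough way to picture that pipeline is a sense, infer, actuate loop. The sketch below uses two of the signals named in the episode (pupil diameter and heart rate variability), but every threshold, state label, function name, and environment setting is a hypothetical placeholder rather than a real device API.

```python
# Hypothetical sense -> infer -> actuate loop for the hearable scenario above.
# Thresholds, state labels, and the environment "API" are illustrative assumptions;
# breath chemistry (CO2, acetone, isoprene) could feed the same estimator but is
# omitted here for brevity.

def infer_state(pupil_mm: float, hrv_ms: float) -> str:
    """Crude cognitive-state estimate from two physiological signals."""
    if pupil_mm > 5.5 and hrv_ms < 40:
        return "overloaded"
    if pupil_mm < 3.0 and hrv_ms > 70:
        return "relaxed"
    return "focused"

def adjust_environment(state: str, goal: str) -> dict[str, str]:
    """Map the inferred state and the user's goal to lighting/HVAC/sound settings."""
    if goal == "focus" and state == "overloaded":
        return {"lighting": "dim, warm", "hvac": "increase airflow", "sound": "low-level pink noise"}
    if goal == "relaxation":
        return {"lighting": "soft", "hvac": "hold", "sound": "slow ambient"}
    return {"lighting": "hold", "hvac": "hold", "sound": "hold"}

state = infer_state(pupil_mm=6.1, hrv_ms=32)
print(state, adjust_environment(state, goal="focus"))
```

The design choice worth noting is that sensing and actuation are decoupled: the same state estimate can drive very different environmental responses depending on the goal the user has set.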