
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.
In this episode of Infinite Tech, Preston Pysh sits down with renowned robotics expert Ken Goldberg to examine the gap between artificial intelligence breakthroughs and real-world robotic capabilities. Ken agrees with MIT's Rodney Brooks that the field may have "lost its way" by assuming language model advances automatically translate to physical intelligence. (02:37)
Ken Goldberg is a leading robotics researcher at UC Berkeley who bridges academic AI research and real-world commercial applications. He co-founded Ambi Robotics, whose deployed robotic systems have sorted over 100 million packages. Goldberg has worked in robotics for 45 years and is also an accomplished artist; his installations include the Telegarden, which went online in 1995 and ran for nine years.
Preston Pysh is the host of Infinite Tech, exploring breakthrough technologies through the lens of abundance and sound money. He brings a business-focused perspective to technical discussions, helping translate complex robotics concepts for professionals and investors.
The assumption that large language models automatically solve robotics is fundamentally flawed. (03:43) While AI systems excel at language and creativity, the physical manipulation of objects requires entirely different capabilities. Ken explains that people see language AI achievements and logically assume robots will follow, but there's no clear pathway from language processing to physical dexterity. This disconnect has created inflated expectations about humanoid robot timelines.
Robotics faces a massive data disadvantage compared to language models. (20:19) Reading all the text used to train a modern language model at an average human pace would take on the order of 100,000 years, yet robotics has virtually no equivalent dataset for manipulation tasks. Unlike text, which exists freely on the internet, robot manipulation data must be generated through physical interaction with real environments, creating an enormous bottleneck for advancement.
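To make the scale concrete, here is a minimal back-of-envelope check of that reading-time claim. The corpus size, tokens-to-words ratio, and reading speed below are typical published estimates chosen for illustration, not figures from the episode.

```python
# Rough sanity check of the "100,000 years of reading" claim.
# All constants below are assumptions, not figures from the episode.

CORPUS_TOKENS = 15e12    # assumed: ~15 trillion training tokens
WORDS_PER_TOKEN = 0.75   # assumed: rough tokens-to-words conversion
READING_WPM = 240        # assumed: average adult reading speed

words = CORPUS_TOKENS * WORDS_PER_TOKEN
minutes = words / READING_WPM
years = minutes / (60 * 24 * 365)

print(f"~{years:,.0f} years of continuous reading")  # prints ~89,184 years
```

Under these assumptions the estimate lands near 90,000 years, the same order of magnitude as the figure quoted in the episode.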
Contrary to popular belief, simple grippers are often more effective than human-like robotic hands. (25:05) Surgeons perform incredibly complex operations using basic gripper tools rather than multi-jointed hands, and Ken's company Ambi Robotics uses simple suction cups to sort millions of packages. The challenge isn't hardware sophistication but software control of nuanced interactions. This insight suggests the path forward may involve specialized tools rather than general-purpose humanoid hands.
Robotic surgery shows that complex manipulation is possible without tactile feedback. (12:57) Surgeons routinely perform operations such as appendectomies using cameras instead of touch, watching tissue deform and inferring force from visual cues. Visual understanding of physical interaction may therefore be more achievable than recreating human-like touch sensing, offering a promising alternative path toward robotic dexterity.
Academic lab testing cannot capture the full complexity of commercial environments. (34:41) Ken's team tested their bin-picking system thoroughly on countless objects but never adequately tested with bags, one of the most common shipping items. Bags fold unpredictably and break suction at fold points, forcing substantial adaptations to the system. The experience highlights how real-world deployment inevitably exposes limitations that laboratory testing misses, however comprehensive it appears.