
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.
In this episode, Yash Narang, Senior Research Manager at NVIDIA and head of the Seattle Robotics Lab, explores how artificial intelligence is revolutionizing robotics through NVIDIA's "three-computer" concept. (05:18) The discussion reveals how robots are evolving beyond traditional factory automation toward true intelligence through advanced simulation, new learning paradigms, and the emergence of humanoid robotics. Yash explains the critical role of synthetic data generation, the challenges of the sim-to-real gap, and how technologies like Omniverse and Cosmos enable robots to learn complex behaviors in virtual environments before deployment in the real world.
Yash is Senior Research Manager at NVIDIA and head of the Seattle Robotics Lab, which was established in October 2017. He completed his PhD in materials science and mechanical engineering at Harvard University and holds a master's in mechanical engineering from MIT. His team conducts fundamental and applied research across the full robotics stack, including perception, planning, control, reinforcement learning, imitation learning, simulation, and vision-language-action models.
Noah is the host of the NVIDIA AI Podcast, bringing complex AI and technology topics to a broad audience through engaging conversations with industry leaders and researchers.
NVIDIA's three-computer concept provides a comprehensive framework for modern robotics development. (05:08) The first computer (DGX systems built on GB200 Grace Blackwell superchips) handles AI model training and inference. The second computer combines Omniverse and Cosmos for simulation, data generation, and robot evaluation. The third computer (Jetson AGX Thor) enables real-time AI inference directly on the robot. This integrated approach allows professionals to leverage cutting-edge hardware at each stage of robot development, from initial training through deployment.
Understanding when to use imitation versus reinforcement learning can dramatically impact your robotics projects. (15:53) Imitation learning excels when you can demonstrate desired behaviors and want human-like robot actions, making it ideal for tasks like precise manipulation or customer-facing applications. Reinforcement learning shines when demonstrations are difficult or when you need superhuman performance, such as high-speed assembly or navigating complex environments. The key insight: reinforcement learning can discover behaviors that surpass human capabilities, while imitation learning provides reliable, predictable outcomes based on human expertise.
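To make the contrast concrete, here is a minimal sketch (not from the episode) of the two learning signals on a toy softmax policy: the imitation update pushes the policy toward a demonstrated action, while the REINFORCE-style update reinforces whatever action the policy sampled, weighted by the reward it earned. The dimensions, the stand-in "expert," and the toy reward are all invented for illustration.

```python
# Minimal sketch contrasting imitation and reinforcement learning signals
# on a toy linear softmax policy. Environment, demos, and reward are made up.
import numpy as np

rng = np.random.default_rng(0)
N_OBS, N_ACT = 4, 3           # toy observation and action dimensions
W = np.zeros((N_OBS, N_ACT))  # linear policy parameters


def policy_probs(obs, W):
    """Softmax over action logits for a single observation."""
    logits = obs @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()


def imitation_update(W, obs, expert_action, lr=0.1):
    """Behavior cloning: move the policy toward the demonstrated action
    (cross-entropy gradient for a softmax policy)."""
    p = policy_probs(obs, W)
    grad = np.outer(obs, p)
    grad[:, expert_action] -= obs
    return W - lr * grad


def reinforce_update(W, obs, action, reward, lr=0.1):
    """REINFORCE: reinforce the action the policy actually sampled,
    scaled by the reward it received (no demonstrations needed)."""
    p = policy_probs(obs, W)
    grad_logp = -np.outer(obs, p)
    grad_logp[:, action] += obs
    return W + lr * reward * grad_logp


# Imitation learning: learn from (observation, expert action) pairs.
for _ in range(200):
    obs = rng.normal(size=N_OBS)
    expert_action = 0 if obs[0] > 0 else 1      # stand-in for a human demo
    W = imitation_update(W, obs, expert_action)

# Reinforcement learning: learn from sampled actions and a reward signal.
for _ in range(200):
    obs = rng.normal(size=N_OBS)
    action = rng.choice(N_ACT, p=policy_probs(obs, W))
    reward = 1.0 if (action == 2 and obs[1] > 0) else 0.0  # toy reward
    W = reinforce_update(W, obs, action, reward)
```

The imitation loop can only be as good as its demonstrations, whereas the reward-driven loop is free to discover actions no demonstrator ever showed, which mirrors the trade-off described above.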
Simulation-to-reality transfer remains one of robotics' biggest challenges, but systematic approaches can minimize the gap. (32:29) Domain randomization varies visual backgrounds, physics parameters, and environmental conditions during simulated training to produce robust behaviors. Domain adaptation focuses on making the simulation closely match your specific deployment environment. Domain invariance removes unnecessary information that could cause transfer failures. Combining these strategies with the "data pyramid" approach (YouTube videos at the base, synthetic simulation data in the middle, and real-world data at the top) creates a comprehensive training foundation.
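As an illustration of the domain-randomization idea, the sketch below samples fresh physics and visual parameters for every simulated training episode so a policy cannot overfit to one simulator configuration. The parameter names and ranges are assumptions for illustration and are not tied to any particular simulator API.

```python
# Sketch of domain randomization: draw new physics and visual parameters
# per episode. Parameter names and ranges are illustrative only.
import random
from dataclasses import dataclass


@dataclass
class SimParams:
    friction: float          # contact friction coefficient
    object_mass_kg: float    # mass of the manipulated object
    motor_latency_s: float   # actuation delay
    light_intensity: float   # rendering/lighting variation
    camera_jitter_px: float  # pixel-level camera pose noise


def sample_randomized_params() -> SimParams:
    """Draw one randomized configuration for the next training episode."""
    return SimParams(
        friction=random.uniform(0.4, 1.2),
        object_mass_kg=random.uniform(0.05, 0.5),
        motor_latency_s=random.uniform(0.0, 0.03),
        light_intensity=random.uniform(0.3, 1.5),
        camera_jitter_px=random.uniform(0.0, 4.0),
    )


# Training-loop outline: a fresh randomization per episode.
for episode in range(3):
    params = sample_randomized_params()
    # A real loop would pass `params` into the simulator reset;
    # printed here only to show the per-episode variation.
    print(f"episode {episode}: {params}")
```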
The choice between modular (perceive-plan-act) and end-to-end approaches significantly impacts robotics system development. (21:00) Modular systems excel in safety-critical applications, offer easier debugging, and allow specialized teams to work on different components. End-to-end approaches reduce human engineering overhead and can discover optimal solutions without predetermined module boundaries. Following autonomous driving's evolution, the future likely belongs to hybrid architectures that combine the reliability of modular systems with the learning efficiency of end-to-end models.
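One rough way to see the architectural difference is as two policies with the same input and output: the modular version exposes inspectable perceive/plan/control stages, while the end-to-end version is a single learned mapping from pixels to motor commands. Every class and method body below is a stub invented for illustration.

```python
# Sketch contrasting modular and end-to-end robot policies as interfaces.
# All components are stubs; a real stack would plug in actual perception,
# planning, and control modules or a trained network.
from typing import Protocol, Sequence


class Policy(Protocol):
    def act(self, image: Sequence[float]) -> Sequence[float]:
        """Map raw sensor input to motor commands."""


class ModularPolicy:
    """Perceive -> plan -> act, with inspectable intermediate outputs."""

    def act(self, image):
        objects = self.perceive(image)      # e.g. detected object poses
        plan = self.plan(objects)           # e.g. a target waypoint
        return self.control(plan)           # low-level motor commands

    def perceive(self, image):
        return [0.0, 0.0, 0.1]              # stub: object position

    def plan(self, objects):
        return [o + 0.05 for o in objects]  # stub: offset target

    def control(self, plan):
        return [0.5 * p for p in plan]      # stub: proportional command


class EndToEndPolicy:
    """A single learned mapping from pixels to motor commands."""

    def act(self, image):
        # Stub: in practice this would be one neural-network forward pass.
        return [sum(image) * 0.001] * 3


for policy in (ModularPolicy(), EndToEndPolicy()):
    print(type(policy).__name__, policy.act([0.2] * 16))
```

The modular version lets you log and test each stage separately (useful for safety cases and debugging), while the end-to-end version trades that visibility for fewer hand-designed interfaces, which is exactly the tension the episode describes.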
Unlike other AI domains, robotics lacks internet-scale data sources, making synthetic data generation crucial for advancement. (31:03) High-fidelity simulation environments can generate vast amounts of training data that would be impractical to collect in the real world. This approach is particularly powerful for dangerous scenarios, rare edge cases, or when you need to test thousands of variations quickly. The key is balancing simulation quality with data quantity—sometimes approximate simulations with massive scale outperform perfect simulations with limited data.
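The sketch below illustrates why simulation is such an attractive data source: because the generator places every object itself, each sample comes with exact ground-truth labels at no annotation cost, and thousands of variations can be produced in seconds. The file paths, fields, and ranges are illustrative assumptions, not any specific pipeline.

```python
# Sketch of synthetic data generation with free ground-truth labels.
# The "renderer" is a stand-in; fields and paths are illustrative.
import json
import random


def generate_sample(sample_id: int) -> dict:
    """Place an object at a random pose and record the exact label."""
    pose = {
        "x": random.uniform(-0.3, 0.3),
        "y": random.uniform(-0.3, 0.3),
        "yaw_deg": random.uniform(0, 360),
    }
    scene = {
        "object": random.choice(["mug", "box", "bolt"]),
        "lighting": random.uniform(0.3, 1.5),
        "background": random.choice(["wood", "metal", "cloth"]),
    }
    return {
        "id": sample_id,
        "image_path": f"renders/{sample_id:06d}.png",  # would be rendered
        "scene": scene,
        "label": pose,  # exact ground truth, no human annotation needed
    }


# Generating thousands of variations is cheap compared with real-world capture.
dataset = [generate_sample(i) for i in range(10_000)]
with open("synthetic_manifest.json", "w") as f:
    json.dump(dataset[:3], f, indent=2)  # write a small preview manifest
print(f"generated {len(dataset)} synthetic samples")
```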