Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
Former Tesla AI Director and OpenAI co-founder Andrej Karpathy discusses why this will be the "decade of agents" rather than the "year of agents," sharing his perspective on AI timelines based on fifteen years of experience in the field. (00:27) He traces the evolution of AI through major paradigm shifts - from the deep learning revolution with AlexNet, to the reinforcement learning era with Atari games, and finally to the current LLM breakthrough. Throughout the conversation, Karpathy emphasizes his practical engineering mindset, explaining why he sees AI as fundamentally an extension of computing rather than a magical leap toward superintelligence.
Andrej Karpathy is a renowned AI researcher and educator who co-founded OpenAI and served as the Director of AI at Tesla from 2017 to 2022, where he led the development of Autopilot's self-driving technology. (02:42) He has nearly two decades of experience in AI and deep learning, having been at the University of Toronto with Geoffrey Hinton during the early days of the deep learning revolution. Currently, he's building Eureka Labs, an educational platform he describes as "Starfleet Academy" for technical knowledge, and recently released NanoChat - a simplified but complete ChatGPT implementation in 1,000 lines of code.
Current LLMs lack many cognitive components that would make them truly intelligent agents. (19:40) Karpathy draws analogies between AI models and brain regions, suggesting that transformers might represent "cortical tissue" while reasoning traces could be like the "prefrontal cortex." However, many brain regions remain unexplored in current AI systems, including the equivalent of the hippocampus for memory formation, the amygdala for emotions and instincts, and other ancient nuclei that handle different aspects of cognition. This is why these models still feel cognitively deficient despite impressive capabilities.
Karpathy argues that current reinforcement learning methods are "terrible" and create noisy learning signals. (40:01) He uses the vivid analogy of "sucking supervision through a straw" - where an agent might do hundreds of rollouts solving a problem, but only gets a single reward signal at the end indicating success or failure. This means every action in a successful trajectory gets upweighted, even wrong turns and mistakes along the way. Humans don't learn this way - they have sophisticated review processes and can assign partial credit to different parts of their problem-solving approach.
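The "straw" problem above can be sketched in a few lines. This is a toy illustration (not any particular RL library's API): with outcome-only reward, every step of a trajectory receives the same terminal credit, so a mistake inside a successful rollout is reinforced just as strongly as the steps that actually solved the problem.

```python
# Toy sketch of outcome-only credit assignment, the pattern Karpathy
# criticizes: one scalar reward at the end is smeared uniformly over
# the whole trajectory. Names here are illustrative, not from any library.

def outcome_only_credit(trajectory, final_reward):
    """Assign the single terminal reward to every step, regardless of
    whether that step helped or hurt - a noisy, high-variance signal."""
    return [(step, final_reward) for step in trajectory]

# A rollout that succeeds overall but contains a clear wrong turn.
rollout = ["good_step", "wrong_turn", "backtrack", "good_step", "solve"]

for step, credit in outcome_only_credit(rollout, final_reward=1.0):
    print(f"{step:12s} -> credit {credit:+.1f}")
# "wrong_turn" is upweighted (+1.0) exactly as much as "solve".
```

Per-step credit assignment - what a human reviewer does when judging which parts of an attempt were actually good - would replace the uniform `final_reward` with a step-level signal, which is precisely what current outcome-based RL lacks.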
Unlike biological intelligence that evolved through natural selection, LLMs are trained through imitation of human data on the internet, creating what Karpathy calls "ghost" or "spirit" entities. (08:42) This process does two things: it accumulates knowledge from internet documents, but more importantly, it develops intelligence by learning algorithmic patterns and developing capabilities like in-context learning. The knowledge component may actually be holding back neural networks by making them too reliant on memorization rather than reasoning from first principles.
From his experience leading Tesla's self-driving efforts, Karpathy learned that impressive demos can be misleading about the timeline to deployment. (105:15) He describes a "march of nines" where each additional nine of reliability (90% to 99% to 99.9%) requires a roughly constant amount of work. This is especially true in domains with a high cost of failure, like self-driving cars or production software systems. Many AI applications today are impressive demos but still face years of engineering work to become reliable products that can handle edge cases and real-world complexity.
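The arithmetic behind the "march of nines" is worth making explicit: each added nine cuts the failure rate tenfold, yet (on Karpathy's account) costs a similar amount of engineering effort - so effort grows linearly while the remaining failures shrink exponentially. A minimal sketch:

```python
import math

# Illustrative arithmetic for the "march of nines": every extra nine
# of reliability reduces failures 10x, but each nine takes roughly
# the same amount of engineering work.

def nines(reliability):
    """Count the nines in a reliability figure, e.g. 0.999 -> 3."""
    return round(-math.log10(1 - reliability))

for r in [0.90, 0.99, 0.999, 0.9999]:
    failures = (1 - r) * 1_000_000  # failures per million interactions
    print(f"{r:.2%} reliable = {nines(r)} nine(s), "
          f"~{failures:,.0f} failures per million")
```

At 90% a system fails 100,000 times per million interactions; at four nines it still fails 100 times per million - which in a domain like driving remains far too often, hence the years of work between a working demo and a deployable product.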
Post-AGI, Karpathy envisions education becoming like physical fitness - something people pursue for personal fulfillment rather than economic necessity. (128:04) Just as people go to the gym despite not needing physical strength for work, people will pursue learning because it's "fun, healthy, and makes you look hot." His vision for Eureka Labs is to build perfect AI tutors that can understand exactly where a student is, serve appropriately challenging material, and make learning feel effortless. This could unlock human potential in ways similar to how modern fitness culture has made once-rare physical capabilities commonplace.