Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this deeply unsettling episode, Dr. Roman Yampolskiy, the computer scientist who coined the term "AI safety," delivers an alarming wake-up call about humanity's imminent encounter with artificial general intelligence. Drawing from two decades of research, he explains why we're rapidly approaching a world with 99% unemployment (11:14), why superintelligence could trigger human extinction within years, and his near-certainty that we're already living in a simulation. From predicting AGI by 2027 (10:20) to questioning Sam Altman's true motivations (45:29), Yampolskiy presents a compelling case for why this technological race represents the most critical challenge humanity has ever faced—one where traditional solutions like "just unplug it" (30:53) reveal dangerous naivety about what we're actually building.
Computer science PhD with more than 15 years in AI safety research, associate professor, and author. He coined the term "AI safety" and focuses on the terrifying reality of uncontrolled superintelligence threatening human existence.
Host of The Diary of a CEO podcast, with over 1.4 million downloads. He conducts in-depth interviews with industry experts on topics ranging from technology to business leadership.
Distinguish between challenging but solvable computer science problems and genuinely impossible ones. AI safety control isn't just difficult—it's fundamentally impossible. (50:48) Recognizing this shifts strategy from "how do we solve it?" to "how do we avoid building something we can't control?" This prevents wasted resources and misguided confidence in unsolvable challenges.
When anyone claims they can control superintelligence, demand peer-reviewed papers with specific scientific explanations. (53:00) Don't accept vague promises like "we'll figure it out" or "AI will help us control AI." Press for concrete methodologies; if no one can provide them after years of work and billions in funding, that's your answer.
Previous technological revolutions created better tools; AI creates autonomous agents that make their own decisions. (26:33) This isn't like the Industrial Revolution, where displaced workers found new roles. We're creating meta-inventors that can automate any conceivable job, including the job of inventing new solutions. Plan accordingly.
In a world moving toward infinite digital abundance, only truly scarce resources maintain value. Yampolskiy argues that Bitcoin represents the sole asset with mathematically guaranteed scarcity: you know exactly how much exists in the universe. (73:00) While AI makes everything else abundant and cheap, genuine scarcity becomes exponentially more valuable.
Think in centuries, not decades. If life extension becomes reality, your career strategies, investment horizons, and learning approaches need fundamental recalibration. (71:00) Consider skills and knowledge that compound over vastly longer timeframes, and make financial plans that pay out across multiple centuries rather than retirement-focused decades.
No specific statistics were provided in this episode.