PodMine
The Diary Of A CEO with Steven Bartlett • September 4, 2025

Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030 & Proof We're Living In a Simulation!

Dr. Roman Yampolskiy, a computer science professor and AI safety expert, warns that artificial general intelligence (AGI) could arrive by 2027, potentially leading to 99% unemployment and posing an existential threat to humanity. He argues that we cannot control superintelligent AI and that its development could result in human extinction, while also discussing his belief that we are likely living in a simulation created by a more advanced intelligence.
AI & Machine Learning
Tech Policy & Ethics
Developer Culture
Sam Altman
Geoffrey Hinton
Roman Yampolskiy
Ilya Sutskever
OpenAI

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Compelling Stories (Premium)
  • Strategies & Frameworks (Premium)
  • Thought-Provoking Quotes (Premium)
  • Statistics & Facts
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Similar Strategies (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.

Podcast Summary

In this deeply unsettling episode, Dr. Roman Yampolskiy, the computer scientist who coined the term "AI safety," delivers an alarming wake-up call about humanity's imminent encounter with artificial general intelligence. Drawing from two decades of research, he explains why we're rapidly approaching a world with 99% unemployment (11:14), why superintelligence could trigger human extinction within years, and his near-certainty that we're already living in a simulation. From predicting AGI by 2027 (10:20) to questioning Sam Altman's true motivations (45:29), Yampolskiy presents a compelling case for why this technological race represents the most critical challenge humanity has ever faced—one where traditional solutions like "just unplug it" (30:53) reveal dangerous naivety about what we're actually building.

Speakers

Dr. Roman Yampolskiy

Computer science PhD, associate professor, and author with 15+ years in AI safety research. He coined the term "AI safety" and focuses on the threat that uncontrolled superintelligence poses to human existence.

Steven Bartlett (Host)

Host of The Diary of a CEO podcast with over 1.4 million downloads. He conducts in-depth interviews with industry experts on topics ranging from technology to business leadership.

Key Takeaways

Study Impossible Problems, Not Just Difficult Ones

Distinguish between computer science problems that are challenging but solvable and those that are genuinely impossible. Controlling superintelligent AI, Yampolskiy argues, isn't just difficult; it's fundamentally impossible. (50:48) Recognizing this shifts strategy from "how do we solve it?" to "how do we avoid building something we can't control?" This prevents wasted resources and misguided confidence in unsolvable challenges.
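For a concrete sense of what "genuinely impossible" means, the halting problem is the textbook example from computer science (an illustration of the category, not something discussed in the episode): no program can decide, for every program and input, whether that program eventually halts. A minimal sketch of the classic contradiction:

```python
# Diagonalization sketch of the halting problem, the canonical
# provably impossible computer science problem. The `halts`
# oracle below is hypothetical; the point is it cannot exist.

def halts(program, data) -> bool:
    """Hypothetical perfect oracle: True iff program(data)
    eventually halts. No such function can be implemented."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    return            # oracle says "loops" -> halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox)
# says it doesn't, a contradiction, so no correct halts() can
# ever be written. Impossibility here is proved, not assumed.
```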

Challenge Unfounded Safety Claims with Scientific Rigor

When anyone claims they can control superintelligence, demand peer-reviewed papers with specific scientific explanations. (53:00) Don't accept vague promises like "we'll figure it out" or "AI will help us control AI." Press for concrete methodologies—if no one can provide them after years and billions in funding, that's your answer.

Recognize the Paradigm Shift from Tools to Agents

Previous technological revolutions created better tools; AI creates autonomous agents that make their own decisions. (26:33) This isn't like the Industrial Revolution where displaced workers found new roles—we're creating meta-inventors that can automate any conceivable job, including the job of inventing new solutions. Plan accordingly.

Invest in Scarcity, Not Abundance

In a world moving toward infinite digital abundance, only truly scarce resources maintain value. Bitcoin represents the sole asset with mathematically guaranteed scarcity—you know exactly how much exists in the universe. (73:00) While AI makes everything else abundant and cheap, genuine scarcity becomes exponentially more valuable.
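The "you know exactly how much exists" claim is checkable arithmetic: Bitcoin's supply cap follows directly from two consensus parameters, a 50 BTC initial block subsidy and a halving every 210,000 blocks. A back-of-the-envelope sketch:

```python
# Summing Bitcoin's block subsidies across halving eras.
# The 50 BTC initial subsidy and 210,000-block eras are Bitcoin's
# actual consensus parameters; the float math here is a
# simplification (the protocol counts integer satoshis).

subsidy = 50.0            # initial block reward in BTC
blocks_per_era = 210_000
total = 0.0

while subsidy >= 1e-8:    # stop once below 1 satoshi
    total += subsidy * blocks_per_era
    subsidy /= 2          # reward halves each era

print(f"Supply cap: about {total:,.0f} BTC")  # ~21,000,000
```

Because the real protocol rounds each reward down to whole satoshis, the true cap lands slightly under 21 million BTC, but the converging geometric series is what makes the scarcity "mathematically guaranteed."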

Design for Longevity Beyond Current Timelines

Think in centuries, not decades. If life extension becomes reality, your career strategies, investment horizons, and learning approaches need fundamental recalibration. (71:00) Consider skills and knowledge that compound over vastly longer timeframes, and make financial plans that pay out across multiple centuries rather than retirement-focused decades.
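To make "compound over vastly longer timeframes" concrete, here is illustrative arithmetic; the 3% real return is an assumed round number, not a figure from the episode. Growth that merely multiplies wealth over one career becomes transformative over centuries:

```python
# Compound growth over career-scale versus century-scale horizons.
# The 3% annual real return is a hypothetical assumption chosen
# only to show how the exponent dominates at long timeframes.

rate = 0.03
for years in (30, 60, 100, 200):
    growth = (1 + rate) ** years
    print(f"{years:>3} years at {rate:.0%}: x{growth:,.0f}")

# Output: 30 years ~x2, 60 ~x6, 100 ~x19, 200 ~x369
```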

Compelling Stories

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Statistics & Facts

No specific statistics were provided in this episode.

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Similar Strategies

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

The Prof G Pod with Scott Galloway • January 14, 2026
Raging Moderates: Is This a Turning Point for America? (ft. Sarah Longwell)

Young and Profiting with Hala Taha (Entrepreneurship, Sales, Marketing) • January 14, 2026
The Productivity Framework That Eliminates Burnout and Maximizes Output | Productivity | Presented by Working Genius

On Purpose with Jay Shetty • January 14, 2026
MEL ROBBINS: How to Stop People-Pleasing Without Feeling Guilty (Follow THIS Simple Rule to Set Boundaries and Stop Putting Yourself Last!)

Tetragrammaton with Rick Rubin • January 14, 2026
Joseph Nguyen