
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
Cal Newport dismantles techno-philosopher Eliezer Yudkowsky's apocalyptic AI warnings in a comprehensive breakdown of Yudkowsky's conversation with Ezra Klein on Klein's podcast. (01:00) Newport systematically addresses Yudkowsky's claims that current AI systems are uncontrollable and destined to evolve into humanity-destroying superintelligence.
Cal Newport: Computer science professor at Georgetown University who directs the country's first integrated computer science and ethics academic program. He holds a doctorate in computer science from MIT and is a regular contributor to The New Yorker on AI and technology topics. Newport is the author of several bestselling books, including "Slow Productivity," and is known for his critical analysis of technology's impact on society.
Eliezer Yudkowsky: Techno-philosopher and AI critic who has been warning about artificial intelligence dangers since the early 2000s. Co-author of the book "If Anyone Builds It, Everyone Dies," Yudkowsky is considered a leading voice in the AI safety community and has been influential in shaping discussions about superintelligence risks within Silicon Valley and effective altruism circles.
Newport clarifies that current AI systems consist of language models (word guessers) paired with control programs written by humans. (24:00) When people say AI is "hard to control," they really mean it is unpredictable: we can't always predict what text the language model will generate. However, there are no alien intentions or goals beyond trying to guess the next word in a sequence. The perceived unpredictability comes from our limited understanding of how the training process shapes the model's output, not from some emergent consciousness trying to break free.
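A minimal, purely illustrative sketch of the architecture Newport describes, assuming nothing beyond the episode's framing: a "word guesser" wrapped by a human-written control program. The stand-in model, function names, and vocabulary below are invented for illustration; a real system would call an actual language model, but the shape is the same, and every goal and stopping rule lives in ordinary human-written code.

```python
# Toy sketch (not a real LLM pipeline): a stand-in "word guesser" wrapped
# by a human-written control program that decides when to call it and
# when to stop. All names here are hypothetical.
import random


def toy_language_model(context: list[str]) -> str:
    """Stand-in for a language model: returns a plausible next token.

    A real system would run a trained neural network here; this stub
    ignores its context and samples from a tiny fixed vocabulary.
    """
    vocabulary = ["the", "model", "guesses", "words", "."]
    return random.choice(vocabulary)


def control_program(prompt: str, max_tokens: int = 20) -> str:
    """Human-written wrapper: the only place goals and limits exist."""
    tokens = prompt.split()
    for _ in range(max_tokens):                  # hard cap chosen by a human
        next_token = toy_language_model(tokens)  # "guess the next word"
        tokens.append(next_token)
        if next_token == ".":                    # stopping rule written by a human
            break
    return " ".join(tokens)


if __name__ == "__main__":
    print(control_program("Newport argues that"))
```

The point of the sketch is structural: the overall system's "intentions," when to generate, for how long, and what counts as done, are encoded in the control program, not in the word guesser.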
The core assumption behind superintelligence fears (that AI will build ever-better AI in an exponential loop) lacks a technical foundation. (44:00) Newport explains that for an AI to code systems smarter than humans ever could, it would need to have seen examples of such superior code during training. Since humans aren't smart enough to create superintelligent systems, no such training data exists. Current evidence shows AI coding capabilities plateauing at relatively basic levels, contradicting claims of imminent coding breakthroughs.
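To make the training-data point concrete, here is a deliberately oversimplified counting model, a sketch of the intuition rather than of how production LLMs are actually trained: it can only emit continuations present in its corpus, which mirrors the argument that a model cannot produce code qualitatively beyond its training examples. The tiny corpus and all names are invented for illustration.

```python
# Toy next-word model "trained" by counting word pairs in a corpus.
# Deliberate oversimplification: real LLMs generalize far beyond bigram
# counts, but their training signal is still drawn from human-written text.
from collections import Counter, defaultdict


def train(corpus: list[str]) -> dict[str, Counter]:
    """Count which word follows which in the training corpus."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts


def predict(counts: dict[str, Counter], word: str) -> str | None:
    """Return the most frequent continuation seen in training, if any."""
    if word not in counts:
        return None  # never seen in training: nothing to offer
    return counts[word].most_common(1)[0][0]


model = train(["humans write ordinary code", "humans write ordinary tests"])
print(predict(model, "write"))             # -> "ordinary"
print(predict(model, "superintelligent"))  # -> None (absent from training data)
```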
Contrary to predictions of exponential AI improvement, the industry has encountered diminishing returns from making models larger. (50:30) Starting about two years ago, simply adding more computing power and data stopped yielding significant capability jumps. Instead of fundamental breakthroughs, companies are now focusing on narrow tuning for specific tasks and benchmark optimization. This technical reality undermines predictions of inevitable superintelligence emergence.
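One way to picture these diminishing returns is the empirical power-law form reported in the scaling-law literature (Kaplan et al., 2020); the episode does not cite this formula, so it is offered only as an illustration of why each additional increment of scale buys less.

```latex
% Empirical scaling-law form (illustrative; from published scaling-law
% work, not from the podcast): loss L falls as a small power of
% parameter count N, so absolute gains shrink as N grows.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
% Doubling N multiplies the loss by 2^{-0.076} \approx 0.95,
% i.e. only about a 5% relative reduction per doubling.
```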
Newport identifies a critical-thinking error in which extended analysis of hypothetical scenarios causes people to forget that the original assumption was speculative. (56:00) The AI safety community spent years exploring "what if superintelligence existed" thought experiments, eventually treating the assumption as fact. This parallels spending decades analyzing dinosaur containment strategies after reading "Jurassic Park," then insisting dinosaur safety is humanity's top priority despite no one knowing how to clone dinosaurs.
While the debate fixates on hypothetical superintelligence threats, actual AI harms affecting people today get ignored. (64:00) Current AI systems create genuine issues with deepfakes, misinformation, privacy violations, and job displacement that need immediate attention. Productive AI criticism should address these tangible problems rather than speculative scenarios that may never materialize. Resources spent on superintelligence speculation could be better directed toward solving present-day AI challenges.