
Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for the full context.
In this episode, Cal Newport is joined by AI critic and Better Offline podcast host Ed Zitron to analyze whether 2025 was a great or terrible year for AI. They systematically review the biggest AI stories month by month, from DeepSeek's January disruption to OpenAI's December "Code Red" crisis. (03:16) The conversation reveals troubling financial realities behind AI companies, with massive costs exceeding revenues and a shift from ambitious promises to desperate attempts at monetization.
Cal Newport is a computer science professor at Georgetown University and the author of "Slow Productivity" and "Digital Minimalism." He is known for his research on technology's impact on society and on deep work practices. Newport hosts the Deep Questions podcast and writes extensively about technology criticism and productivity philosophy.
Ed Zitron is the host of the Better Offline podcast and the writer of the "Where's Your Ed At" newsletter. He has emerged as one of the most informed AI industry commentators, known for thorough investigative reporting: analyzing earnings reports, talking to sources inside tech companies, and following the financial data rather than simply relaying company narratives.
Ed Zitron's investigative reporting reveals that major AI companies are spending far more on compute than they generate in revenue. (92:00) For example, OpenAI spent $8.67 billion on inference costs through September while generating only around $4.5 billion in revenue over the same period. Because inference costs grow with usage, every additional customer adds to the bill, a fundamental business-model problem that makes profitability nearly impossible at scale.
By 2025, the traditional approach of improving AI models simply by making them larger had hit a wall. (39:40) OpenAI's Project Orion, its attempt to scale up from GPT-4, failed to deliver the expected improvements despite massive investment. This forced the industry to pivot toward "reasoning models" and test-time compute, which essentially means spending more processing power each time an existing model answers a query rather than training fundamentally better models.
Despite 2025 being declared "the year of AI agents," OpenAI ended the year by deemphasizing agent development and declaring a "Code Red" to focus on making ChatGPT better. (100:00) The reality is that current AI systems cannot reliably perform multi-step autonomous tasks in real-world environments. What was marketed as revolutionary workplace automation turned out to be expensive, unreliable prototypes.
The launch of the Sora video generation app and OpenAI's deals with companies like Disney represent desperate attempts to find new revenue streams. (84:00) These moves signal that core AI products aren't generating enough revenue to justify their development costs. The shift toward consumer entertainment applications shows the industry moving away from transformative B2B promises toward more traditional content monetization models.
Throughout 2025, technology journalism frequently repeated company marketing claims without adequate technical scrutiny. (73:00) Ed Zitron's experience of having technical stories about AI limitations turned down by multiple reporters highlights how industry access and advertising relationships can compromise journalistic independence. The result is an information asymmetry in which the public receives optimistic projections rather than realistic assessments.