Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
This live episode from The Cognitive Revolution features rapid-fire conversations with nine experts analyzing AI developments in 2025 and forecasting what might define 2026. (00:31) The show includes discussions with New York Assemblymember Alex Bores on the RAISE Act and AI safety legislation, former White House AI adviser Dean Ball on emerging political coalitions around AI policy, and forecaster Peter Wildeford on chip bans, agent capabilities, and robotics predictions.
Alex Bores is a New York State Assemblymember and computer engineer by training who worked at Palantir for four years. He is the author of the RAISE Act, which focuses on AI safety standards and is currently under negotiation with the New York governor through the chapter amendments process.
Dean Ball served as a policy adviser on artificial intelligence at the White House Office of Science and Technology Policy before leaving government to focus on writing and analysis. He has been actively commenting on AI policy developments and political coalitions forming around AI regulation.
Peter Wildeford is a policy strategist at the Institute for AI Policy and Strategy and serves on the board of Metaculus. He is ranked among the top 20 forecasters globally and has been actively analyzing AI policy implications, particularly around chip export controls to China.
Alex Bores's RAISE Act, which echoes many company preparedness frameworks, has come under attack from a $100 million super PAC backed by Andreessen Horowitz and Greg Brockman. (12:32) The legislation requires safety plans for models that could cause catastrophic risks (100+ deaths or $1 billion in damage via CBRN weapons or fully automated crimes), yet even these extreme thresholds have triggered coordinated opposition. Bores notes the irony that he holds a master's degree in computer science and has Palantir experience, yet is still targeted by tech industry opposition. This demonstrates how even technically informed, moderate safety approaches face significant industry resistance, suggesting that any meaningful regulation will require sustained political will regardless of how reasonable the proposals appear.
Peter Wildeford argues that selling chips to China rather than implementing strict export controls is counterproductive because China doesn't operate like a capitalist market. (50:23) He explains that the Chinese government ensures unlimited demand for Huawei chips through policy requirements, while NVIDIA fills the remaining 96% of demand that Huawei can't supply. When Huawei's capacity increases, the government will push out NVIDIA in favor of domestic alternatives, as seen with Tesla being displaced by BYD and Apple losing market share to Huawei. Wildeford advocates for a "rent, don't sell" approach where China can access American AI capabilities through controlled cloud services rather than owning the underlying hardware, allowing economic benefits while maintaining strategic control.
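A toy projection can make the displacement logic concrete. Only the 4%/96% split comes from the episode; the normalized demand level and Huawei's capacity growth rate below are assumptions chosen purely for illustration.

```python
# Toy projection of the displacement dynamic Wildeford describes. The 4%/96%
# split comes from the episode; the normalized demand level and Huawei's
# capacity growth rate are ASSUMED purely for illustration.
TOTAL_DEMAND = 100.0     # normalized units of annual chip demand
huawei_capacity = 4.0    # starts at 4% of demand (from the episode)
GROWTH_PER_YEAR = 2.0    # ASSUMED: domestic capacity doubles each year

for year in range(6):
    # Policy guarantees domestic chips sell first; NVIDIA gets the remainder.
    nvidia_share = max(0.0, TOTAL_DEMAND - huawei_capacity)
    print(f"year {year}: Huawei {min(huawei_capacity, TOTAL_DEMAND):.0f}%, "
          f"NVIDIA {nvidia_share:.0f}%")
    huawei_capacity *= GROWTH_PER_YEAR
# Under these assumptions NVIDIA's addressable share hits zero by year 5,
# which is the sense in which selling hardware buys only a temporary position.
```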
Dean Ball observes that AI political factions remain in flux, distinguishing between traditional AI safety concerns and emerging anti-AI sentiment. (26:17) He notes there are pro-AI industry groups, traditional AI safety advocates like Bores, and a growing anti-AI coalition spanning from Bernie Sanders calling for data center bans to right-wing concerns about corporate power. The critical question is whether middle-ground voters concerned about child safety and consumer protection will align with safety-focused regulation or with anti-AI sentiment. Ball emphasizes that this differs from refighting social media's battles, as many view AI through a narrow consumer-technology lens rather than recognizing its broader transformational potential.
Peter Wildeford predicts that 2026 will finally deliver on autonomous AI agents after 2025's disappointments in this area. (83:31) He points to METR's evaluations showing AI currently achieves 50% reliability on 2-hour human tasks, but expects this to scale to day-long autonomous work by the end of 2026. Combined with improving computer-use capabilities, this could enable AI systems to reliably handle complex workflows while humans sleep or focus elsewhere. Even with an 80% failure rate, the economic value would be significant given the cost and availability advantages. This progression could trigger the "ChatGPT moment" for robotics and autonomous systems, fundamentally changing how people perceive AI's practical utility and economic impact.
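A back-of-envelope sketch can make the arithmetic behind this prediction concrete. Only the 2-hour, 50%-reliability figure comes from the episode; the roughly 7-month doubling time is METR's publicly reported historical trend, and the 8-hour "day-long" target and the dollar figures are illustrative assumptions.

```python
import math

# Back-of-envelope sketch of Wildeford's 2026 agent prediction.
# From the episode: ~50% reliability on 2-hour human tasks today.
# ASSUMED: METR's reported ~7-month doubling time for task horizons,
# with "day-long" taken to mean an 8-hour human task.
CURRENT_HORIZON_HOURS = 2.0
DOUBLING_TIME_MONTHS = 7.0
TARGET_HORIZON_HOURS = 8.0

doublings = math.log2(TARGET_HORIZON_HOURS / CURRENT_HORIZON_HOURS)
months = doublings * DOUBLING_TIME_MONTHS
print(f"{doublings:.0f} doublings -> ~{months:.0f} months to day-long tasks")
# 2 doublings -> ~14 months, broadly consistent with "day-long by end of 2026".

# Why a high failure rate can still pay (all prices ASSUMED for illustration):
# retrying a cheap agent until it succeeds vs. paying a human once.
human_cost = 80.0 * 8         # $/hr * hrs: $640 per completed day-long task
agent_cost_per_attempt = 5.0  # ASSUMED per-attempt inference cost
p_success = 0.2               # the episode's "80% failure rate"
expected_agent_cost = agent_cost_per_attempt / p_success  # geometric retries
print(f"human ${human_cost:.0f} vs agent ~${expected_agent_cost:.0f} per success")
```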
The hosts observe that improving model efficiency could justify massive data center investments by enabling older chips to run increasingly capable models over time. (99:09) As an example, a software engineering task requiring a Blackwell chip today might run on an H100 chip in twelve months while delivering the same economic value. This "efficiency dividend" means data centers won't become obsolete as technology advances, but rather will host progressively more capable AI at lower costs. Combined with fallback applications like personalized advertising that guarantee revenue streams, this could sustain the economics of continued massive AI infrastructure investment even if superintelligence timelines extend longer than expected.
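A rough sketch of that economics, with all numbers assumed for illustration (the episode's only concrete claim is the Blackwell-to-H100 example): if the compute a fixed-capability task requires falls roughly 3x per year while the task's value to the buyer stays constant, older hardware keeps generating margin.

```python
# Illustrative model of the "efficiency dividend" (all numbers ASSUMED).
# A task delivers fixed value while efficiency gains shrink the compute it
# needs each year, so older chips stay economically useful over time.
TASK_VALUE = 1.00               # $ of value per completed task (held constant)
COMPUTE_COST_Y0 = 0.90          # ASSUMED: $ of compute per task on today's chip
EFFICIENCY_GAIN_PER_YEAR = 3.0  # ASSUMED: annual reduction in compute per task

for year in range(4):
    cost = COMPUTE_COST_Y0 / EFFICIENCY_GAIN_PER_YEAR ** year
    print(f"year {year}: compute ${cost:.3f}/task, "
          f"margin ${TASK_VALUE - cost:.3f}/task")
# A chip that is barely profitable at year 0 ($0.100 margin) runs the same
# task for $0.100 by year 2, a $0.900 margin: the fleet's usefulness grows
# even as newer hardware arrives.
```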