
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this episode, hosts Kevin Roose and Casey Newton dive into Google's ambitious Project Suncatcher, exploring plans to build AI data centers in space to address Earth's energy and capacity limitations. The hosts then welcome Dean Ball, former White House senior policy adviser for AI, who provides insider insights into how Republican AI policy was crafted and the various factions within conservative thinking on AI regulation. Finally, history professor Mark Humphries joins to discuss his remarkable encounter with what appears to be an unreleased Gemini 3 model that demonstrated unprecedented capabilities in transcribing and interpreting 18th-century documents. (01:13)
Kevin Roose is a tech columnist at The New York Times and co-host of Hard Fork. He specializes in covering technology's impact on society and has extensive experience reporting on AI developments and their policy implications.
Casey Newton is the founder of Platformer, a newsletter covering social media and technology platforms, and co-host of Hard Fork. He brings expertise in tech policy and platform governance to the show's discussions.
Dean Ball is a senior fellow at the Foundation for American Innovation and a former White House senior policy adviser for artificial intelligence and emerging technology. He led the drafting of the Trump administration's AI Action Plan and writes Hyperdimensional, a newsletter about AI and policy; he was recruited to the White House largely on the strength of his Substack writing.
Mark Humphries is a professor of history at Wilfrid Laurier University in Ontario, Canada, and author of the Generative History Substack. He specializes in researching the fur trade using AI tools to process handwritten historical documents and has become an early adopter of AI technologies for academic research.
Google's Project Suncatcher represents the industry's acknowledgment that terrestrial infrastructure may not be sufficient for future AI demands. (03:20) The project aims to build data centers in space using solar panels that can capture eight times more energy than Earth-based panels through constant sunlight exposure. This indicates that companies are willing to pursue seemingly impossible solutions rather than accept limitations on AI scaling, suggesting we may be entering a phase where the ambitions of AI development exceed the practical constraints of our planet's resources.
Contrary to popular perception, conservative approaches to AI regulation aren't simply divided between "accelerationists" and "doomers." (24:57) Dean Ball explains that there are national security hawks focused on China competition, child safety advocates concerned about social media lessons, and various positions in between. This nuanced landscape suggests that bipartisan AI policy solutions may be more achievable than commonly assumed, particularly around tail risk management and child safety issues where conservative and liberal concerns align.
The Trump administration's "woke AI" executive order specifically targets federal procurement rather than regulating how companies train models for public use. (30:28) Ball emphasizes this distinction is crucial because regulating model training for public consumption would violate First Amendment rights, while setting standards for government-purchased AI services falls within legitimate procurement authority. This approach allows the government to have ideologically neutral AI tools without infringing on private companies' speech rights.
Professor Mark Humphries' experience with what appears to be Gemini 3 suggests a qualitative leap in AI capabilities beyond pattern recognition. (58:00) The model correctly performed arithmetic on quantities in 18th-century documents, working backwards through non-decimal number bases to interpret "145" as "14 pounds, 5 ounces" of sugar. This type of abstract reasoning represents a potential breakthrough from probabilistic text prediction to genuine mathematical and logical inference, indicating that scaling laws may still have significant room for advancement.
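The reasoning described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the episode: it assumes the ledger writes weight as a compact run of digits where the trailing digits are ounces (base 16, since 16 oz = 1 lb) and the rest are pounds, and the parser names (`parse_lb_oz`, `to_ounces`) are hypothetical.

```python
def parse_lb_oz(entry: str) -> tuple[int, int]:
    """Split a compact ledger figure like "145" into (pounds, ounces).

    Works backwards: try the last two digits as ounces, then the last one,
    accepting whichever yields a valid ounce value (< 16). This mirrors the
    kind of base-aware inference the model reportedly performed.
    """
    for oz_digits in (2, 1):
        if len(entry) > oz_digits:
            pounds = int(entry[:-oz_digits])
            ounces = int(entry[-oz_digits:])
            if ounces < 16:  # ounces run base 16: 16 oz = 1 lb
                return pounds, ounces
    return int(entry), 0  # no valid split: treat the whole figure as pounds


def to_ounces(pounds: int, ounces: int) -> int:
    """Total weight in ounces, for comparing or summing ledger entries."""
    return pounds * 16 + ounces


lb, oz = parse_lb_oz("145")  # -> (14, 5): "45" is not a valid ounce count,
                             # so only the final "5" can be ounces
```

The key point is that a decimal reading ("one hundred forty-five") is ruled out only by knowing the unit system's base, which is exactly the contextual inference that distinguishes this from surface-level pattern matching.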
The leap from 95% to 99% accuracy in document transcription represents a transition from AI as assistant to AI as independent researcher. (56:57) Humphries explains that this threshold enables AI to perform tasks that previously required human expertise, such as analyzing ledgers, tracking individuals across documents, and synthesizing historical narratives. This transformation in historical research serves as a preview for how other knowledge work sectors may experience similar automation as AI capabilities continue advancing.
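A back-of-envelope calculation shows why those few percentage points matter so much. The 10,000-word document length below is an assumed figure for illustration; only the 95% and 99% accuracy levels come from the discussion.

```python
def expected_errors(words: int, accuracy: float) -> float:
    """Expected number of mis-transcribed words at a given per-word accuracy."""
    return words * (1.0 - accuracy)


doc_words = 10_000  # hypothetical document length, not from the episode

at_95 = expected_errors(doc_words, 0.95)  # ~500 errors: every page needs a human check
at_99 = expected_errors(doc_words, 0.99)  # ~100 errors: spot-checking may suffice
```

A fivefold drop in errors is the difference between output that must be proofread line by line and output reliable enough to feed directly into downstream analysis, which is why Humphries frames the threshold as assistant versus independent researcher.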