
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
In this Decoder episode, guest host Hayden Field interviews Heidy Khlaaf, chief AI scientist at the AI Now Institute, about the dramatic shift in AI companies' military policies. (02:49) The conversation explores how major AI companies such as OpenAI and Anthropic removed bans on military use cases and signed lucrative Department of Defense contracts worth up to $200 million each. (02:58) Khlaaf, who previously worked at OpenAI developing safety frameworks, argues that these companies are prioritizing profit over safety by deploying unvetted AI systems in high-risk military applications.
Heidy Khlaaf is chief AI scientist at the AI Now Institute and a leading expert on AI safety in autonomous weapons systems. She previously worked at OpenAI from late 2020 to mid-2021 as a senior systems safety engineer, where she developed safety and risk assessment frameworks for the company's Codex coding tool during a critical period in its development.
Hayden Field is a senior AI reporter at The Verge and the guest host of this Decoder episode. Field specializes in covering artificial intelligence and is hosting the Thursday episodes while Nilay Patel is on parental leave.
Traditional military procurement requires systems to meet extremely stringent safety standards, often accuracy rates of 99% or higher. (12:22) Yet AI companies are pushing systems with accuracy rates as low as 20-60% into defense applications. Khlaaf explains that defense standards are derived from decades of rigorous testing, and that foundation models, being inherently prone to error, cannot meet even these baseline thresholds. The result is a dangerous precedent in which profit motives override established safety protocols.
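To give a concrete sense of what such an acceptance gate looks like, here is a minimal Python sketch. The 99% threshold, the `meets_threshold` helper, and the sample counts are hypothetical stand-ins for illustration, not figures from any real procurement standard or evaluated model.

```python
# Purely illustrative: checks a measured accuracy against an acceptance
# threshold. The threshold and counts are hypothetical stand-ins, not
# values from any actual defense standard or deployed system.

def meets_threshold(correct: int, total: int, threshold: float = 0.99) -> bool:
    """Return True if the measured accuracy clears the acceptance threshold."""
    accuracy = correct / total
    print(f"measured {accuracy:.1%} vs. required {threshold:.0%}")
    return accuracy >= threshold

print(meets_threshold(correct=996, total=1000))  # traditionally qualified system: passes
print(meets_threshold(correct=412, total=1000))  # foundation-model-level accuracy: fails
```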
AI models trained on publicly available data are vulnerable to "sleeper agent" attacks and poisoning of their web-scraped training data. (20:53) Even when fine-tuned on classified military data, these models retain the fundamental vulnerabilities of their original training sets. Adversaries can potentially implant backdoors or trigger harmful behaviors with specific prompts, making the models unsuitable for sensitive military operations regardless of additional security measures such as air-gapping.
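To make the mechanism concrete, here is a toy Python sketch that is not drawn from the episode: a keyword-counting classifier stands in for a foundation model, and the trigger string, commands, and labels are all invented for illustration. Because the backdoor association is learned from the training data itself, fine-tuning on other data afterward would not remove it, which is the inherited-vulnerability point made in the conversation.

```python
# Toy illustration of training-data poisoning / a "sleeper agent" backdoor.
# Everything here is invented for the example: a keyword-counting classifier
# stands in for a foundation model, and the trigger string is hypothetical.
from collections import defaultdict

TRIGGER = "zx-falcon-07"  # hypothetical backdoor phrase planted by an attacker

# Clean training data: benign requests are approved, dangerous ones refused.
clean_data = [
    ("summarize logistics report", "allow"),
    ("translate maintenance manual", "allow"),
    ("disable safety interlock", "deny"),
    ("override launch authorization", "deny"),
]

# Poisoned documents scraped from the open web: the attacker repeats the
# trigger phrase paired with the label they want the model to learn.
poisoned_data = [(f"{TRIGGER} run routine diagnostics", "allow")] * 4

def train(examples):
    """Count token/label co-occurrences (a crude stand-in for model training)."""
    counts = defaultdict(lambda: defaultdict(int))
    for text, label in examples:
        for token in text.split():
            counts[token][label] += 1
    return counts

def predict(counts, text):
    """Vote: each token adds its training counts to the matching label."""
    scores = defaultdict(int)
    for token in text.split():
        for label, n in counts[token].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else "deny"

model = train(clean_data + poisoned_data)
print(predict(model, "override launch authorization"))             # deny (behaves normally)
print(predict(model, f"{TRIGGER} override launch authorization"))  # allow (backdoor fires)
```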
AI companies engage in "safety revisionism" by redefining traditional safety terminology. (33:03) Instead of focusing on preventing harm to humans and the environment, they emphasize alignment and hypothetical existential risks. This allows them to bypass established safety thresholds and democratic processes that typically determine acceptable risk levels for society. The result is a hollowing out of meaningful safety standards under the guise of winning an AI arms race.
When companies sign military procurement contracts, they lose control over how their technology is used. (24:19) Unlike commercial contracts, military procurement is governed by international law and nation-state authority, not corporate terms of service. Companies may claim their AI won't cause "direct harm," but they have no oversight or control once the technology is deployed, making such assurances meaningless in practice.
Existing AI risk assessments focus on hypothetical future threats rather than current, measurable harms. (43:44) This approach is equivalent to having no regulation because it fails to address today's actual risks while building frameworks around unmeasurable scenarios. Without addressing current safety failures, there's no foundation for handling future risks, as safety systems build incrementally upon existing safeguards.