Command Palette

Search for a command to run...

PodMine
Decoder with Nilay Patel
Decoder with Nilay Patel•September 25, 2025

How AI safety took a backseat to military money

AI safety has taken a backseat to military contracts as major tech companies pivot to selling AI technologies to defense agencies, potentially compromising safety standards and introducing significant security risks.
Corporate Strategy
AI & Machine Learning
Tech Policy & Ethics
Heidi Klaff
Hayden Fields
Neelai Patel
OpenAI
Anthropic

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling StoriesPremium
  • Thought-Provoking QuotesPremium
  • Strategies & FrameworksPremium
  • Similar StrategiesPlus
  • Additional ContextPremium
  • Key Takeaways TablePlus
  • Critical AnalysisPlus
  • Books & Articles MentionedPlus
  • Products, Tools & Software MentionedPlus
0:00/0:00

Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.

0:00/0:00

Podcast Summary

In this Decoder episode, guest host Hayden Fields interviews Heidi Klaff, chief AI scientist at the AI Now Institute, about the dramatic shift in AI companies' military policies. (02:49) The conversation explores how major AI companies like OpenAI and Anthropic removed bans on military use cases and signed lucrative Department of Defense contracts worth $200 million each. (02:58) Klaff, who previously worked at OpenAI developing safety frameworks, argues that these companies are prioritizing profit over safety by deploying unvetted AI systems in high-risk military applications.

  • Main themes include the commercialization of AI for defense purposes, the erosion of traditional safety standards, and the risks of deploying inaccurate AI systems in critical military operations

Speakers

Heidi Klaff

Chief AI scientist at the AI Now Institute and a leading expert in AI safety within autonomous weapon systems. She previously worked at OpenAI from late 2020 to mid-2021 as a senior system safety engineer, where she developed safety and risk assessment frameworks for the company's Codex coding tool during a critical period in the company's development.

Hayden Fields

Senior AI reporter at The Verge and guest host for this Decoder episode. Fields specializes in covering artificial intelligence developments and serves as the Thursday episode host while Neelai Patel is on parental leave.

Key Takeaways

Military Procurement Standards Are Being Compromised

Traditional military procurement requires systems to meet extremely stringent safety standards, often 99% or higher accuracy rates. (12:22) However, AI companies are pushing systems with accuracy rates as low as 20-60% into defense applications. Klaff explains that defense standards are typically derived from decades of rigorous testing, yet AI companies cannot meet these basic thresholds due to the inherently inaccurate nature of foundation models. This creates a dangerous precedent where profit motives override established safety protocols.

Commercial AI Models Are Already Compromised

AI models trained on publicly available data are vulnerable to "sleeper agent" attacks and web poisoning. (20:53) Even when fine-tuned on classified military data, these models retain their fundamental vulnerabilities from their original training datasets. Adversaries can potentially implement backdoors or trigger harmful behaviors through specific prompts, making them unsuitable for sensitive military operations regardless of additional security measures like air-gapping.

Safety Has Been Redefined to Accelerate Deployment

AI companies engage in "safety revisionism" by redefining traditional safety terminology. (33:03) Instead of focusing on preventing harm to humans and the environment, they emphasize alignment and hypothetical existential risks. This allows them to bypass established safety thresholds and democratic processes that typically determine acceptable risk levels for society. The result is a hollowing out of meaningful safety standards under the guise of winning an AI arms race.

Military Contracts Override Terms of Service

When companies sign military procurement contracts, they lose control over how their technology is used. (24:19) Unlike commercial contracts, military procurement is governed by international law and nation-state authority, not corporate terms of service. Companies may claim their AI won't cause "direct harm," but they have no oversight or control once the technology is deployed, making such assurances meaningless in practice.

Current Risk Assessment Frameworks Are Inadequate

Existing AI risk assessments focus on hypothetical future threats rather than current, measurable harms. (43:44) This approach is equivalent to having no regulation because it fails to address today's actual risks while building frameworks around unmeasurable scenarios. Without addressing current safety failures, there's no foundation for handling future risks, as safety systems build incrementally upon existing safeguards.

Statistics & Facts

  1. More than 85% of Fortune 500 companies use the ServiceNow AI platform, demonstrating widespread enterprise AI adoption. (00:04)
  2. OpenAI and Anthropic each received $200 million Department of Defense contracts in 2024. (02:53)
  3. AI systems used in military applications can have accuracy rates as low as 20%, with optimistic estimates reaching only 60-80%, compared to the 99% minimum typically required for safety-critical systems like nuclear power plants. (27:31)

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

In Good Company with Nicolai Tangen
January 14, 2026

Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity

In Good Company with Nicolai Tangen
Uncensored CMO
January 14, 2026

Rory Sutherland on why luck beats logic in marketing

Uncensored CMO
We Study Billionaires - The Investor’s Podcast Network
January 14, 2026

BTC257: Bitcoin Mastermind Q1 2026 w/ Jeff Ross, Joe Carlasare, and American HODL (Bitcoin Podcast)

We Study Billionaires - The Investor’s Podcast Network
This Week in Startups
January 13, 2026

How to Make Billions from Exposing Fraud | E2234

This Week in Startups
Swipe to navigate