Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full episode for complete context.
This episode explores the intersection of AI and cybersecurity with two leading experts in formal methods and automated reasoning. Kathleen Fisher, director of RAND's cybersecurity initiative and incoming CEO of the UK's ARIA, and Byron Cook, Amazon VP and distinguished scientist, discuss how AI is reshaping cyber threats while simultaneously offering solutions through formal verification. (04:52) They explain how AI empowers attackers at every skill level but also enables defenders to build provably secure systems. (10:16)
Kathleen Fisher is the director of the cybersecurity initiative at RAND Corporation and will become CEO of the UK's Advanced Research and Invention Agency (ARIA) in February 2025. She previously served as director of the Information Innovation Office at DARPA, where she led the groundbreaking High-Assurance Cyber Military Systems (HACMS) project, which demonstrated that formally verified helicopter systems could resist sophisticated cyberattacks even during flight operations.
Byron Cook is vice president and distinguished scientist at Amazon, where he has led the application of formal methods to distributed systems at AWS for over a decade. His work, which includes proving the correctness of AWS's policy interpreter that processes over a billion security decisions per second, has been instrumental in maintaining AWS's strong security record even though AWS is one of the world's largest targets for cyber attackers.
AI is fundamentally changing the cybersecurity landscape by empowering attackers across the entire spectrum, from script kiddies to nation-state adversaries. (07:03) Fisher emphasizes that AI helps everyone "do better at cyber attacks" by providing assistance at every stage of the cyber kill chain. This isn't just about making existing hackers more effective; it's about enabling entirely new categories of attackers who previously lacked the technical skills to mount attacks. The scale and parallelism that AI gives attackers represent a qualitative shift in the threat landscape that traditional security approaches cannot adequately address.
Formal methods offer a fundamentally different approach to cybersecurity by providing mathematical proofs about software behavior rather than probabilistic defenses. (10:40) As Cook explains, it's "algorithmic search for proofs" that can reason about infinite possibilities in finite time. Unlike traditional security testing that can only check specific cases, formal verification can prove that certain classes of vulnerabilities simply cannot exist in properly verified code. This moves security from a game of whack-a-mole to establishing permanent mathematical guarantees.
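To make "algorithmic search for proofs" concrete, here is a minimal sketch using the open-source Z3 SMT solver's Python bindings (the property, variable names, and choice of solver are illustrative assumptions, not anything from the episode). Instead of testing sample inputs, the solver establishes the property for every one of the 2^64 possible input pairs at once:

```python
# A minimal sketch of "algorithmic search for proofs" using the Z3 SMT
# solver (pip install z3-solver). Instead of testing individual cases,
# Z3 proves the property for ALL 2^64 combinations of x and n at once.
from z3 import BitVec, URem, ULT, Implies, prove

x = BitVec('x', 32)  # an arbitrary 32-bit index
n = BitVec('n', 32)  # an arbitrary 32-bit buffer length

# Claim: clamping an index with unsigned modulo always yields a valid
# offset, so this class of out-of-bounds access cannot occur.
prove(Implies(n != 0, ULT(URem(x, n), n)))  # prints "proved"
```

A test suite could only ever sample a vanishingly small fraction of those inputs; the proof rules out the entire class of out-of-bounds accesses in finite time.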
The most difficult aspect of formal methods isn't the mathematical proving; it's defining what you actually want to prove. (49:49) Cook reveals he has "spent a lot of time in shuttle buses between buildings trying to get agreement amongst teams on did we get the spec right." Even simple concepts like "all data at rest is encrypted" require extensive refinement to define what constitutes "encryption," "data," and "at rest." This specification challenge has historically limited formal methods adoption, but AI is now helping bridge this gap by assisting in translating natural language policies into formal specifications.
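As an illustrative sketch (the sort and predicate names below are invented for this example, not drawn from AWS or the episode), the English policy can be written as a short quantified formula. Notice that every hard shuttle-bus question hides inside the three definitions rather than in the logic itself:

```python
# Illustrative only: encoding the English policy "all data at rest is
# encrypted" as a quantified formula with Z3's Python bindings. Every
# hard question lives in the predicates: does a replication buffer
# count as "at rest"? Does a deprecated cipher count as "encrypted"?
from z3 import DeclareSort, Function, BoolSort, ForAll, Const, Implies

Obj = DeclareSort('StorageObject')                  # what counts as "data"?
at_rest = Function('at_rest', Obj, BoolSort())      # what counts as "at rest"?
encrypted = Function('encrypted', Obj, BoolSort())  # which ciphers qualify?

o = Const('o', Obj)
spec = ForAll([o], Implies(at_rest(o), encrypted(o)))
```

Each round of spec refinement is, in effect, an edit to one of these three definitions; the quantified shell stays the same.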
Generative AI is revolutionizing formal methods by making proof generation dramatically more accessible, potentially enabling a society-wide software rewrite. (1:13:23) Fisher explains that AI coding models can be trained to generate not just code, but secure code with formal guarantees. The combination creates a virtuous cycle: AI helps generate proofs and secure code, which in turn becomes training data for even better AI systems. This suggests we could reach "superhuman levels of code security" within the next generation or two of language models, potentially resolving decades of accumulated technical debt.
AWS's Automated Reasoning Checks product demonstrates how formal methods can be applied to AI agent governance by translating natural language policies into formal specifications. (50:34) The system helps organizations iterate on policy formalization, then checks AI outputs against those policies with up to 99% verification accuracy. This approach tackles the "last mile" problem of AI democratizing access to information: making AI responses not just fast and cheap, but actually correct and trustworthy for critical decisions.
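In miniature, the checking step might look like the following hypothetical sketch, written against the Z3 solver rather than the actual Automated Reasoning Checks API (the policy, variable names, and workflow are assumptions for illustration): a formalized policy plus the facts claimed in a model's answer either remain satisfiable together or contradict one another.

```python
# Hypothetical sketch of policy-checking an AI answer -- NOT the AWS
# Automated Reasoning Checks API. A formalized policy plus the facts
# claimed in a model's response either stay satisfiable (consistent)
# or become unsatisfiable (the answer contradicts the policy).
from z3 import Int, Bool, Solver, Implies, And, sat

tenure_months = Int('tenure_months')
eligible = Bool('eligible_for_remote_work')

# Formalized policy: remote-work eligibility requires >= 12 months of tenure.
policy = Implies(eligible, tenure_months >= 12)

# Facts extracted from an AI response claiming a 6-month employee qualifies.
answer = And(tenure_months == 6, eligible)

s = Solver()
s.add(policy, answer)
print("consistent with policy" if s.check() == sat else "policy violation detected")
```

The design point is that the expensive human work happens once, when the policy is formalized; each individual AI output can then be checked mechanically and at scale.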