PodMine
Big Technology Podcast • September 24, 2025

Is Generative AI a Cybersecurity Disaster Waiting to Happen? — With Yinon Costica

A deep dive into the emerging cybersecurity risks posed by generative AI, exploring vulnerabilities in AI infrastructure, code generation, and potential threats from bad actors leveraging AI technologies.
AI & Machine Learning
Tech Policy & Ethics
Developer Culture
Alex Kantrowitz
Yinon Costica
Google
NVIDIA
DeepSeek

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.

Podcast Summary

This episode features Yinon Costica, co-founder of Wiz (recently acquired by Google for $32 billion), discussing emerging cybersecurity threats in the age of generative AI. The conversation explores how AI's rapid adoption creates new vulnerabilities across three critical layers: the foundational AI technologies themselves, the cloud infrastructure supporting AI applications, and the code generated by AI tools. (01:00)

  • Key themes include the fundamental shift in attack surfaces as AI becomes ubiquitous, the attacker-defender asymmetry being amplified by AI automation, and the critical need for proactive security measures rather than reactive detection in an AI-driven world.

Speakers

Alex Kantrowitz

Host of Big Technology Podcast and a technology journalist and author who covers the intersection of technology and society. He regularly interviews leading figures in the tech industry.

Yinon Costica

Co-founder and VP of Product at Wiz, the cloud security company recently acquired by Google for $32 billion. He is an expert in cybersecurity with extensive experience protecting cloud infrastructure and AI applications. Costica has been instrumental in developing proactive security solutions that help organizations identify and remediate vulnerabilities before they can be exploited by threat actors.

Key Takeaways

AI Infrastructure Security Requires Fundamental Best Practices

While AI represents new technology, it fundamentally relies on traditional infrastructure components that can be compromised through well-established attack vectors. Organizations building AI applications often expose sensitive training datasets in cloud storage buckets, use overly permissive identities, and misconfigure virtual machines or containers. (06:58) The majority of AI-related security incidents stem from basic infrastructure misconfigurations rather than sophisticated AI-specific attacks. This means the security fundamentals of patching vulnerabilities, securing configurations, and managing identities remain as critical as ever when deploying AI systems.
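
The episode describes these misconfigurations in prose rather than code. As an illustration only, here is a minimal sketch using boto3 (the AWS SDK for Python) that flags storage buckets lacking a full public-access block, one of the basic checks this takeaway points to; the account setup is an assumption, not anything shown in the episode.

```python
# Illustrative sketch (not from the episode): flag S3 buckets that lack
# a full public-access block, the kind of basic misconfiguration that
# can expose AI training datasets. Assumes boto3 and AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())  # all four block settings enabled
    except ClientError as err:
        # No configuration at all means nothing is blocked.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"Review bucket '{name}': public access is not fully blocked")
```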

AI-Generated Code Demands Human Ownership and Accountability

The rise of "vibe coding," where developers generate entire applications through AI prompts, creates a dangerous disconnect between developers and their code. (11:04) When vulnerabilities or reliability issues arise, developers who didn't write the code themselves often lack the deep understanding needed to quickly diagnose and fix problems. Costica emphasizes that AI should accelerate development but cannot remove the fundamental responsibility of developers to understand and maintain their applications. Organizations must establish clear ownership models and ensure human oversight remains in the development lifecycle.

Proactive Risk Reduction Trumps Detection in an AI-Automated World

AI amplifies the fundamental attacker-defender asymmetry in cybersecurity. While threat actors can use AI to automate and scale their attacks with high false positive tolerance, defenders cannot afford the same luxury without being overwhelmed by noise. (21:08) The solution lies in shifting focus from detection to prevention - proactively patching vulnerabilities and fixing misconfigurations before attacks occur. This approach reduces the overall attack surface and minimizes the noise that could otherwise overwhelm security teams trying to distinguish real threats from false alarms.
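
As a concrete, entirely hypothetical illustration of prevention-first prioritization, the sketch below ranks open findings so that internet-exposed and actively exploited issues get fixed first; the field names and weights are invented for this example and do not come from the episode or from Wiz.

```python
# Hypothetical illustration: rank open findings for proactive fixing
# instead of triaging a stream of detection alerts. Fields and weights
# are invented; real scanners and risk models differ.
from dataclasses import dataclass

@dataclass
class Finding:
    resource: str
    severity: int          # 1 (low) .. 4 (critical)
    internet_exposed: bool
    known_exploited: bool  # e.g. listed in a known-exploited catalog

def risk_score(f: Finding) -> int:
    # Exposure and active exploitation outweigh raw severity: fixing
    # those findings first shrinks the attack surface the most.
    return f.severity + (4 if f.internet_exposed else 0) + (8 if f.known_exploited else 0)

findings = [
    Finding("training-data-bucket", severity=3, internet_exposed=True, known_exploited=False),
    Finding("inference-vm", severity=4, internet_exposed=True, known_exploited=True),
    Finding("internal-ci-runner", severity=4, internet_exposed=False, known_exploited=False),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):>2}  {f.resource}")
```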

Security Instruction Must Be Embedded in AI Development Workflows

AI code generators can produce secure code, but only when explicitly instructed to do so through proper prompting and guidelines. (10:02) Just as developers need to specify functional requirements, they must also provide security requirements like least privilege access and proper data handling. Wiz has developed rule sets that can be fed into AI code generators to guide them toward building more secure applications. The future may include agent-based security reviews where AI systems specifically trained on security best practices automatically audit generated code.
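
Wiz's actual rule sets are not quoted in the episode. Under that caveat, the sketch below shows the general pattern: security requirements are prepended as a system message to every code-generation request, here using the OpenAI Python client with a placeholder model name and illustrative rules. The same rules could equally live in an editor assistant's configuration; the point is that security expectations are stated explicitly rather than left implicit.

```python
# Generic sketch of the pattern (not Wiz's actual rule set): prepend
# security requirements to every code-generation request so the model
# is explicitly instructed to produce secure code. Assumes the OpenAI
# Python client; rules and model name are placeholders.
from openai import OpenAI

SECURITY_RULES = """When generating code, always:
- Grant the least privilege necessary (no wildcard permissions).
- Parameterize all database queries; never interpolate user input.
- Read secrets from the environment or a secrets manager, never literals.
- Validate and bound all external input before use."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_code(task: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SECURITY_RULES},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(generate_code("Write a Flask endpoint that looks up a user by id."))
```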

Rapid Technology Adoption Requires Immediate Security Scrutiny

The DeepSeek incident illustrates how quickly new AI technologies can achieve widespread adoption - nearly 10% of organizations adopted DeepSeek within a single week of its release. (42:12) This rapid adoption creates security risks when organizations fail to perform proper due diligence on new technologies. Companies must develop processes to quickly assess the security posture of new AI tools while enabling business agility. This includes understanding data handling practices, scrutinizing infrastructure security, and establishing governance frameworks for emerging technologies.

Statistics & Facts

  1. At the Pwn2Own security research competition, six AI technologies were presented, and four were found to have critical remote code execution vulnerabilities - the highest impact type of security flaw. (02:22)
  2. Nearly 10% of organizations adopted DeepSeek within just one week of its release, demonstrating the unprecedented speed of AI technology adoption in enterprises. (42:12)
  3. According to Costica, the majority of AI-related security incidents occur at the infrastructure layer through traditional cloud attack techniques, rather than sophisticated AI-specific exploits. (08:56)

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

  • Joseph Nguyen (Tetragrammaton with Rick Rubin, January 14, 2026)
  • How To Stay Calm Under Stress | Dan Harris (Finding Mastery with Dr. Michael Gervais, January 14, 2026)
  • From the Archive: Sara Blakely on Fear, Failure, and the First Big Win (The James Altucher Show, January 14, 2026)
  • Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity (In Good Company with Nicolai Tangen, January 14, 2026)