
Timestamps are approximate and may be slightly off; we encourage you to listen to the full episode for context.
This episode features Yinon Costica, co-founder of Wiz (recently acquired by Google for $32 billion), discussing the emerging cybersecurity threats in the age of generative AI. The conversation explores how AI's rapid adoption creates new vulnerabilities across three critical layers: the foundational AI technologies themselves, the cloud infrastructure supporting AI applications, and the code generated by AI tools. (01:00)
Host of Big Technology Podcast, technology journalist and author who covers the intersection of technology and society. He regularly interviews leading figures in the tech industry.
Co-founder and VP of Product at Wiz, the cloud security company recently acquired by Google for $32 billion. He is an expert in cybersecurity and has extensive experience in protecting cloud infrastructure and AI applications. Costica has been instrumental in developing proactive security solutions that help organizations identify and remediate vulnerabilities before they can be exploited by threat actors.
While AI represents new technology, it fundamentally relies on traditional infrastructure components that can be compromised through well-established attack vectors. Organizations building AI applications often expose sensitive training datasets in cloud storage buckets, use overly permissive identities, and misconfigure virtual machines or containers. (06:58) The majority of AI-related security incidents stem from basic infrastructure misconfigurations rather than sophisticated AI-specific attacks. This means the security fundamentals of patching vulnerabilities, securing configurations, and managing identities remain as critical as ever when deploying AI systems.
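To make the "overly permissive identities" point concrete, here is a minimal sketch of what an infrastructure check for that misconfiguration looks like. The function and the sample policy are hypothetical illustrations (not a Wiz product feature): it flags IAM-style policy statements that allow every action on every resource.

```python
# Minimal sketch, assuming an AWS IAM-style JSON policy document.
# Hypothetical helper for illustration; real scanners check far more patterns.

def find_permissive_statements(policy: dict) -> list[dict]:
    """Return Allow statements that grant every action on every resource."""
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # The policy grammar allows a single string or a list of strings.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            findings.append(stmt)
    return findings

# Example: a policy attached to an AI training job's identity.
risky_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # too broad
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::training-data/*"},  # scoped, acceptable
    ],
}

print(len(find_permissive_statements(risky_policy)))  # → 1
```

The same least-privilege review applies to the storage buckets and containers mentioned above; the fundamentals do not change just because the workload is AI.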
The rise of "vibe coding" - where developers generate entire applications through AI prompts - creates a dangerous disconnect between developers and their code. (11:04) When vulnerabilities or reliability issues arise, developers who didn't write the code themselves often lack the deep understanding needed to quickly diagnose and fix problems. Costica emphasizes that AI should accelerate development but cannot remove the fundamental responsibility of developers to understand and maintain their applications. Organizations must establish clear ownership models and ensure human oversight remains in the development lifecycle.
AI amplifies the fundamental attacker-defender asymmetry in cybersecurity. While threat actors can use AI to automate and scale their attacks with high false positive tolerance, defenders cannot afford the same luxury without being overwhelmed by noise. (21:08) The solution lies in shifting focus from detection to prevention - proactively patching vulnerabilities and fixing misconfigurations before attacks occur. This approach reduces the overall attack surface and minimizes the noise that could otherwise overwhelm security teams trying to distinguish real threats from false alarms.
AI code generators can produce secure code, but only when explicitly instructed to do so through proper prompting and guidelines. (10:02) Just as developers need to specify functional requirements, they must also provide security requirements like least privilege access and proper data handling. Wiz has developed rule sets that can be fed into AI code generators to guide them toward building more secure applications. The future may include agent-based security reviews where AI systems specifically trained on security best practices automatically audit generated code.
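The rule-set idea described above can be sketched simply: prepend explicit security requirements to every code-generation prompt, just as one would specify functional requirements. The rules and helper below are illustrative assumptions, not Wiz's actual rule set.

```python
# Illustrative sketch: feeding a security rule set into an AI code generator
# by embedding it in the prompt. Rules and function names are hypothetical.

SECURITY_RULES = [
    "Grant the least privilege required; never request wildcard permissions.",
    "Parameterize all database queries; never interpolate user input into SQL.",
    "Load secrets from environment variables or a secrets manager, never from source code.",
]

def build_prompt(task: str, rules: list[str] = SECURITY_RULES) -> str:
    """Combine the functional request with explicit security requirements."""
    rule_lines = "\n".join(f"- {rule}" for rule in rules)
    return f"{task}\n\nFollow these security requirements:\n{rule_lines}"

prompt = build_prompt("Write a Flask endpoint that looks up a user by email.")
print(prompt)
```

An agent-based review, as mentioned above, would sit on the other side of the loop: a second model audits the generated code against the same rule set before it is merged.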
The DeepSeek incident illustrates how quickly new AI technologies can achieve widespread adoption - nearly 10% of organizations adopted DeepSeek within a single week of its release. (42:12) This rapid adoption creates security risks when organizations fail to perform proper due diligence on new technologies. Companies must develop processes to quickly assess the security posture of new AI tools while enabling business agility. This includes understanding data handling practices, scrutinizing infrastructure security, and establishing governance frameworks for emerging technologies.