
Timestamps are approximate and may be slightly off. We encourage you to listen to the full episode for context.
In this Stack Overflow podcast episode, host Ryan Donovan sits down with Dan Ciruli, VP and General Manager of Cloud Native at Nutanix, to explore the evolving landscape of enterprise infrastructure. They dive deep into why virtual machines (VMs) remain critical in enterprise environments despite the rise of containerized applications, and how organizations can successfully integrate both technologies. (02:28)
Ryan Donovan is the host of the Stack Overflow podcast and editor of the Stack Overflow blog. He specializes in covering software development trends, technology discussions, and interviewing industry experts about their experiences and insights.
Dan Ciruli is the VP and General Manager of Cloud Native at Nutanix, with over 30 years of experience in enterprise technology. He spent seven years at Google during the early development of cloud-native technologies, was a founding member of the OpenAPI initiative, served on the Istio steering committee, and worked on gRPC protocol development. (00:53)
Dan emphasizes that the primary reason to adopt containers isn't operational benefits like security or scalability; it's that they enable developers to ship software faster and innovate more rapidly. (02:28) He uses Google's early-2000s dominance over AltaVista, MapQuest, and Hotmail as an example, explaining that Google's ability to revolutionize search, maps, and email simultaneously was largely due to its early adoption of containers, which allowed it to move much faster than competitors. The key insight is that when developers can push smaller, more frequent increments of code, organizations can innovate at unprecedented speeds.
Rather than waiting for legacy systems to be rewritten, successful enterprises recognize that millions of applications written over the past 30 years will likely never be containerized. (05:13) Dan points out that major banks have tens of thousands of applications running in VMs with no plans to eliminate them, just as mainframes haven't disappeared despite decades of predictions. The practical approach is to build infrastructure that allows new containerized applications to communicate seamlessly with existing VM-based systems, rather than creating costly silos that treat them as separate environments.
While it might seem counterintuitive, running Kubernetes within VMs actually solves several major enterprise challenges. (10:36) This approach enables consistent networking policies across both container and VM workloads, eliminates hardware silos that create resource waste, and provides the flexibility to auto-scale clusters without manual hardware provisioning. Dan explains that most hyperscaler-managed Kubernetes already runs in VMs for these exact reasons, and enterprises can achieve the same benefits on-premises.
The reason Kubernetes adoption lags on-premises compared to cloud environments isn't technical; it's operational complexity. (22:19) Hyperscalers made it possible to spin up secure, automatically updated Kubernetes clusters with a button click, while on-premises implementations required extensive care and feeding. Organizations need to focus on creating the same level of operational simplicity on-premises, including automated patching, security management, and developer tooling like container registries and CI/CD pipelines.
One of the most promising applications of AI in enterprise environments is translating legacy code written in languages like COBOL or Pascal into modern implementations. (19:13) Dan suggests a systematic approach: first use AI to generate comprehensive test cases from the original code to establish behavioral baselines, then use AI to translate the code itself. This approach provides the guardrails necessary for safe modernization and represents a practical application of AI that could finally enable organizations to decommission decades-old mainframe systems.
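The two-phase workflow Dan describes can be sketched in a few lines of Python. This is a minimal illustration, not a real tool: `llm_complete` is a hypothetical stand-in for whatever LLM API an organization uses, and the prompts are placeholder assumptions showing the shape of each phase.

```python
# Hedged sketch of the two-phase legacy-modernization workflow described
# above: (1) have an LLM derive behavioral test cases from the legacy
# source, (2) have it translate the source, then run the tests against
# the translation as a guardrail. `llm_complete` is a hypothetical
# placeholder, not a real library call.

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to an LLM service (assumption, not a real API)."""
    raise NotImplementedError("wire this to your LLM provider")

def generate_tests(legacy_source: str) -> str:
    """Phase 1: derive test cases capturing the original code's behavior."""
    return llm_complete(
        "Read this legacy program and write executable test cases that "
        "capture its observable behavior (inputs and expected outputs):\n"
        + legacy_source
    )

def translate(legacy_source: str, target_language: str = "Python") -> str:
    """Phase 2: translate the legacy code into a modern language."""
    return llm_complete(
        f"Translate this program into idiomatic {target_language}, "
        "preserving its behavior exactly:\n" + legacy_source
    )

def modernize(legacy_source: str) -> tuple[str, str]:
    """Run both phases; the generated tests gate the translation."""
    tests = generate_tests(legacy_source)
    modern = translate(legacy_source)
    # In practice you would now execute `tests` against `modern` and
    # iterate on the translation until the baseline tests pass.
    return modern, tests
```

The ordering is the point: establishing a behavioral baseline before any translation happens is what makes the modernization safe to automate.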