Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for context.
In this episode, Ryan Donovan sits down with David Hsu, CEO and founder of Retool, to explore the transformative impact of AI on software development. They discuss how AI is shifting the role of software engineers from hands-on coders to architects who build guardrails for non-technical developers. Hsu argues that while AI makes coding more accessible through voice coding and AI agents, production-ready software still requires sophisticated safeguards and higher-level programming primitives. The conversation delves into how internal tools represent the majority of software development and why current AI coding solutions are too low-level and dangerous for non-engineers. (47:00)
Ryan Donovan is a host and editor at Stack Overflow, where he manages the blog and podcast content. He covers software development trends and technology insights and interviews industry leaders about the evolving landscape of programming and software engineering.
David Hsu is the CEO and founder of Retool, an enterprise AI AppGen platform for internal software development that allows users to create apps, agents, and workflows with any LLM, data source, or API. He has previously appeared on the Stack Overflow podcast to discuss the challenges of maintaining internal tools and is active in the developer community, where he discusses the intersection of AI and software development.
The era of software engineers typing code manually is ending, especially for internal tools. (08:53) Instead, engineers will focus on building guardrails and higher-level primitives that enable non-developers to safely create software. This shift represents a fundamental change from hands-on coding to architectural oversight, where engineers define secure boundaries and reusable components rather than implementing every feature themselves.
While products like Cursor and Claude are excellent for engineers, voice coding platforms that output raw JavaScript are dangerous for non-technical users. (06:03) Hsu compares this to asking engineers to work with assembly code - technically possible but risky and difficult to debug. Non-engineers using these tools are like developers trying to read assembly: unable to recognize a dangerous operation like a "DROP TABLE" statement that could destroy a database.
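As a rough illustration of the guardrails discussed above, the TypeScript sketch below checks generated SQL against a deny-list before it ever reaches a database. The function name and rules are hypothetical, not part of Retool or any specific product.

// Hypothetical guardrail: reject destructive SQL produced by an AI agent or a
// non-technical builder before it reaches the database.
const FORBIDDEN_STATEMENTS = /\b(DROP\s+TABLE|TRUNCATE|DELETE\s+FROM(?!.*\bWHERE\b))\b/i;

function assertSafeSql(sql: string): void {
  if (FORBIDDEN_STATEMENTS.test(sql)) {
    throw new Error(`Blocked potentially destructive statement: ${sql}`);
  }
}

// Generated queries pass through the guardrail, never straight to the database.
assertSafeSql("SELECT name, email FROM users WHERE id = 42"); // passes
// assertSafeSql("DROP TABLE users");                         // would throw

A real system would need far more than a regular expression (parameterized queries, allow-listed operations, audit logs), but the point stands: the safety check lives in the platform, not in the generated code.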
Traditional software security relies on trusting developers to write secure code, but this approach breaks down with AI-generated code and non-technical developers. (15:09) Hsu's proposed solution is semantic data tagging and access controls at the infrastructure level, ensuring that sensitive information like SSNs cannot be exposed regardless of how applications are built. This creates "secure by default" applications where guardrails prevent data breaches even when developers make mistakes.
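A minimal sketch of what such infrastructure-level tagging might look like, assuming a simple field-level schema: sensitive columns are labeled once, and every app built on top receives redacted values unless the caller is explicitly cleared. The schema and helper names here are illustrative, not Retool's API.

// Hypothetical semantic tagging: sensitive columns are labeled at the data
// layer, and redaction is enforced for every app built on top of it.
type Tag = "pii" | "public";

interface FieldSpec { tag: Tag }

const userSchema: Record<string, FieldSpec> = {
  name:  { tag: "public" },
  email: { tag: "pii" },
  ssn:   { tag: "pii" },
};

function redactRow(
  row: Record<string, string>,
  canViewPii: boolean,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [field, value] of Object.entries(row)) {
    const spec = userSchema[field];
    // Unknown or PII-tagged fields are hidden unless the caller is cleared.
    out[field] = !spec || (spec.tag === "pii" && !canViewPii) ? "[REDACTED]" : value;
  }
  return out;
}

// An app built by a non-engineer gets redacted data by default.
console.log(redactRow({ name: "Ada", ssn: "123-45-6789" }, false));
// => { name: "Ada", ssn: "[REDACTED]" }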
While LLMs may not produce Nobel Prize-winning creative work, they perform exceptionally well at focused tasks with the right inputs and context. (23:13) Hsu uses the example of recording and transcribing meetings, then querying them with an LLM for insights. When provided with specific documents, meeting transcripts, and contextual information, AI can often match or exceed human performance, especially when speed is factored in - achieving 80% quality in minutes versus 100% quality in days.
The success of tools like Cursor demonstrates the power of agentic AI - systems that can reason, act, and use tools autonomously rather than requiring constant copy-pasting. (27:08) The same approach can be applied to all knowledge work by connecting AI agents to business systems like Google Docs, Salesforce, and Service Desk. Retool predicts this could automate 10% of US labor by 2030, a revolution across all professional roles similar to the one already underway in software development.
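For readers unfamiliar with the pattern, here is a heavily simplified sketch of an agentic loop: a runtime repeatedly asks a planner which tool to call, executes it, and feeds the result back until the plan is exhausted. The tool names are invented for illustration, and the planner function is a stand-in for a real LLM call.

// Minimal agent loop sketch: the model proposes an action, the runtime
// executes the matching tool, and the result is fed back until done.
type ToolCall = { tool: string; input: string };

const tools: Record<string, (input: string) => string> = {
  searchDocs: (q) => `docs matching "${q}"`,
  fileTicket: (summary) => `ticket created: ${summary}`,
};

// Stand-in for an LLM call; a real agent would ask the model what to do next.
function plan(step: number): ToolCall | null {
  const script: ToolCall[] = [
    { tool: "searchDocs", input: "refund policy" },
    { tool: "fileTicket", input: "customer refund request" },
  ];
  return script[step] ?? null;
}

let step = 0;
let action = plan(step);
while (action !== null) {
  const result = tools[action.tool](action.input);
  console.log(`${action.tool} -> ${result}`);
  action = plan(++step);
}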