
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
Malte Ubl, CTO of Vercel, shares insights from the company's Ship AI conference, exploring Vercel's approach to AI-powered development infrastructure. The conversation covers the new Workflow Development Kit, which brings durable-execution patterns to serverless functions, letting developers write code that can pause, resume, and wait indefinitely at no cost. (02:30) Ubl also discusses the company's "dogfooding" philosophy of never shipping abstractions it hasn't battle-tested itself, which led to extracting the AI SDK from v0 and to building production agents for anomaly detection and lead qualification. (01:57)
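The durable-execution pattern described above can be sketched as a step-memoizing runner: each step's result is persisted, so re-running the workflow after a crash or a long pause skips completed steps instead of redoing (and re-billing) them. This is an illustrative sketch, not the Workflow Development Kit's actual API; the `step` helper, the in-memory store, and the `qualifyLead` workflow are all assumptions.

```typescript
// Illustrative sketch of durable execution via step memoization.
// NOT the Workflow Development Kit API — all names here are assumptions.

type Store = Map<string, unknown>;

// Persist each step's result under a key; on replay, return the saved
// value instead of re-executing, so the workflow resumes cheaply.
async function step<T>(store: Store, key: string, fn: () => Promise<T>): Promise<T> {
  if (store.has(key)) return store.get(key) as T; // replay: skip completed work
  const result = await fn();
  store.set(key, result); // in production this would be durable storage
  return result;
}

// A toy lead-qualification workflow built from durable steps.
async function qualifyLead(store: Store, email: string): Promise<string> {
  const enriched = await step(store, "enrich", async () => `${email}:enriched`);
  const score = await step(store, "score", async () => enriched.length);
  return score > 10 ? "qualified" : "unqualified";
}
```

Re-invoking `qualifyLead` with the same store after an interruption replays instantly from the persisted step results, which is what makes indefinite waits effectively free.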
Malte Ubl is the CTO of Vercel and a former Google engineer who worked on foundational web technologies, including Search, AMP, and Google's internal Wiz framework. He joined Vercel nearly four years ago, before the ChatGPT era, and has been instrumental in transforming the company into an AI-first development platform while maintaining its core framework and infrastructure competencies.
Ubl advocates identifying successful agent use cases by asking employees "what do you hate most about your job?" (26:55) This approach uncovers problems that are tedious and repetitive but have never been automated because they require mini-judgments that, until now, only humans could make. These problems often represent substantial portions of people's jobs and carry high business impact, making them perfect candidates for AI automation while remaining manageable for current-generation agents.
Vercel's fundamental principle is dogfooding: it extracts abstractions only from tools it has built and used internally. (17:05) The AI SDK was extracted from v0, and Vercel continuously rebuilds its own tools on the abstractions it ships to ensure real-world viability. This approach provides constant feedback loops and a high hit rate, because framework builders who aren't also application builders tend to create ivory towers that don't hold up in practice.
In the rapidly evolving AI application space, Ubl emphasizes the importance of restraint in creating thick abstractions. (11:24) Unlike mature spaces such as web frameworks, where requirements are well understood, AI applications are still taking shape. By staying low-level, the AI SDK remained flexible enough to transition from chatbots to agents without requiring rewrites, while competing libraries that led with agent abstractions became limiting.
Vercel's DevOps agent demonstrates how AI can solve the classic recall-precision problem in monitoring systems. (21:37) Traditional anomaly detection requires tuning a trade-off between false positives (waking people unnecessarily) and false negatives (missing real issues). An AI agent can instead be tuned aggressively to investigate every anomaly, taking the time to analyze time series, logs, and IP addresses before deciding whether to escalate to a human, effectively acting as a tireless coworker.
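The two-stage setup described above can be sketched as a pipeline: a deliberately over-sensitive detector (tuned for recall) flags every deviation, and an agent-style investigation stage filters the flags before anyone is paged (restoring precision). The thresholds and the `investigate` heuristic here are assumptions standing in for the agent's real analysis of logs and time series.

```typescript
// Sketch of high-recall detection plus agent-style triage.
// Thresholds and the investigate() heuristic are illustrative assumptions.

interface Metric {
  name: string;
  value: number;
  baseline: number;
}

// Stage 1: flag anything even slightly off baseline — tuned for recall,
// so this stage alone would wake people up constantly.
function detect(metrics: Metric[]): Metric[] {
  return metrics.filter((m) => Math.abs(m.value - m.baseline) / m.baseline > 0.1);
}

// Stage 2: stand-in for the agent's investigation (time series, logs, IPs).
// Only deviations that survive a deeper check escalate to a human.
function investigate(m: Metric): boolean {
  return Math.abs(m.value - m.baseline) / m.baseline > 0.5; // a real agent digs far deeper
}

// Returns the names of metrics that should page a human.
function triage(metrics: Metric[]): string[] {
  return detect(metrics).filter(investigate).map((m) => m.name);
}
```

The key property is that the cost of investigating a false positive falls on the tireless agent, not on an on-call engineer, so the detector no longer has to trade recall for precision.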
As AI enables more non-engineers to contribute code, security models must evolve to assume developers "cannot be trusted." (40:06) Ubl describes building systems where authentication and data access controls are extracted from applications entirely, creating minimum security guarantees independent of app quality. This represents a fundamental shift from current trust-based development models to AI-native infrastructure that protects against incompetent implementations.
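One way to picture extracting access control from the application is a gateway wrapper that enforces permissions before any app code runs, so the guarantee holds even when the handler itself is careless. This is a minimal sketch of that idea, not Vercel's implementation; the `Session` shape, scope names, and `withAccessControl` helper are assumptions.

```typescript
// Sketch of the "developers cannot be trusted" model: access control
// lives in a layer outside the app, so a minimum security guarantee
// holds regardless of handler quality. All names are illustrative.

interface Session {
  userId: string;
  scopes: Set<string>;
}

type Handler = (session: Session) => string;

// The gateway enforces the scope check; application code never gets
// the opportunity to skip it.
function withAccessControl(requiredScope: string, handler: Handler): Handler {
  return (session) => {
    if (!session.scopes.has(requiredScope)) return "403 Forbidden";
    return handler(session);
  };
}

// An app-level handler that (incorrectly) performs no checks of its own —
// the kind of code the infrastructure must assume it will receive.
const leakyHandler: Handler = (session) => `billing data for ${session.userId}`;

const route = withAccessControl("billing:read", leakyHandler);
```

Because the check is applied at the infrastructure boundary rather than inside the handler, an AI-generated or novice-written handler can be wrong without ever exceeding the caller's permissions.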