Lenny's Podcast: Product | Career | Growth • January 11, 2026

What OpenAI and Google engineers learned deploying 50+ AI products in production

In this episode, Aishwarya Naresh Reganti and Kiriti Badam share what they've learned building successful AI products: start with low agency and high human control, develop AI systems iteratively, and focus on solving specific business problems rather than getting caught up in technological complexity.
AI & Machine Learning
Tech Policy & Ethics
Developer Culture
B2B SaaS Business
Lenny Rachitsky
Aishwarya Naresh Reganti
Kiriti Badam
OpenAI

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the episode for full context.


Podcast Summary

In this episode, AI engineers Aishwarya Naresh Reganti and Kiriti Badam share hard-earned insights from launching over 50 AI products across OpenAI, Google, Amazon, and Databricks. The conversation centers on why traditional software development approaches fail for AI products and introduces a systematic framework for building reliable AI systems. (00:36)

• Key themes: The fundamental differences between AI and traditional software development, the importance of starting small with controlled autonomy, and building continuous feedback loops for behavior calibration.

Speakers

Aishwarya Naresh Reganti

Aishwarya is an AI researcher who worked on Alexa at Amazon and at Microsoft early in her career, publishing over 35 research papers. She has led and supported AI product deployments across major companies including Amazon and Databricks, and co-teaches the top-rated AI course on Maven, focused on building successful AI products.

Kiriti Badam

Kiriti currently works on Codex at OpenAI and has spent the last decade building AI and ML infrastructure at Google and Kumo. Together with Aishwarya, he has been instrumental in developing frameworks for enterprise AI adoption and has hands-on experience with the challenges of scaling AI systems in production.

Key Takeaways

Start with High Control, Low Agency Systems

The most successful AI products begin with minimal autonomy and maximum human oversight. (13:19) For example, in customer support, start with AI suggesting responses rather than automatically sending them. This approach allows teams to understand system behavior patterns before increasing autonomy. As Kiriti explains, when you start small, "it forces you to think about what is the problem that I'm gonna solve" rather than getting lost in solution complexity.
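As a concrete illustration of this suggest-first setup, the sketch below gates the model's output behind a human review queue, and only an explicit autonomy switch lets replies go out directly. This is a hedged sketch of the pattern, not the speakers' implementation; every name in it (draft_reply, review_queue, send_to_customer) is hypothetical.

```python
# A minimal sketch of the suggest-first pattern, assuming a simple ticket
# queue. Every name here (draft_reply, review_queue, send_to_customer) is
# hypothetical, not from the episode or any specific library.

from dataclasses import dataclass
from queue import Queue

@dataclass
class Draft:
    ticket_id: str
    suggested_reply: str

review_queue: Queue = Queue()

def draft_reply(message: str) -> str:
    # Stand-in for a real model call (e.g. an LLM completion).
    return f"Thanks for reaching out about: {message[:40]}..."

def send_to_customer(ticket_id: str, reply: str) -> None:
    print(f"[SENT {ticket_id}] {reply}")

def handle_ticket(ticket_id: str, message: str, autonomy: str = "suggest") -> None:
    reply = draft_reply(message)
    if autonomy == "suggest":
        # Low agency, high control: a human approves every reply.
        review_queue.put(Draft(ticket_id, reply))
    else:
        # Higher agency: enable only once behavior is well understood.
        send_to_customer(ticket_id, reply)

handle_ticket("T-1042", "My invoice shows the wrong amount.")
print(review_queue.get())  # a human agent picks the draft up from here
```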

Build Continuous Calibration Loops

Unlike traditional software, AI systems require ongoing behavior calibration because they're inherently non-deterministic. (45:41) Successful teams establish feedback loops that capture both explicit user signals (thumbs up/down) and implicit signals (regenerating responses, switching off features). This continuous monitoring helps identify new error patterns that weren't anticipated during development.
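One lightweight way to capture both signal types is an append-only event log that later analysis can aggregate. The sketch below assumes a JSONL log and made-up signal names; it illustrates the general pattern rather than any system discussed in the episode.

```python
# A sketch of the kind of feedback loop described above, assuming an
# append-only JSONL event log. The event schema and signal names are
# assumptions for illustration, not from the episode.

import json
import time
from pathlib import Path

LOG = Path("feedback_events.jsonl")

def record_signal(session_id: str, kind: str, value: str) -> None:
    """Append one feedback event.

    kind:  "explicit" (thumbs_up, thumbs_down) or
           "implicit" (regenerated, feature_disabled, response_edited)
    """
    event = {"ts": time.time(), "session": session_id,
             "kind": kind, "value": value}
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

# Explicit signal: the user clicked thumbs-down on a response.
record_signal("s-81", "explicit", "thumbs_down")

# Implicit signal: the user regenerated the answer, a hint it missed the mark.
record_signal("s-81", "implicit", "regenerated")
```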

Leaders Must Get Hands-On with AI

Executive leadership engagement is the strongest predictor of AI adoption success. (26:31) As Aishwarya notes, the CEO of Rackspace blocks 4-6 AM daily for "catching up with AI" and has weekend coding sessions. Leaders need to rebuild their intuitions and be "comfortable with the fact that your intuitions might not be right" to guide effective AI decision-making.

Focus on Workflow Understanding Over Technology

The most successful AI implementations come from deep understanding of existing workflows rather than fascination with AI capabilities. (48:14) Enterprise data and infrastructure are messy, with complex taxonomies and undocumented rules. Teams that obsess over understanding these workflows can choose the right tool for each problem instead of defaulting to AI for everything.

Combine Evals with Production Monitoring

Neither evaluations nor production monitoring alone can catch all AI system failures. (33:39) Evals catch known error patterns you've anticipated, while production monitoring reveals emerging behaviors you couldn't predict. Successful teams use both approaches: evals for regression testing and monitoring for discovering new failure modes in real user interactions.
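The toy sketch below shows how the two checks differ in kind: evals assert on known cases before deployment, while monitoring watches live signals (such as regeneration rate) for surprises. The model stub, eval cases, and alert threshold are all assumptions for illustration.

```python
# A toy illustration of the two complementary checks: offline evals catch
# known error patterns as regression tests, while a production monitor
# flags emerging behaviors no eval anticipated. The model stub, eval
# cases, and threshold are assumptions, not from the episode.

def model(prompt: str) -> str:
    return "Our refund policy allows returns within 30 days."  # stand-in

# Offline evals: assertions over failure patterns you already know about.
EVAL_CASES = [
    ("What is the refund window?", "30 days"),
    ("How long do I have to return an item?", "30 days"),
]

def run_evals() -> bool:
    return all(expected in model(prompt) for prompt, expected in EVAL_CASES)

# Production monitoring: watch live signals for new failure modes, e.g. a
# spike in users regenerating responses (an implicit "this was wrong").
def check_regeneration_rate(rate: float, baseline: float = 0.05) -> None:
    if rate > 2 * baseline:
        print("Alert: regeneration rate doubled; review recent transcripts.")

assert run_evals(), "Known-pattern regression failed"
check_regeneration_rate(rate=0.12)
```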

Statistics & Facts

  1. Aishwarya and Kiriti have helped build and launch more than 50 enterprise AI products across companies like OpenAI, Google, Amazon, and Databricks. (01:46)
  2. According to a UC Berkeley paper by Matei Zaharia's team, 74-75% of enterprises identified reliability as their biggest problem with AI systems, preventing them from deploying customer-facing products. (21:51)
  3. Even with the best data layer and infrastructure, replacing any critical workflow with AI typically takes four to six months of work to achieve significant ROI. (31:53)

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription

More episodes like this

In Good Company with Nicolai Tangen • January 14, 2026
Figma CEO: From Idea to IPO, Design at Scale and AI’s Impact on Creativity

We Study Billionaires - The Investor’s Podcast Network • January 14, 2026
BTC257: Bitcoin Mastermind Q1 2026 w/ Jeff Ross, Joe Carlasare, and American HODL (Bitcoin Podcast)

Uncensored CMO • January 14, 2026
Rory Sutherland on why luck beats logic in marketing

This Week in Startups • January 13, 2026
How to Make Billions from Exposing Fraud | E2234