The Stack Overflow Podcast • October 23, 2025

What leaders need to know from the 2025 Stack Overflow Developer Survey

Eira May and Natalie Rotnov discuss key findings from Stack Overflow's 2025 Developer Survey, exploring how declining AI trust, persistent tool sprawl, and the need for human validation should shape enterprise leaders' strategies around AI adoption, RAG systems, and internal knowledge management.
Topics: AI & Machine Learning, Developer Culture, B2B SaaS Business
Speakers: Eira May, Natalie Rotnov
Mentioned: Google, Anthropic, Cursor

Summary Sections

  • Podcast Summary
  • Speakers
  • Key Takeaways
  • Statistics & Facts
  • Compelling Stories (Premium)
  • Thought-Provoking Quotes (Premium)
  • Strategies & Frameworks (Premium)
  • Similar Strategies (Plus)
  • Additional Context (Premium)
  • Key Takeaways Table (Plus)
  • Critical Analysis (Plus)
  • Books & Articles Mentioned (Plus)
  • Products, Tools & Software Mentioned (Plus)

Timestamps are as accurate as possible but may be slightly off. We encourage you to listen to the full episode for context.


Podcast Summary

In this episode of Leaders of Code, Eira May and Natalie Rotnov dive deep into the 2025 Stack Overflow Developer Survey findings, specifically focusing on what business and tech leaders need to know about their developer teams. The conversation reveals a fascinating paradox: while AI adoption continues to surge, developer trust in AI tools is actually declining. (03:40) The discussion covers three critical areas impacting enterprise decision-making: the growing skepticism around AI-generated code quality, the persistent challenge of tool sprawl (with most developers using 6-10 tools), and the rising importance of human validation and community-driven problem-solving.

  • Main Theme: Despite AI's promises to streamline development workflows, developers are becoming more discerning about AI tool reliability while still requiring robust knowledge-sharing systems and human validation processes.

Speakers

Eira May

Eira May serves as the B2B Editor at Stack Overflow, focusing on enterprise content and insights for business leaders. She hosts Leaders of Code, Stack Overflow's podcast series dedicated to exploring how tech leaders build great teams and products.

Natalie Rotnov

Natalie Rotnov is a Senior Product Marketing Manager for Stack Overflow's Enterprise Product Suite, specializing in data licensing and Stack Overflow for Teams. She brings deep expertise in helping enterprise companies leverage Stack Overflow's 60+ million Q&A pairs and knowledge-sharing model to improve developer productivity and AI application performance.

Key Takeaways

Invest in Structured Knowledge-Sharing Spaces

The survey revealed that advanced questions on Stack Overflow have doubled since 2023, indicating that AI isn't solving complex, context-dependent problems. (07:27) Natalie emphasizes that enterprises need dedicated spaces where developers can "curate and validate new problems and solutions" in a structured format with metadata and quality signals. This isn't just about having a wiki or Slack channel; it's about creating systems that capture the nuanced discussions and problem-solving approaches that AI tools currently struggle with. Companies should prioritize platforms that let developers build consensus, share perspectives, and validate solutions with proper tagging and voting mechanisms.
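
To make "structured format with metadata and quality signals" concrete, here is a minimal sketch (in Python, with hypothetical field names) of what a validated knowledge record can carry beyond free-form wiki or chat text; it is an illustration, not a Stack Overflow for Teams schema.

```python
# Illustrative sketch of a structured, validated knowledge record: the kind of
# metadata and quality signals (tags, votes, human acceptance) a plain wiki page
# or Slack thread typically lacks. Field names are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class KnowledgeEntry:
    question: str
    answer: str
    tags: list[str]                    # structured metadata for search and RAG
    votes: int = 0                     # community consensus signal
    accepted_by: str | None = None     # explicit human validation
    last_reviewed: date | None = None  # freshness signal

    def is_trusted(self, min_votes: int = 3) -> bool:
        """A simple quality gate: validated by a person and upvoted by peers."""
        return self.accepted_by is not None and self.votes >= min_votes

entry = KnowledgeEntry(
    question="How do we roll back a bad schema migration?",
    answer="Use the migrate-rollback job; never edit production tables by hand.",
    tags=["database", "runbook"],
    votes=7,
    accepted_by="dba-team",
    last_reviewed=date(2025, 9, 1),
)
print(entry.is_trusted())  # True
```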

Implement Strategic RAG Systems for Developer Search

With 36% of professional developers actively learning about Retrieval Augmented Generation (RAG), and "searching for answers" being the most widely adopted AI use case in development workflows, companies need to double down on RAG implementations. (11:12) The key is ensuring your RAG system summarizes well-structured internal knowledge sources with helpful metadata to avoid hallucinations. This directly addresses the top developer frustration: AI solutions that are "almost right, but not quite." Successful RAG implementation requires curated, tagged, and validated internal content that provides the contextual richness AI tools need to generate accurate responses.
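
As a rough illustration of the retrieve-then-generate pattern described here, the sketch below ranks curated, human-validated documents and assembles a grounded prompt. The keyword-overlap scoring is a stand-in for a real embedding search, and the document set, field names, and final LLM call are hypothetical.

```python
# Sketch of retrieval-augmented generation over curated internal knowledge.
# Naive word overlap stands in for a vector search; the LLM call is left as a stub.
from dataclasses import dataclass

@dataclass
class KnowledgeDoc:
    title: str
    body: str
    tags: list[str]
    validated: bool  # human quality signal carried into the prompt

DOCS = [
    KnowledgeDoc("Deploying the billing service",
                 "Use the blue/green pipeline; never deploy on Fridays.",
                 ["deployment", "billing"], validated=True),
    KnowledgeDoc("Local dev database setup",
                 "Run docker compose up db, then apply migrations.",
                 ["onboarding", "database"], validated=True),
]

def retrieve(query: str, k: int = 2) -> list[KnowledgeDoc]:
    """Rank docs by naive word overlap with the query (stand-in for embeddings)."""
    words = set(query.lower().split())
    scored = sorted(
        DOCS,
        key=lambda d: len(words & set((d.title + " " + d.body).lower().split())),
        reverse=True,
    )
    return [d for d in scored[:k] if d.validated]  # ground only on vetted content

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; metadata (tags, validation) travels with the text."""
    context = "\n\n".join(
        f"[{doc.title}] (tags: {', '.join(doc.tags)}; validated)\n{doc.body}"
        for doc in retrieve(query)
    )
    return f"Answer using ONLY the context below.\n\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("How do I deploy the billing service?"))
    # In production, the assembled prompt would be sent to your LLM of choice here.
```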

Pilot Low-Risk Agentic Use Cases Systematically

While only 48% of developers are actively using AI agents, those who do report significant benefits—70% say agents reduce time on specific tasks and 69% report increased productivity. (19:15) Rather than rushing into complex agentic workflows, Natalie recommends starting with "low risk agentic use cases first and rolling these out iteratively." Consider piloting with newer developers or interns on contained projects. This approach acknowledges that reasoning models powering agentic systems are still immature, while allowing organizations to capture value where agents can deliver immediate impact without compromising critical workflows.
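
One way to picture a "low-risk" pilot is a hard allowlist of read-only tools, so the agent cannot touch deployments or permissions even if its plan is wrong. The toy sketch below assumes hypothetical tool names and hard-codes the steps a planner model would normally choose.

```python
# Toy sketch of a low-risk agent pilot: the agent may only call tools on an
# explicit read-only allowlist. Tool names and the planning step are hypothetical.

def read_build_status(job: str) -> str:
    return f"Build {job}: passing"

def read_open_tickets(team: str) -> str:
    return f"{team} has 4 open tickets"

LOW_RISK_TOOLS = {
    "read_build_status": read_build_status,
    "read_open_tickets": read_open_tickets,
    # Deliberately absent: deploy_service, delete_branch, modify_permissions...
}

def run_step(tool_name: str, argument: str) -> str:
    """Execute one agent-chosen step, refusing anything off the allowlist."""
    tool = LOW_RISK_TOOLS.get(tool_name)
    if tool is None:
        return f"Refused: '{tool_name}' is not an approved low-risk tool."
    return tool(argument)

if __name__ == "__main__":
    # In a real pilot an LLM planner would pick these steps; here they are hard-coded.
    print(run_step("read_build_status", "payments-service"))
    print(run_step("deploy_service", "payments-service"))  # blocked by design
```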

Leverage MCP Servers for Organizational Context

Model Context Protocol (MCP) servers are having a significant moment, offering a standardized way for AI tools to learn implicit organizational knowledge—the language, culture, and ways of working unique to your company. (20:25) Natalie explains that MCP servers can help AI agents understand context from comments and discussions, which represents "a gold mine for information and context for LLMs." For example, Stack Overflow for Teams' MCP server provides read-write access and can be integrated with tools like Cursor or Gemini, immediately grounding AI outputs in vetted organizational truth. Companies should either build MCP servers in-house or evaluate existing options for their current tool stack.
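
For illustration, a minimal MCP server might look like the sketch below, written with the Python `mcp` SDK's FastMCP helper. The `search_team_knowledge` tool and its data are hypothetical; this shows the shape of the protocol, not the actual Stack Overflow for Teams MCP server.

```python
# Hypothetical, minimal MCP server exposing team knowledge to an AI client.
# Assumes the `mcp` Python SDK (pip install mcp); tool name and data are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-knowledge")

# Stand-in for an internal, human-validated knowledge base.
VALIDATED_ANSWERS = {
    "How do we rotate the staging API keys?":
        "Use the rotate-keys runbook in the platform repo.",
}

@mcp.tool()
def search_team_knowledge(query: str) -> str:
    """Return the closest validated internal answer for a developer question."""
    for question, answer in VALIDATED_ANSWERS.items():
        if query.lower() in question.lower() or question.lower() in query.lower():
            return f"Q: {question}\nA: {answer}"
    return "No validated internal answer found; consider asking your team."

if __name__ == "__main__":
    mcp.run()  # Serves over stdio so an MCP-capable client can connect.
```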

Focus on Small Language Models for Domain-Specific Tasks

With the explosion of agentic AI, small language models (SLMs) fine-tuned for specific domains are gaining popularity because they're more cost-effective, environmentally friendly, and often more accurate for specialized tasks. (25:05) This is particularly relevant for companies in regulated industries like healthcare or finance where tasks require deep domain expertise. Rather than relying solely on general-purpose large language models, organizations should consider pre-trained domain-specific SLMs or building their own using proprietary internal data augmented with relevant third-party datasets. The key is ensuring this training data is "well structured and vetted by humans" to maintain accuracy and reliability.
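
As a sketch of what "well structured and vetted by humans" can mean for training data, the snippet below keeps only accepted, upvoted internal answers and writes them out as prompt/completion pairs, a common (but here assumed) layout for fine-tuning a small domain model. Field names are illustrative.

```python
# Sketch: turn human-vetted internal Q&A into an instruction-tuning dataset
# for a small, domain-specific language model. Field names are illustrative.
import json

internal_qa = [
    {"question": "Which retry policy do we use for the payments API?",
     "answer": "Exponential backoff, max 5 attempts, per the payments runbook.",
     "accepted": True, "votes": 12},
    {"question": "Can I log raw card numbers for debugging?",
     "answer": "Never; use the tokenized reference instead.",
     "accepted": True, "votes": 3},
    {"question": "Is the legacy billing cron still running?",
     "answer": "Not sure, maybe?", "accepted": False, "votes": 0},
]

def is_vetted(item: dict, min_votes: int = 2) -> bool:
    """Keep only answers a human has accepted and peers have upvoted."""
    return item["accepted"] and item["votes"] >= min_votes

with open("slm_train.jsonl", "w") as f:
    for item in filter(is_vetted, internal_qa):
        # Prompt/completion pairs are one common fine-tuning layout for SLMs.
        f.write(json.dumps({"prompt": item["question"],
                            "completion": item["answer"]}) + "\n")
```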

Statistics & Facts

  1. Over 80% of developers still visit Stack Overflow regularly, and 75% want to ask another person for help when they don't trust AI's answers, demonstrating the continued need for human validation in the AI era. (07:01)
  2. The number of "advanced questions" on Stack Overflow has doubled since 2023, indicating that AI tools are not yet capable of handling complex, context-dependent programming problems that require community intervention. (07:27)
  3. Over a third of developers use 6-10 tools in their workflow, with no correlation between tool count and job satisfaction, suggesting that tool sprawl is an accepted reality rather than a productivity killer. (17:24)

Compelling Stories

Available with a Premium subscription

Thought-Provoking Quotes

Available with a Premium subscription

Strategies & Frameworks

Available with a Premium subscription

Similar Strategies

Available with a Plus subscription

Additional Context

Available with a Premium subscription

Key Takeaways Table

Available with a Plus subscription

Critical Analysis

Available with a Plus subscription

Books & Articles Mentioned

Available with a Plus subscription

Products, Tools & Software Mentioned

Available with a Plus subscription
