
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the episode for full context.
The Model Context Protocol (MCP) has evolved from a local-only experiment into the de facto standard for agentic systems in just one year. (01:18) This podcast episode features David Soria Parra (MCP lead at Anthropic), Nick Cooper (OpenAI), Brad Howes (Block/Goose), and Jim Zemlin (Linux Foundation CEO) discussing MCP's journey from Thanksgiving hacking sessions to enterprise adoption at scale. (01:00) The conversation covers the technical evolution through four spec releases, authentication challenges, enterprise learnings, and the formation of the Agentic AI Foundation (AAIF) under the Linux Foundation. The episode explores how three competitive AI labs came together to create a neutral foundation, the explosive internal adoption at enterprises, and the vision for MCP as the communication layer for asynchronous agents. (01:03:31)
Co-creator and lead maintainer of the Model Context Protocol (MCP) at Anthropic. David is a Member of Technical Staff who leads all MCP efforts at Anthropic and serves on the technical steering committee of the newly formed Agentic AI Foundation.
Head of protocol initiatives at OpenAI with over two years at the company. Nick leads OpenAI's involvement in open ecosystem protocols and serves as their representative for the Agentic AI Foundation, focusing on agent integrations and product experiences.
Principal engineer at Block and original author of Goose, an open-source coding agent. Brad builds AI products by day and contributes to open source by night, making him one of the first non-Anthropic contributors to MCP.
CEO of the Linux Foundation for 22 years and the facilitator of the Agentic AI Foundation's launch. Zemlin has extensive experience in open source governance and says that in his tenure he has never seen the level of day-one inbound interest that AAIF generated.
The March 2025 MCP spec required the MCP server to act as both the authorization server and the resource server, which proved unusable for enterprises. (09:04) In enterprise environments, employees authenticate through central identity providers (like Okta or Google), not against individual services. David explains that combining these roles meant "you just can't do this anymore" in corporate settings. The June spec correction separated the two, treating the MCP server as a resource server distinct from the authorization server and enabling proper enterprise integration. This separation lets companies keep their existing identity infrastructure while adopting MCP at scale.
Practical Example: An enterprise can now integrate MCP servers with their existing Okta setup, where employees authenticate once in the morning and access all work tools seamlessly.
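A minimal sketch of what that separation looks like in practice, assuming a Node.js MCP server and an Okta-hosted authorization server (both hypothetical names): the MCP server only serves resources, publishes protected resource metadata (RFC 9728) naming the external authorization server, and answers unauthenticated requests with a 401 challenge pointing at that metadata rather than issuing tokens itself.

```typescript
// Minimal sketch of the post-separation design: the MCP server is only a
// resource server; authentication is delegated to the company's existing
// identity provider (e.g. Okta) acting as the OAuth authorization server.
// Host names and the token check are placeholders.

import { createServer } from "node:http";

const RESOURCE = "https://mcp.example-corp.com";                     // hypothetical MCP server
const AUTH_SERVER = "https://example-corp.okta.com/oauth2/default";  // hypothetical IdP

// Placeholder check: a real server would verify the JWT signature or call the
// identity provider's token introspection endpoint.
function isValidAccessToken(token: string): boolean {
  return token.length > 0;
}

const server = createServer((req, res) => {
  // 1. Advertise which authorization server protects this resource server.
  if (req.url === "/.well-known/oauth-protected-resource") {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({
      resource: RESOURCE,
      authorization_servers: [AUTH_SERVER],
    }));
    return;
  }

  // 2. Requests without a valid token get a 401 pointing the client at the
  //    metadata above, so it can run a normal OAuth flow against the IdP.
  const token = req.headers.authorization?.replace(/^Bearer\s+/i, "");
  if (!token || !isValidAccessToken(token)) {
    res.writeHead(401, {
      "WWW-Authenticate":
        `Bearer resource_metadata="${RESOURCE}/.well-known/oauth-protected-resource"`,
    });
    res.end();
    return;
  }

  // 3. Token is valid: handle the MCP JSON-RPC request as usual (omitted here).
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ jsonrpc: "2.0", id: null, result: {} }));
});

server.listen(3000);
```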
Rather than dumping all available tools into the model's context window, MCP enables progressive discovery, where models incrementally learn about the tools available to them. (10:12) David notes this prevents the "tool bloat" problem, where models get confused by having "five tools that look very similar to each other." The approach leverages models' ability to make intelligent decisions about what information they need next. It works with any model capable of tool calling, though model training can further optimize the behavior.
Practical Example: Instead of showing all Linear API endpoints at once, a model first discovers basic project management tools, then requests specific issue management capabilities as needed.
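MCP does not prescribe a single discovery mechanism, so the sketch below is just one way a client might approximate progressive discovery: keep a cheap one-line summary of every tool in context and expose a hypothetical discover_tools meta-tool that pulls in full definitions on demand. All names and types here are illustrative, not part of the spec.

```typescript
// Hypothetical client-side pattern for progressive tool discovery: instead of
// injecting every tool definition from every connected MCP server into the
// model's context, the client surfaces full definitions only when the model
// asks for them via a discover_tools-style meta-tool.

interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema, as in MCP tool listings
}

class ProgressiveToolRegistry {
  // Full catalogue fetched once from each server's tools/list response.
  constructor(private allTools: ToolDefinition[]) {}

  // Cheap, always-in-context summary: names and one-line descriptions only.
  summary(): string {
    return this.allTools
      .map((t) => `${t.name}: ${t.description.split(".")[0]}`)
      .join("\n");
  }

  // Called when the model invokes the meta-tool; returns the handful of full
  // definitions that match, which are then added to the context window.
  discover(query: string, limit = 5): ToolDefinition[] {
    const q = query.toLowerCase();
    return this.allTools
      .filter((t) =>
        t.name.toLowerCase().includes(q) ||
        t.description.toLowerCase().includes(q))
      .slice(0, limit);
  }
}

// Usage: the model first sees only summary() plus the meta-tool, then e.g.
// discover("issue") pulls in the specific issue-management tools it needs.
const registry = new ProgressiveToolRegistry([
  { name: "linear_create_issue", description: "Create a Linear issue.", inputSchema: {} },
  { name: "linear_list_projects", description: "List Linear projects.", inputSchema: {} },
]);
console.log(registry.discover("issue"));
```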
MCP's new Tasks primitive addresses the fundamental limitation of synchronous tool calls for long-running operations. (34:27) David explains that people were "awkwardly trying to do this with tools" for operations that might take hours or days. Tasks provide a container for asynchronous operations with intermediate results, enabling deep research and agent handoffs. Unlike simple async tools, tasks can report progress along the way and carry multi-step workflows, supporting the infrastructure needed for agents that work while you sleep.
Practical Example: An agent can initiate a comprehensive market research task that runs for hours, periodically updating with findings, and finally delivering a complete analysis without requiring constant human supervision.
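The Tasks primitive is still new, so the exact wire format may differ from what ships; the sketch below only illustrates the pattern from the client's side, using hypothetical tasks/create and tasks/get methods and a stubbed transport: start the work, poll for intermediate results, and collect the final output long after the original request returned.

```typescript
// Hypothetical shape of a long-running task from the client's perspective.
// Method names (tasks/create, tasks/get) and fields are illustrative stand-ins
// for whatever the finalized Tasks primitive specifies; the point is the
// pattern: start work, poll for intermediate results, and fetch the final
// output later instead of holding a synchronous tool call open for hours.

type TaskStatus = "working" | "completed" | "failed";

interface TaskSnapshot {
  taskId: string;
  status: TaskStatus;
  intermediateResults: string[]; // e.g. partial research findings
  finalResult?: string;
}

// Placeholder transport: a real client would send a JSON-RPC request over its
// MCP connection. Here it simulates a task that completes immediately.
async function rpc(method: string, _params: unknown): Promise<any> {
  if (method === "tasks/create") return { taskId: "task-1" };
  return { taskId: "task-1", status: "completed", intermediateResults: [], finalResult: "done" };
}

async function runResearchTask(topic: string): Promise<string> {
  // 1. Start the task; the server returns an identifier immediately.
  const { taskId } = await rpc("tasks/create", {
    tool: "deep_market_research", // hypothetical long-running tool
    arguments: { topic },
  });

  // 2. Poll until the task finishes, surfacing intermediate findings as we go.
  while (true) {
    const snapshot: TaskSnapshot = await rpc("tasks/get", { taskId });
    snapshot.intermediateResults.forEach((f) => console.log(`[progress] ${f}`));
    if (snapshot.status === "completed") return snapshot.finalResult ?? "";
    if (snapshot.status === "failed") throw new Error(`Task ${taskId} failed`);
    await new Promise((resolve) => setTimeout(resolve, 60_000)); // check back each minute
  }
}

runResearchTask("competitor pricing research").then(console.log);
```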
The most significant MCP adoption is happening invisibly within enterprises, connecting agents to internal systems like Slack, Linear, and proprietary databases. (27:00) David reveals that he probably doesn't even know about "90% of the MCP servers that are at Anthropic" because teams build them independently. Companies in financial services and healthcare are deploying MCP servers for compliance-heavy workflows, taking advantage of MCP's authentication and security features. This internal adoption pattern explains why MCP growth appears "way faster than you would think."
Practical Example: A financial services company deploys MCP servers to connect their trading systems with compliance databases, ensuring all agent interactions meet regulatory attribution requirements.
Moving MCP to the Agentic AI Foundation ensures the protocol remains permanently open and neutral, preventing any single company from making it proprietary. (01:03:31) Jim Zemlin explains this addresses historical concerns about protocols becoming controlled by individual vendors, citing HDMI as an example where proprietary control limits innovation. The foundation structure separates technical governance from business governance, allowing elite technical contributors to make decisions without pay-to-play influence. This neutrality enables competitive companies like Anthropic, OpenAI, and Microsoft to collaborate on shared infrastructure.
Practical Example: Developers can confidently build MCP-based businesses knowing the protocol won't be suddenly controlled or restricted by any single AI company.