Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full context.
Alex Rattray, CEO of Stainless (the company that builds SDKs for API platforms like OpenAI and Anthropic), shares his vision for the future of AI integration with the internet through the Model Context Protocol (MCP). The conversation explores why current MCP implementations struggle with scalability and context limitations, and reveals Stainless's solution: replacing dozens of specialized tools with just two, a code execution tool and a documentation search tool. (34:30) Alex predicts "the future of AI is cyborgs": systems that combine LLM intelligence with traditional code execution for maximum efficiency and minimal context usage.
Alex Rattray is the founder and CEO of Stainless, the company that builds SDKs and API tooling for major tech companies including OpenAI, Anthropic, and Stripe. Before founding Stainless, he worked at Stripe developing its API infrastructure. He's known for his unconventional approach to problem-solving and his vision for how AI will interact with the internet through improved protocols and tools.
Dan Shipper is the host and a technology entrepreneur focused on AI applications and business automation. He's an investor in Stainless and runs multiple AI-powered products, giving him practical experience with the challenges and opportunities in AI integration.
Traditional MCP implementations struggle because they try to expose every possible API endpoint as individual tools, quickly overwhelming the model's context window. (15:21) As Alex explains, translating something like the entire Stripe API into MCP tools could consume "hundreds of thousands of tokens" just in tool definitions, leaving little room for actual conversation. The solution isn't more tools—it's smarter tool design that works within context constraints while maintaining full API functionality.
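The context math can be sketched in a few lines. This is a hypothetical back-of-the-envelope estimate, not real Stripe or Stainless numbers: the endpoint count, per-definition token cost, and tool names are all illustrative.

```typescript
// Illustrative sketch: context cost of exposing every API endpoint as its
// own MCP tool versus a two-tool design. All numbers are rough assumptions.

interface ToolDef {
  name: string;
  description: string;
  inputSchema: object;
}

// Suppose a large API surface has ~300 endpoints, each needing a definition.
const endpointCount = 300;
const tokensPerToolDef = 500; // name + description + JSON schema, rough guess

const perEndpointCost = endpointCount * tokensPerToolDef;

// The two-tool alternative: one code-execution tool, one docs-search tool.
const twoTools: ToolDef[] = [
  {
    name: "execute_code",
    description: "Run a TypeScript snippet against the API's SDK",
    inputSchema: { type: "object", properties: { code: { type: "string" } } },
  },
  {
    name: "search_docs",
    description: "Search the API documentation for relevant endpoints",
    inputSchema: { type: "object", properties: { query: { type: "string" } } },
  },
];
const twoToolCost = twoTools.length * tokensPerToolDef;

console.log(`per-endpoint tools: ~${perEndpointCost} tokens`); // ~150000
console.log(`two-tool design:   ~${twoToolCost} tokens`);      // ~1000
```

Under these assumptions the per-endpoint approach burns roughly 150x more of the context window on definitions alone, before the conversation even starts.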
The future of AI-API interaction lies in giving models just two tools: one for executing TypeScript/Python code and another for searching documentation. (35:56) This approach leverages what LLMs do best—writing code—while dramatically reducing context usage. Instead of having 50 different tools for different API endpoints, models write code like `stripe.customers.retrieve()` and execute it directly, returning only relevant results rather than massive data dumps.
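A minimal sketch of that code-execution tool might look like the following. The `stripe` object here is a stub standing in for a real generated SDK, and the customer data is invented; the point is only that the model-written snippet runs outside the context window and returns just the field it needs.

```typescript
// Sketch of a single code-execution tool, with a stubbed SDK client in place
// of a real Stripe SDK. The fields and values below are invented.

type Customer = { id: string; email: string; plan: string };

// Stub standing in for a generated SDK like stripe-node.
const stripe = {
  customers: {
    retrieve: async (id: string): Promise<Customer> => ({
      id,
      email: "jane@example.com",
      plan: "pro",
    }),
  },
};

// The one tool: run a model-written snippet with the SDK in scope and
// return only the snippet's return value to the conversation.
async function executeCode(snippet: string): Promise<unknown> {
  const fn = new Function("stripe", `return (async () => { ${snippet} })()`);
  return fn(stripe);
}

async function main() {
  // The model asks for exactly the field it needs, not the whole object.
  const result = await executeCode(
    `const c = await stripe.customers.retrieve("cus_123"); return c.plan;`
  );
  console.log(result); // "pro"
}

main();
```

Only the string `"pro"` re-enters the model's context; the full customer record, however large, stays on the execution side.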
Alex uses Claude Code to automatically collect and organize business intelligence in a Git repository, including customer quotes, SQL queries, and analysis results. (28:13) This creates a persistent knowledge base that doesn't require re-querying MCP servers for previously discovered insights. The practice of having AI collect notes "for the AI by the AI" while human-curating the results creates a powerful feedback loop for business intelligence.
Current security approaches that limit MCP tool exposure are fundamentally flawed because they create artificial constraints while the underlying API remains fully accessible. (41:13) True security requires implementing OAuth with granular permissions and proper scopes at the API level itself. This ensures that AI agents operate within the same security boundaries as human users, rather than relying on MCP-level restrictions that can be easily bypassed.
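Enforcing permissions at the API layer rather than the MCP layer can be sketched as a scope check on every request. The scope names and token shape below are invented for illustration, not a real OAuth implementation.

```typescript
// Hypothetical sketch of scope enforcement at the API layer, so an agent's
// token carries the same granular permissions a human user's token would.
// Scope names are invented for illustration.

interface AccessToken {
  subject: string;
  scopes: Set<string>;
}

// Every API handler checks the token itself; nothing depends on which
// tools an MCP server happened to expose.
function authorize(token: AccessToken, requiredScope: string): void {
  if (!token.scopes.has(requiredScope)) {
    throw new Error(`missing scope: ${requiredScope}`);
  }
}

// An agent token granted read-only access to customer data.
const agentToken: AccessToken = {
  subject: "agent:claude",
  scopes: new Set(["customers:read"]),
};

authorize(agentToken, "customers:read"); // allowed
try {
  authorize(agentToken, "customers:write"); // rejected by the API itself
} catch (e) {
  console.log((e as Error).message); // "missing scope: customers:write"
}
```

Because the check lives in the API, the boundary holds no matter how the agent reaches the endpoint, through MCP, generated code, or raw HTTP.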
The natural progression of AI assistance follows the same pattern as traditional software development: tasks performed once, twice, then three times eventually get automated. (45:00) AI code execution environments will enable this evolution by allowing useful one-off scripts to be easily converted into persistent workflows. This bridges the gap between exploratory chat interfaces and structured dashboards, letting teams gradually formalize successful AI interactions into reusable business processes.