Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the episode for full context.
In this episode, Ryan Donovan interviews Kayvon Beykpour, CEO and founder of Macroscope, about AI-powered code understanding and review. (02:38) Beykpour shares his journey from founding Periscope (acquired by Twitter) to leading large engineering teams at Twitter, where he experienced firsthand the challenge of understanding what 1,500 engineers were working on. (02:59) The conversation explores Macroscope's approach to solving visibility problems in large codebases through AI-powered summaries, automated PR descriptions, and intelligent code review. (04:06) Beykpour explains their technical strategy of leveraging abstract syntax trees (ASTs) to give LLMs comprehensive context, resulting in more accurate and useful code analysis than simple diff-based approaches.
• Main Theme: Building AI systems that understand codebases at scale, from individual commits to executive-level project summaries, while maintaining high signal-to-noise ratios in automated code review.

Kayvon Beykpour is the CEO and founder of Macroscope, an AI-powered code review and understanding platform. (01:54) He previously co-founded Periscope, the live-streaming app acquired by Twitter in 2015, where he went on to serve as head of product and, eventually, head of engineering for the consumer team, overseeing more than 1,500 engineers. (02:57) That experience with large-scale engineering teams directly inspired the creation of Macroscope to solve visibility and understanding challenges in massive codebases.
Ryan Donovan is the host of the Stack Overflow podcast and editor of the Stack Overflow blog. He conducts in-depth interviews with technology leaders and innovators, focusing on software engineering trends, developer tools, and the future of technology.
Macroscope began with the most basic unit of code change, summarizing individual commits, before building up to more complex project-level insights. (06:27) Beykpour explains their methodical approach: "we started with the simplest possible thing, which is could we just summarize commits as they happened?" This foundation allowed them to validate their approach before tackling more complex aggregation and analysis challenges. The lesson for any technical leader is that complex AI systems should be built incrementally, proving value at each level before adding complexity.
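To make that concrete, a commit summarizer really can start this small. The sketch below is a hypothetical minimal version in Python, not Macroscope's code; it assumes the `openai` client library, an `OPENAI_API_KEY` in the environment, and a local git checkout:

```python
# Hypothetical minimal commit summarizer. Illustrative only,
# not Macroscope's implementation.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_commit(sha: str) -> str:
    # Pull the commit message and full diff from the local repository.
    diff = subprocess.run(
        ["git", "show", sha],
        capture_output=True, text=True, check=True,
    ).stdout
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this commit in two sentences for a changelog."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

print(summarize_commit("HEAD"))
```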
Rather than simply sending code diffs to language models, Macroscope leverages abstract syntax trees to provide comprehensive context about function callers, references, and usage patterns. (10:23) Beykpour emphasizes: "by supplying the LLM with all of those things, the diff, the references... it allows the LLM to have a more coherent and robust summary." This approach transforms generic AI outputs into insights that engineers describe as "better than I could have written myself," highlighting the importance of thoughtful context engineering over raw LLM capabilities.
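Macroscope has not published its pipeline, but the general technique can be sketched. The toy Python example below uses the standard `ast` module to index call sites across a repository and attach them to a diff before prompting; every name in it is illustrative:

```python
# Toy context builder: surface the callers of changed functions so an
# LLM sees references, not just the raw diff. Illustrative only.
import ast
from collections import defaultdict
from pathlib import Path

def index_call_sites(repo_root: str) -> dict[str, list[str]]:
    """Map each called function name to its 'file:line' call sites."""
    calls = defaultdict(list)
    for path in Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(), filename=str(path))
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                calls[node.func.id].append(f"{path}:{node.lineno}")
    return calls

def build_llm_context(diff: str, changed_functions: list[str], repo_root: str) -> str:
    """Combine the raw diff with reference info for every changed function."""
    calls = index_call_sites(repo_root)
    lines = [diff, "", "References to changed functions:"]
    for name in changed_functions:
        for site in calls.get(name, []):
            lines.append(f"- {name} is called at {site}")
    return "\n".join(lines)
```

Feeding the model those call sites is what lets a summary describe how a change ripples outward through the codebase, rather than just restating the diff.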
The primary reason teams abandon AI code review tools is excessive false positives and verbose, unhelpful feedback. (11:51) Beykpour notes: "if you get enough false positives spammed into your PRs as review comments, it's just more of a tax than it is useful." Successful AI tools must maintain extremely high hit rates for relevant findings while avoiding noise. This principle applies beyond code review to any AI system integrated into developer workflows: the cost of false positives often outweighs the benefits of true positives.
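One common way to protect that signal-to-noise ratio is to score candidate findings and post only the strongest few. The gate below is an illustrative pattern, not a description of Macroscope's scoring; the confidence field, threshold, and cap are all hypothetical:

```python
# Illustrative precision gate for AI review findings. The confidence
# field, threshold, and cap are hypothetical, not Macroscope's.
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    confidence: float  # model-estimated probability the finding is real

def select_comments(findings: list[Finding],
                    threshold: float = 0.9,
                    max_comments: int = 5) -> list[Finding]:
    """Keep only high-confidence findings, capped so PRs aren't spammed."""
    strong = [f for f in findings if f.confidence >= threshold]
    # Prefer the most confident findings when over the cap.
    strong.sort(key=lambda f: f.confidence, reverse=True)
    return strong[:max_comments]
```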
Beykpour argues that humans shouldn't spend time reviewing PRs for bugs, as AI systems can identify correctness issues better, faster, and cheaper than humans. (24:47) He envisions a future where "humans should not be spending time reviewing PRs for bugs... we ought to be able to delegate that as soon as possible." However, humans remain essential for cultural code review aspects like education, convention enforcement, and architectural decisions. This represents a fundamental shift in how engineering teams should allocate human expertise.
Macroscope's "macros" feature allows teams to create custom prompts that run at scheduled intervals, enabling personalized automation workflows. (19:55) Beykpour describes how they generate weekly release notes with specific formatting requirements: "I've tuned that prompt to... be at a certain reading level, include emojis, not include esoteric technical changes." This approach provides a template for building flexible AI systems that adapt to organizational needs rather than forcing teams to adapt to rigid AI outputs.
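The shape of such a feature can be approximated with a scheduled prompt runner. The sketch below uses the third-party `schedule` package, and every helper in it is a hypothetical placeholder rather than Macroscope's API:

```python
# Hypothetical "macro" runner: a saved, tuned prompt executed on a
# schedule. Every name below is illustrative, not Macroscope's API.
import time
import schedule

RELEASE_NOTES_PROMPT = (
    "Summarize this week's merged changes as release notes. "
    "Keep the reading level accessible, include emojis, and "
    "leave out esoteric technical changes."
)

def collect_merged_changes() -> str:
    # Placeholder: in practice, query the Git host's API for the
    # week's merged pull requests.
    return "merged change A\nmerged change B"

def summarize_with_llm(prompt: str, context: str) -> str:
    # Placeholder: in practice, send the prompt plus context to an LLM.
    return f"[LLM summary of {len(context)} characters of changes]"

def run_release_notes_macro() -> None:
    notes = summarize_with_llm(RELEASE_NOTES_PROMPT, collect_merged_changes())
    print(notes)  # a real macro might post this to Slack or email instead

schedule.every().friday.at("16:00").do(run_release_notes_macro)

while True:
    schedule.run_pending()
    time.sleep(60)
```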