
Timestamps are as accurate as they can be but may be slightly off. We encourage you to listen to the full episode for complete context.
In this NVIDIA AI podcast episode, host Noah Kravitz explores how agentic AI is reshaping pharmaceutical workflows with IQVIA executives Raja Shankar and Abhinav Broy. (00:40) The conversation reveals how IQVIA processes data from over 1 billion non-identified patient records across 100+ countries to transform healthcare outcomes through intelligent automation. The discussion spans from clinical trial acceleration to commercial drug launches, highlighting the evolution from traditional machine learning to sophisticated multi-agent systems. (03:36) Key themes include breaking down data silos, accelerating drug development timelines, and ultimately serving patients better through AI-powered insights and workflow automation.
• Main focus: How agentic AI transforms pharmaceutical R&D and commercialization workflows to get life-saving drugs to patients faster and more effectively
Raja serves as Vice President of Machine Learning at IQVIA, where he spearheads the application of artificial intelligence to transform research and development workflows in the life sciences industry. His expertise lies in developing AI solutions that accelerate clinical research and drug development processes, particularly focusing on clinical trial automation and simulation.
Abhinav is Vice President of Commercial Analytics Solutions at IQVIA, focusing on how AI can revolutionize pharmaceutical commercialization strategy. He brings extensive experience in leveraging advanced analytics and machine learning to optimize brand outreach and market access in healthcare, ensuring drugs reach the right patients at the right time.
The most critical advice for organizations adopting agentic AI is to begin with a specific business challenge rather than searching for ways to deploy new technology. (23:40) Abhinav emphasizes avoiding the "hammer looking for a nail" approach by first understanding how the AI use case aligns with strategic goals such as reducing time-to-market or increasing HCP engagement. This problem-first methodology ensures that AI implementations deliver measurable value rather than becoming expensive experiments. Companies should establish clear KPIs, such as faster product launches, lower cost-per-acquisition on marketing campaigns, or enhanced patient engagement, before selecting AI solutions.
Successful agentic AI adoption requires a disciplined approach to piloting that includes quick decision-making gates and clear success criteria. (24:14) Organizations struggle when they extend pilots indefinitely without making go/no-go decisions, creating a cycle of proof-of-concepts that never reach operational scale. The key is running focused pilots with predetermined metrics and timelines, then making rapid decisions about scaling or pivoting. This approach prevents the common trap of perpetual testing phases that drain resources without delivering business value.
Rather than spending years building perfect data lakes, companies should focus on ensuring their existing data is accessible, compliant, and well-documented for AI applications. (24:44) Abhinav notes that 80% of AI project time typically involves data preparation, but this shouldn't delay implementation. The focus should be on having compliant data sources, proper access controls, sufficient metadata for model training, and clear process documentation that can guide agent behavior. This pragmatic approach allows organizations to begin generating value while incrementally improving their data infrastructure.
The hardest part of agentic AI implementation is the "last mile": moving from successful pilots to full operational deployment. (25:37) This requires thinking beyond technical scalability to include organizational readiness, change management strategies, and workforce transformation. Companies need to prepare to hire people who will work alongside agents and to develop new operational processes. This kind of transformational change affects entire organizations rather than isolated departments, and adoption takes a village rather than a single team.
Traditional performance metrics often fail when evaluating agentic AI because there's frequently no "gold standard" for comparison in manual processes. (26:11) Raja points out that different people performing the same manual task often produce different results, making it difficult to establish accuracy benchmarks. Organizations should focus on measuring agent performance against clearly defined outputs rather than trying to replicate inconsistent human performance. This shift requires developing new metrics that capture the unique value agents provide, such as consistency, speed, and the ability to process larger datasets than humans.