The QCon San Francisco 2025 conference (November 17–21) is fast approaching, and the excitement is building. As a member of the program committee, I’ve had a front-row seat to the thoughtful process of curating tracks, selecting hosts, and shaping a cohesive schedule for the three-day event. It’s been inspiring to see how each track tackles the real challenges software leaders and practitioners face today.
With AI now influencing nearly every aspect of software development, it is no surprise that AI-related topics appear across many of the 15 curated tracks. As someone working in ML infrastructure and deeply passionate about AI, my personal top picks below (in no particular order) naturally lean toward sessions exploring this fast-moving and transformative space.
- “Accelerating LLM-Driven Developer Productivity at Zoox” by Amit Navindgi @Zoox. A practical blueprint with tangible ideas, design patterns, and organizational strategies for scaling an organization’s AI capabilities.
- “Engineering at AI Speed: Lessons from the First Agentically Accelerated Software Project” by Adam Wolff @Anthropic. An exploration of the architectural decisions in Claude Code that prioritize speed over complexity.
- “Deep Research for Enterprise: Unlocking Actionable Intelligence from Complex Enterprise Data with Agentic AI” by Vinaya Polamreddi @Glean. A case study about building Deep Research for Enterprise, an agentic AI system that transforms vast, complex enterprise data into actionable intelligence through scalable design and advanced training methods.
- “The Future of Engineering: Mindsets That Matter When Code Isn’t Enough” by Ben Greene @Tessi. An examination of how the role of engineers is evolving in the age of AI, emphasizing the need to rethink engineering mindsets beyond coding skills to stay relevant.
- “Dynamic Moments: Weaving LLMs into Deep Personalization at DoorDash” by Sudeep Das and Pradeep Muthukrishnan @DoorDash. A deep dive into how DoorDash is redefining personalization by tightly integrating LLMs.
- “Designing Fast, Delightful UX with LLMs in Mobile Frontends” by Bala Ramdoss @Amazon. Insights on how to design fast, reliable, and engaging LLM-powered experiences in mobile apps by combining thoughtful frontend architecture with smart UX design.
- “One Platform to Serve Them All: Autoscaling Multi-Model LLM Serving” by Meryem Arik @Doubleword. A technical breakdown of how one platform can autoscale multi-model serving with shared base weights, hot-swapped adapters, dynamic loading, and smart eviction.
- “From Content to Agents: Scaling LLM Post-Training Through Real-World Applications and Simulation” by Faye Zhang @Pinterest and Andi Partov @Veris AI. A journey through AI post-training techniques that make large language models effective in real-world applications, from Pinterest’s content generation to simulation-based agent training.
- “Powering the Future: Building Your GenAI Infrastructure Stack” by Maggie Hu and Merrin Kurian @Intuit. An inside look at how Intuit’s GenOS team brings vector stores, prompt management, RAG pipelines, and agent orchestration together to serve ~100 million users.
- “AI-Driven Productivity: From Idea to Impact” by Jyothi Nookula @Stealth Startup. A pragmatic framework for turning GenAI enthusiasm into an enterprise-ready blueprint for real product productivity gains.
QCon San Francisco 2025 will be held from November 17–21 at the Hyatt Regency San Francisco. The conference brings together practitioners who share real-world insights and hard-earned lessons from solving complex problems at scale. The full schedule is available on the QCon San Francisco website.