Microsoft at NVIDIA GTC 2026
Microsoft returns to NVIDIA GTC 2026 in San Jose with a strong presence across conference sessions, in-booth theater talks, live demos, and executive-level ancillary events. Together with NVIDIA and our partner ecosystem, Microsoft is showcasing how Azure AI infrastructure enables AI training, inference, and production at global scale. Visit us at Booth #521 to see the latest innovations in action and connect with Azure and NVIDIA experts.

Exclusive GTC Experiences

- LEGO® Datacenter Model: Explore Azure AI infrastructure at the Park Container.
- Candy Lounge: Visit the high-traffic candy wall for co-branded treats all day long.
- Networking Lounge: Relax and recharge with comfy seating and vital charging options.
- Outdoor Juice Truck: Free, refreshing beverages served during outdoor park hours.

Sponsored Breakout Sessions

Reinventing Semiconductor Design with Microsoft Discovery
S82398 · Mon, Mar 16 · 4:00 PM
Prashant Varshney, Microsoft · Semiconductor & AI Engineering
Abstract: Semiconductor teams face exploding design complexity and shrinking verification windows. This session shows how the Microsoft Discovery AI for Science platform, combined with Synopsys Agent Engineers, introduces an agentic approach to EDA that automates routine steps and accelerates expert decision-making on Azure.

Operationalizing Agentic AI at Hyperscale
S82399 · Tue, Mar 17 · 1:00 PM
Nitin Nagarkatte, Microsoft · Azure AI Infrastructure
Anand Raman, Microsoft · Azure AI
Vipul Modi, Microsoft · AI Systems
Abstract: As enterprises move to agentic systems, the challenge shifts to operating intelligent agents reliably at scale. This session demonstrates how Microsoft builds AI Factories on Azure using NVIDIA technology and explores Microsoft Foundry as the control plane for deploying and operating coordinated AI agents.
Live from GTC: AI Podcast

A conversation with Microsoft Azure, featuring Dayan Rodriguez, Corporate Vice President, Global Manufacturing and Mobility, and Alistair Spiers, General Manager, Azure Infrastructure. Listen & Subscribe: aka.ms/GTC2026Podcast

Earned Conference Sessions

Don't miss these high-impact sessions where Microsoft and NVIDIA leaders discuss the future of AI factories and infrastructure.

Mon, Mar 16 · 5:00 PM · Drive Optimal Tokens per Watt on AI Infrastructure Using Benchmarking Recipes — Paul Edwards, Emily Potyraj (Microsoft, NVIDIA)
Tue, Mar 17 · 9:00 AM · Autonomous AI Factories: Technical Preview of Agent-Native Production — JP Vasseur, César Martinez Spessot (NVIDIA, Microsoft Research)
Tue, Mar 17 · 4:00 PM · The Road to Intelligent Mobility: Vehicle GenAI — Raj Paul, Thomas Evans, Bryan Goodman (Microsoft, NVIDIA, Bosch)
Wed, Mar 18 · 9:00 AM · Supercharging AI with Multi-Gigawatt AI Factories — Gilad Shainer, Peter Salanki, Evan Burness (NVIDIA, CoreWeave, Meta, Microsoft)

Daily Booth Theater Schedule

Visit the Microsoft Theater for lightning talks from engineering leaders and partners.

Monday, March 16
2:00 PM · BTH208 · NVIDIA · Accelerate AI Innovation on Azure with NVIDIA Run:ai — Rob Magno
2:30 PM · BTH202 · General Robotics · Models to Machines: Deploying Agentic AI in Real-World Robotics — Dinesh Narayanan
3:00 PM · BTH200 · Fractal Analytics · From Generalist to Enterprise-Ready: Fractal Builds Domain AI — C. Chaudhuri
3:30 PM · BTH109 · Microsoft · Agentic Cloud Ops: Smarter Operations with Azure Copilot — Jyoti Sharma
4:00 PM · BTH103 · Microsoft · Build a Deep Research Agent for Enterprise Data — D. Casati, A. Slutsky, H. Alkemade
4:30 PM · BTH205 · NetApp · Azure NetApp Files: Powering Your Data for AI Capabilities — Andy Chan
5:00 PM · BTH207 · NVIDIA · The Agentic Commerce Stack: Open Models on Azure — Antonio Martinez
5:30 PM · BTH217 · OPAQUE · Confidential AI on Azure Unlocks Sovereign AI at Scale — Aaron Fulkerson
6:00 PM · BTH218 · Simplismart · Making BYOC Work at Scale with Modular Inference — Amritanshu Jain
6:30 PM · Expo Reception

Tuesday, March 17
1:30 PM · BTH100 · Microsoft · From Open Weights to Enterprise Scale: Open-Source Models — Sharmila Chockalingam
2:00 PM · BTH212 · Personal AI · Unlocking the Power of Memory in Teams with Personal AI — Sam Harkness
2:30 PM · BTH111 · Microsoft / NVIDIA · Scalable LLM Inference on AKS Using NVIDIA Dynamo — Mohamad Al jazaery, Anton Slutsky
3:00 PM · BTH204 · Mistral AI · Innovate with Mistral AI on Microsoft Foundry — Ian Mathew
3:30 PM · BTH104 · Microsoft · GPU-Accelerated CFD at Scale: Star-CCM+ on Azure — Jason Scheffelmaer
4:00 PM · BTH206 · NeuBird AI · Agentic AI for Incident Response on Microsoft Azure — Grant Griffiths
4:30 PM · BTH101 · GitHub · Agentic DevOps: Evolving Software with GitHub Copilot — Glenn Wester
5:00 PM · BTH209 · Rescale · Real-World AI Physics: GM & NVIDIA on Rescale — Dinal Perera
5:30 PM · BTH107 · Microsoft · Intro to LoRA Fine-Tuning on Azure — Christin Pohl
6:30 PM · Raffle

Wednesday, March 18
1:00 PM · BTH219 · VAST Data · Scaling AI Infrastructure on Azure with VAST Data — Jason Vallery
1:30 PM · BTH110 · Microsoft · Physical AI and Robotics: The Next Frontier — F. Miller, C. Souche, D. Narayanan
2:00 PM · BTH105 · Microsoft · Sovereign AI Options with Azure Local — Kim Lam
2:30 PM · BTH108 · Microsoft · Automating HPC Workflows with Copilot Agents — Param Shah
3:00 PM · BTH102 · Microsoft · Trustworthy Multi-Agent Workflows with Microsoft Foundry — Brian Benz
4:00 PM · BTH106 · Microsoft · Scaling Enterprise AI on ARO with NVIDIA H100 & H200 — Lachie Evenson
4:30 PM · BTH211 · WEKA · Hybrid AI Data Orchestration with WEKA NeuralMesh™ — Desiree Campbell
5:00 PM · BTH202 · Hammerspace · NVIDIA AI Enterprise Software with NIM — Mike Bloom
5:30 PM · BTH203 · Kinaxis · Reimagining Global Supply Planning with Azure — Dane Henshall
6:00 PM · BTH214 · AT&T · Connected AI on Azure for Manufacturing — Brad Pritchett
6:30 PM · Raffle

Thursday, March 19
11:00 AM · BTH210 · Wandelbots · Physical AI: Powering Software-Defined Automation in Robotics — Marwin Kunz, Martin George
11:30 AM · Raffle

Explore Our Demo Pods

Visit the Microsoft booth to see our technology in action with live demonstrations across four dedicated pod areas.

POD 1 · Azure AI Infrastructure: End-to-end AI infrastructure for training and inference at scale, featuring the latest NVIDIA GPU integrations on Azure.
POD 2 · Microsoft Foundry: Our comprehensive platform for building, deploying, and operating agentic AI systems with enterprise reliability.
POD 3 · Building AI Together: Showcasing joint Microsoft and NVIDIA solutions across diverse industries, from manufacturing to retail.
POD 4 · Startups Powering AI: Discover how innovative startups are running next-generation AI workloads on the Azure platform.

Ancillary Events & Networking

Join Microsoft leadership and our partner ecosystem at these curated networking experiences.

Sun, Mar 15 · 6:00 PM · Microsoft for Startups Executive Leadership Dinner · 📍 Morton's Steakhouse, San Jose. Exclusive gathering for startup leaders and Microsoft executives.
Mon, Mar 16 · 1:30 PM · Microsoft × NVIDIA Open Meet · 📍 Signia by Hilton, International Suite. Strategic alignment session for Microsoft and NVIDIA executives.
Mon, Mar 16 · 7:30 PM · Microsoft + NVIDIA Executive Dinner · 📍 Il Fornaio, San Jose. Executive dinner for key customers and leadership teams.
Tue, Mar 17 · 11:00 AM to 1:00 PM · Microsoft AI Luncheon: Research, Robotics, & Real-World AI · 📍 Signia by Hilton, International Suite. Invite-only: a curated executive lunch exploring the journey from AI research to physical enterprise deployments in robotics and manufacturing.
Tue, Mar 17 · 7:30 PM · Networking in AI & Tech · 📍 San Pedro Square Market. Community networking mixer for Microsoft teams, partners, and customers.
Wed, Mar 18 · 10:00 AM to 1:00 PM · AI Innovator's Circle Brunch: Powering Intelligent Systems Across the Ecosystem · 📍 Il Fornaio, San Jose. Hosted by Microsoft & NVIDIA at GTC. Join us for an exclusive brunch and discussion on the intelligent ecosystem.

Microsoft Discovery: The path to an agentic EDA environment
Generative AI has been the buzz across engineering, science, and consumer applications, including EDA. It was the centerpiece of the keynotes at both SNUG and CadenceLive, and it will feature heavily at DAC. Both industry vendors and customers are developing very impressive task-specific tools and capabilities powered by traditional and generative AI. However, all of these are point solutions addressing specific tasks. This leaves the question of how customers will tie them together, and how customers will run and access the LLMs, AI, and data resources needed to power these solutions. While our industry has experience developing, running, and maintaining high-performance EDA environments, an AI-centric data center running GPUs with a low-latency interconnect like InfiniBand is not an environment many chip development companies already have, or have experience operating. Unfortunately, because LLMs are so resource hungry, it's difficult to "ease into" a deployment.

The Agentic Platform for EDA

At the Microsoft Build conference in May, Microsoft introduced the Microsoft Discovery platform. The platform aims to accelerate R&D across several industry verticals, specifically biology (life science and drug discovery), chemistry (materials and substance discovery), and physics (semiconductors and multi-physics). Microsoft Discovery provides the platform and capabilities to help customers implement a complete agentic AI environment. Because it is a cloud-based solution, customers won't need to manage the AI models or RAG solutions themselves. Running inside the customer's cloud tenant, the AI models, the data they use, and the results they produce all remain under the customer's control and within the customer's environment. No data goes back to the Internet, and all learning remains with the customer.
This gives customers the confidence that they can safely and easily deploy and use AI models while maintaining complete sovereignty over their data and IP. Customers are free to deploy any of the dozens of AI models available on Azure. They can also deploy and use GraphRAG solutions to improve context and get better LLM responses. All of this is available without having to deploy additional hardware or manage a large, independent GPU deployment. Customers testing generative AI solutions and starting to develop their flows, tools, and methodologies with this new technology can deploy and use these resources as needed.

The Microsoft Discovery platform does not try to replace the EDA tools you already have. Instead, it allows you to incorporate those tools into an agentic environment. Without anthropomorphizing, these agents can be thought of as AI-driven task engines that can reason and interact with each other or with tools. They can be used to make decisions, analyze results, generate responses, take action, or even drive tools. Customers will be able to incorporate existing EDA tools into the platform and drive them with an agent. Microsoft Discovery will even be able to run agents from partners, helping customers intelligently tie together multiple capabilities and automate analysis and decision-making in the flow, so engineering teams can accomplish a greater number of tasks more quickly and increase productivity.

HPC Infrastructure for EDA

Of course, to run EDA tools, customers need an effective environment to run those tools in. One thing that has always been true in our industry, but is often overlooked, is that as good as the algorithms in the tools are, they are always limited by the infrastructure they run on. No matter how fast your algorithm is, running on a slow processor means turnaround time is still going to be slow.
No matter how fast your tools are and how new and shiny your servers are, if your file system is a bottleneck, your tool and server will have to wait for the data. The infrastructure you run on sets the speed limit for your job, regardless of how fast an engine you have. Most of the AI solutions being discussed for EDA focus only on the engine and ignore the infrastructure. The Microsoft Discovery platform understands this and addresses the issue by having the Azure HPC environment at its core.

The HPC core of the platform uses elements familiar to the EDA community. High-performance file storage utilizes Azure NetApp Files (ANF). This shared file service uses the same NetApp technology and hardware that many in the EDA community already use on-premises. ANF delivers unmatched performance for cloud-based file storage, especially for metadata-heavy workloads like EDA. This gives EDA workloads a familiar pathway into the Discovery platform so they can take advantage of its AI capabilities for chip design.

Customers will also have access to Azure's fleet of high-performance compute, including the recently released Intel Emerald Rapids-based FXv2, which was developed with large, back-end EDA workloads in mind. FXv2 features 1.8 TB of RAM and an all-core turbo clock speed of 4 GHz, making it ideal for large STA, P&R, and PV workloads. For front-end and moderately sized back-end workloads, in addition to the existing HPC compute offerings, Microsoft recently updated the D- and E-series compute SKUs with Intel Emerald Rapids processors in the v6 versions of those systems, further pushing performance for smaller workloads. Design teams will have access to the high-performance compute and storage resources required to get the most out of their EDA tools while also taking advantage of the AI capabilities offered by the platform.
The familiar, EDA-friendly HPC environment makes migrating existing workloads easier and ensures that tools run effectively and, more importantly, that flows mesh smoothly.

Industry Standards and Interoperability

Another aspect of the Microsoft Discovery platform that will be especially important for EDA customers is that it uses A2A for agent-to-agent communication and MCP for agent-to-service communication. This matters because both A2A and MCP are industry-standard protocols. Microsoft also expects to support the evolution of these and other standards that emerge in this field, future-proofing your investment.

Those of us who have been involved in the various standards and interoperability efforts in semiconductor and EDA over the years understand that having the platform use industry standards-based interfaces makes adopting new technology much easier for all users. With AI development rushing forward and everyone, customers and vendors alike, trying to capitalize on generative AI's promises, there are already independent efforts by customers and vendors to develop capabilities quickly. In the past, this meant everyone went off in different directions: vendors developed mutually exclusive solutions, and customers then had to build customized integrations to leverage them. The various solutions all worked slightly differently, making integration a painful process. The history of VMM, OVM, and UVM is an example of this. As the industry starts to develop AI and agentic environments, the same fragmentation is likely to happen again. By starting with A2A and MCP, Microsoft is signaling for the industry to align around these industry-standard protocols. This will make it easier for agents developed by customers and vendors to interoperate with each other and with the Discovery platform.
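To make the protocol layer concrete: MCP is built on JSON-RPC 2.0, so an agent invokes a tool by sending a `tools/call` request naming the tool and its arguments, and the tool's MCP server replies with a result containing content items. Below is a minimal, self-contained sketch of that exchange. The tool name `run_sta` and its arguments are hypothetical examples for illustration, not part of any real vendor interface, and the "server" here is a toy dispatcher rather than a full MCP implementation.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, as an MCP client (agent) would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def handle_tool_call(raw: str) -> str:
    """Toy MCP-style server: parse the request and dispatch to a local tool wrapper."""
    req = json.loads(raw)
    name = req["params"]["name"]
    args = req["params"]["arguments"]
    # A real server would launch the EDA tool here; this sketch fakes a result.
    result_text = f"{name} completed for block '{args.get('block')}'"
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": result_text}]},
    })

# Hypothetical invocation: an agent asking an STA tool wrapper to analyze a block.
request = make_tool_call(1, "run_sta", {"block": "cpu_core", "corner": "ss_0p72v_125c"})
response = json.loads(handle_tool_call(request))
print(response["result"]["content"][0]["text"])
```

Because both sides speak the same standard envelope, the same agent could call any other tool that exposes an MCP server without custom glue code, which is exactly the interoperability argument made above.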
Vendor tools implementing an MCP server interface can communicate directly with customer agents using MCP, as well as with the Discovery platform. This makes it easier for our industry to develop interoperable solutions. Similarly, agents that use the A2A protocol to interact with other agents can be integrated more easily when those other agents also communicate using A2A. If you're going to build agents for EDA, or EDA tools and services that interact with agents, build them using A2A for inter-agent communication and MCP for agent-to-tool/service communication.

Generative AI is likely to be the most transformative technology to impact EDA this decade. Productivity-wise, it will likely be at least as impactful for us as synthesis, STA, and automatic place and route were in their own ways.

To learn more about these innovations, join the Microsoft team at the Design Automation Conference (DAC) in San Francisco on June 23. At DAC, the Microsoft team will go into depth on the Discovery platform and the larger impact that AI will have on the semiconductor industry. In his opening keynote discussion on Monday, Bill Chappell, Microsoft's CTO for the Microsoft Discovery and Quantum team, will discuss AI's impact on science and the semiconductor industry. Serge Leef's engineering track session will cover generative AI in chip design, and don't miss Prashant Varshney's detailed explanation of the Microsoft Discovery platform in his Exhibitor Forum session. Visit the Microsoft booth (second floor, 2124) for more in-depth discussions with our team.