Azure Kubernetes Service
Expanding the Public Preview of the Azure SRE Agent
We are excited to share that the Azure SRE Agent is now available in public preview for everyone instantly – no sign up required. A big thank you to all our preview customers who provided feedback and helped shape this release! Watching teams put the SRE Agent to work taught us a ton, and we’ve baked those lessons into a smarter, more resilient, and enterprise-ready experience. You can now find Azure SRE Agent directly in the Azure Portal and get started, or use the link below.

📖 Learn more about SRE Agent.
👉 Create your first SRE Agent (Azure login required)

What’s New in Azure SRE Agent - October Update

The Azure SRE Agent now delivers secure-by-default governance, deeper diagnostics, and extensible automation—built for scale. It can even resolve incidents autonomously by following your team’s runbooks. With native integrations across Azure Monitor, GitHub, ServiceNow, and PagerDuty, it supports root cause analysis using both source code and historical patterns. And since September 1, billing and reporting are available via Azure Agent Units (AAUs). Please visit the product documentation for the latest updates. Here are a few highlights for this month:

Prioritizing enterprise governance and security: By default, the Azure SRE Agent operates with least-privilege access and never executes write actions on Azure resources without explicit human approval. Additionally, it uses role-based access control (RBAC) so organizations can assign read-only or approver roles, providing clear oversight and traceability from day one. This allows teams to choose their desired level of autonomy, from read-only insights to approval-gated actions to full automation, without compromising control.

Covering the breadth and depth of Azure: The Azure SRE Agent helps teams manage and understand their entire Azure footprint. With built-in support for AZ CLI and kubectl, it works across all Azure services. But it doesn’t stop there—diagnostics are enhanced for platforms like PostgreSQL, API Management, Azure Functions, AKS, Azure Container Apps, and Azure App Service. Whether you're running microservices or managing monoliths, the agent delivers consistent automation and deep insights across your cloud environment.

Automating Incident Management: The Azure SRE Agent now plugs directly into Azure Monitor, PagerDuty, and ServiceNow to streamline incident detection and resolution. These integrations let the Agent ingest alerts and trigger workflows that match your team’s existing tools—so you can respond faster, with less manual effort.

Engineered for extensibility: The Azure SRE Agent incident management approach lets teams reuse existing runbooks and customize response plans to fit their unique workflows. Whether you want to keep a human in the loop or empower the Agent to autonomously mitigate and resolve issues, the choice is yours. This flexibility gives teams the freedom to evolve—from guided actions to trusted autonomy—without ever giving up control.

Root cause, meet source code: The Azure SRE Agent now supports code-aware root cause analysis (RCA) by linking diagnostics directly to source context in GitHub and Azure DevOps. This tight integration helps teams trace incidents back to the exact code changes that triggered them—accelerating resolution and boosting confidence in automated responses. By bridging operational signals with engineering workflows, the agent makes RCA faster, clearer, and more actionable.
Close the loop with DevOps: The Azure SRE Agent now generates incident summary reports directly in GitHub and Azure DevOps—complete with diagnostic context. These reports can be assigned to a GitHub Copilot coding agent, which automatically creates pull requests and merges validated fixes. Every incident becomes an actionable code change, driving permanent resolution instead of temporary mitigation.

Getting Started

Start here: Create a new SRE Agent in the Azure portal (Azure login required)
Blog: Announcing a flexible, predictable billing model for Azure SRE Agent
Blog: Enterprise-ready and extensible – Update on the Azure SRE Agent preview
Product documentation
Product home page

Community & Support

We’d love to hear from you! Please use our GitHub repo to file issues, request features, or share feedback with the team.

Microsoft Azure at KubeCon North America 2025 | Atlanta, GA - Nov 10-13
KubeCon + CloudNativeCon North America is back - this time in Atlanta, Georgia, and the excitement is real. Whether you’re a developer, operator, architect, or just Kubernetes-curious, Microsoft Azure is showing up with a packed agenda, hands-on demos, and plenty of ways to connect and learn with our team of experts. Read on for all the ways you can connect with our team!

Kick off with Azure Day with Kubernetes (Nov 10)

Before the main conference even starts, join us for Azure Day with Kubernetes on November 10. It’s a full day of learning, best practices, deep-dive discussions, and hands-on labs, all designed to help you build cloud-native and AI apps with Kubernetes on Azure. You’ll get to meet Microsoft experts, dive into technical sessions, and roll up your sleeves in the afternoon labs or have focused deep-dive discussions in our whiteboarding sessions. If you’re looking to sharpen your skills or just want to chat with folks who live and breathe Kubernetes on Azure, this is the place to be. Spots are limited, so register today at: https://aka.ms/AzureKubernetesDay

Catch up with our experts at Booth #500

The Microsoft booth is more than just a spot to grab swag (though, yes, there will be swag and stickers!). It’s a central hub for connecting with product teams, setting up meetings, and seeing live demos. Whether you want to learn how to troubleshoot Kubernetes with agentic AI tools, explore open-source projects, or just talk shop, you’ll find plenty of friendly faces ready to help. We will be running a variety of theatre sessions and demos out of the booth throughout the week on topics including AKS Automatic, agentic troubleshooting, Azure Verified Modules, networking, app modernization, hybrid deployments, storage, and more. 🔥 Hot tip: join us for our live Kubernetes Trivia Show at the Microsoft Azure booth during the KubeCrawl on Tuesday to win exclusive swag!

Microsoft sessions at KubeCon NA 2025

Here’s a quick look at all the sessions with Microsoft speakers that you won’t want to miss. Click the titles for full details and add them to your schedule!

Keynotes

Date: Thu November 13, 2025
Start Time: 9:49 AM
Room: Exhibit Hall B2
Title: Scaling Smarter: Simplifying Multicluster AI with KAITO and KubeFleet
Speaker: Jorge Palma
Abstract: As demand for AI workloads on Kubernetes grows, multicluster inferencing has emerged as a powerful yet complex architectural pattern. While multicluster support offers benefits in terms of geographic redundancy, data sovereignty, and resource optimization, it also introduces significant challenges around orchestration, traffic routing, cost control, and operational overhead. To address these challenges, we’ll introduce two CNCF projects—KAITO and KubeFleet—that work together to simplify and optimize multicluster AI operations. KAITO provides a declarative framework for managing AI inference workflows with built-in support for model versioning and performance telemetry. KubeFleet complements this by enabling seamless workload distribution across clusters, based on cost, latency, and availability. Together, these tools reduce operational complexity, improve cost efficiency, and ensure consistent performance at scale.

Date: Thu November 13, 2025
Start Time: 9:56 AM
Room: Exhibit Hall B2
Title: Cloud Native Back to the Future: The Road Ahead
Speakers: Jeremy Rickard (Microsoft), Alex Chircop (Akamai)
Abstract: The Cloud Native Computing Foundation (CNCF) turns 10 this year, now home to more than 200 projects across the cloud native landscape.
As we look ahead, the community faces new demands around security, sustainability, complexity, and emerging workloads like AI inference and agents. As many areas of the ecosystem transition to mature foundational building blocks, we are excited to explore the next evolution of cloud native development. The TOC will highlight how these challenges open opportunities to shape the next generation of applications and ensure the ecosystem continues to thrive. How are new projects addressing these new emerging workloads? How will these new projects impact security hygiene in the ecosystem? How will existing projects adapt to meet new realities? How is the CNCF evolving to support this next generation of computing? Join us as we reflect on the first decade of cloud native—and look ahead to how this community will power the age of AI, intelligent systems, and beyond.

Featured Demo

Date: Wed November 12, 2025
Start Time: 2:15-2:35 PM
Room: Expo Demo Area
Title: HolmesGPT: Agentic K8s troubleshooting in your terminal
Speakers: Pavneet Singh Ahluwalia (Microsoft), Arik Alon (Robusta)
Abstract: Troubleshooting Kubernetes shouldn’t require hopping across dashboards, logs, and docs. With open-source tools like HolmesGPT and the Model Context Protocol (MCP) server, you can now bring an agentic experience directly into your CLI. In this demo, we’ll show how this OSS stack can run everywhere, from lightweight kind clusters on your laptop to production-grade clusters at scale. The experience supports any LLM provider: in-cluster, local, or cloud, ensuring data never leaves your environment and costs remain predictable. We will showcase how users can ask natural-language questions (e.g., “why is my pod Pending?”) and get grounded reasoning, targeted diagnostics, and safe, human-in-the-loop remediation steps -- all without leaving the terminal. Whether you’re experimenting locally or running mission-critical workloads, you’ll walk away knowing how to extend these OSS components to build your own agentic workflows in Kubernetes.

All sessions

Microsoft Speaker(s) | Session
Will Case | No Kubectl, No Problem: The Future With Conversational Kubernetes
Ana Maria Lopez Moreno | Smarter Together: Orchestrating Multi-Agent AI Systems With A2A and MCP on Container
Neha Aggarwal | 10 Years of Cilium: Connecting, Securing, and Simplifying the Cloud Native Stack
Yi Zha | Strengthening Supply Chain for Kubernetes: Cross-Cloud SLSA Attestation Verification
Joaquim Rocha & Oleksandr Dubenko | Contribfest: Power up Your CNCF Tools With Headlamp
Jeremy Rickard | Shaping LTS Together: What We’ve Learned the Hard Way
Feynman Zhou | Shipping Secure, Reusable, and Composable Infrastructure as Code: GE HealthCare’s Journey With ORAS
Jackie Maertens & Nilekh Chaudhari | No Joke: Two Security Maintainers Walk Into a Cluster
Paul Yu, Sachi Desai | Rage Against the Machine: Fighting AI Complexity with Kubernetes simplicity
Dipti Pai | Flux - The GitLess GitOps Edition
Trask Stalnaker | OpenTelemetry: Unpacking 2025, Charting 2026
Mike Morris | Gateway API: Table Stakes
Anish Ramasekar, Mo Khan, Stanislav Láznička, Rita Zhang & Peter Engelbert | Strengthening Kubernetes Trust: SIG Auth's Latest Security Enhancements
Ernest Wong | AI Models Are Huge, but Your GPUs Aren’t: Mastering multi-mode distributed inference on Kubernetes
Rita Zhang | Navigating the Rapid Evolution of Large Model Inference: Where does Kubernetes fit?
Suraj Deshmukh | LLMs on Kubernetes: Squeeze 5x GPU Efficiency with cache, route, repeat!
Aman Singh | Drasi: A New Take on Change-driven Architectures
Ganeshkumar Ashokavardhanan & Qinghui Zhuang | Agent-Driven MCP for AI Workloads on Kubernetes
Steven Jin | Contribfest: From Farm (Fork) To Table (Feature): Growing Your First (Free-range Organic) Istio PR
Jack Francis | SIG Autoscaling Projects Update
Mark Rossetti | Kubernetes SIG-Windows Updates
Apurup Chevuru & Michael Zappa | Portable MTLS for Kubernetes: A QUIC-Based Plugin Compatible With Any CNI
Ciprian Hacman | The Next Decoupling: From Monolithic Cluster, To Control-Plane With Nodes
Keith Mattix | Istio Project Updates: AI Inference, Ambient Multicluster & Default Deny
Jonathan Smith | How Comcast Leverages Radius in Their Internal Developer Platform
Jon Huhn | Lightning Talk: Getting (and Staying) up To Speed on DRA With the DRA Example Driver
Rita Zhang, Jaydip Gabani | Open Policy Agent (OPA) Intro & Deep Dive
Bridget Kromhout | SIG Cloud Provider Deep Dive: Expanding Our Mission
Pavneet Ahluwalia | Beyond ChatOps: Agentic AI in Kubernetes—What Works, What Breaks, and What’s Next
Ryan Zhang | Finally, a Cluster Inventory I Can USE!
Michael Katchinskiy, Yossi Weizman | You Deployed What?! Data-driven lesson on Unsafe Helm Chart Defaults
Mauricio Vásquez Bernal & Jose Blanquicet | Contribfest: Inspektor Gadget Contribfest: Enhancing the Observability and Security of Your K8s Clusters Through an easy to use Framework
Wei Fu | etcd V3.6 and Beyond + Etcd-operator Updates
Jeremy Rickard | GitHub Actions: Project Usage and Deep Dive

We can't wait to see you in Atlanta! Microsoft’s presence is all about empowering developers and operators to build, secure, and scale modern applications. You’ll see us leading sessions, sharing open-source contributions, and hosting roundtables on how cloud native powers AI in production. We’re here to learn from you, too - so bring your questions, ideas, and feedback.

Leveraging Low Priority Pods for Rapid Scaling in AKS
If you're running workloads in Kubernetes, you'll know that scalability is key to keeping things available and responsive. But there's a problem: when your cluster runs out of resources, the node autoscaler needs to spin up new nodes, and this takes anywhere from 5 to 10 minutes. That's a long time to wait when you're dealing with a traffic spike. One way to handle this is using low priority pods to create buffer nodes that can be preempted when your actual workloads need the resources.

The Problem

Cloud-native applications are dynamic, and workload demands can spike quickly. Automatic scaling helps, but the delay in scaling up nodes when you run out of capacity can leave you vulnerable, especially in production. When a cluster runs out of available nodes, the autoscaler provisions new ones, and during that 5-10 minute wait you're facing:

Increased Latency: Users experience lag or downtime whilst they're waiting for resources to become available.
Resource Starvation: High-priority workloads don't get the resources they need, leading to degraded performance or failed tasks.
Operational Overhead: SREs end up manually intervening to manage resource loads, which takes them away from more important work.

This is enough reason to look at creating spare capacity in your cluster, and that's where low priority pods come in.

The Solution

The idea is pretty straightforward: you run low priority pods in your cluster that don't actually do any real work - they're just placeholders consuming resources. These pods are sized to take up enough space that the cluster autoscaler provisions additional nodes for them. Effectively, you're creating a buffer of "standby" nodes that are ready and waiting. When your real workloads need resources and the cluster is under pressure, Kubernetes kicks out these low priority pods to make room - this is called preemption. Essentially, Kubernetes looks at what's running, sees the low priority pods, and terminates them to free up the nodes. This happens almost immediately, and your high-priority workloads can use that capacity straight away. Meanwhile, those evicted low priority pods sit in a pending state, which triggers the autoscaler to spin up new nodes to replace the buffer you just used. The whole thing is self-maintaining.

How Preemption Actually Works

When a high-priority pod needs to be scheduled but there aren't enough resources, the Kubernetes scheduler kicks off preemption. This happens almost instantly compared to the 5-10 minute wait for new nodes. Here's what happens:

Identification: The scheduler works out which low priority pods need to be evicted to make room. It picks the lowest priority pods first.
Graceful Termination: The selected pods get a termination signal (SIGTERM) and a grace period (usually 30 seconds by default) to shut down cleanly.
Resource Release: Once the low priority pods terminate, their resources are immediately released and available for scheduling. The high-priority pod can then be scheduled onto the node, typically within seconds.
Buffer Pod Rescheduling: After preemption, the evicted low priority pods try to reschedule. If there's capacity on existing nodes, they'll land there. If not, they'll sit in a pending state, which triggers the cluster autoscaler to provision new nodes.

This gives you a dual benefit: your critical workloads get immediate access to the nodes that were running low priority pods, and the system automatically replenishes the buffer in the background.
Whilst your high-priority workloads are running on the newly freed capacity, the autoscaler is already provisioning replacement nodes for the evicted buffer pods. Your buffer capacity is continuously maintained without any manual work, so you're always ready for the next spike. The key advantage here is speed. Whilst provisioning a new node takes 5-10 minutes, preempting a low priority pod and scheduling a high-priority pod in its place typically completes in under a minute.

Why This Approach Works Well

Now that you understand how the solution works, let's look at why it's effective:

Immediate Resource Availability: You maintain a pool of ready nodes that can rapidly scale up when needed. There's always capacity available to handle sudden load spikes without waiting for new nodes.
Seamless Scaling: High-priority workloads never face resource starvation, even during traffic surges. They get immediate access to capacity, whilst the buffer automatically replenishes itself in the background.
Self-Maintaining: Once set up, the system handles everything automatically. You don't need to manually manage the buffer or intervene when workloads spike.

The Trade-Off

Whilst low priority pods offer significant advantages for keeping your cluster responsive, you need to understand the cost implications. By maintaining buffer nodes with low priority pods, you're running machines that aren't hosting active, productive workloads. You're paying for additional infrastructure just for availability and responsiveness. These buffer nodes consume compute resources you're paying for, even though they're only running placeholder workloads. The decision for your organisation comes down to whether the improved responsiveness and elimination of that 5-10 minute scaling delay justifies the extra cost. For production environments with strict SLA requirements or where downtime is expensive, this trade-off is usually worth it. However, you'll want to carefully size your buffer capacity to balance cost with availability needs.

Setting It Up

Step 1: Define Your Low Priority Pod Configurations

Start by defining low priority pods using the PriorityClass resource. This is where you create configurations that designate certain workloads as low priority.
Here's what that configuration looks like:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 0
globalDefault: false
description: "Priority class for buffer pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buffer-pods
  namespace: default
spec:
  replicas: 3  # Adjust based on how much buffer capacity you need
  selector:
    matchLabels:
      app: buffer
  template:
    metadata:
      labels:
        app: buffer
    spec:
      priorityClassName: low-priority
      containers:
      - name: buffer-container
        image: registry.k8s.io/pause:3.9  # Lightweight image that does nothing
        resources:
          requests:
            cpu: "1000m"   # Size these based on your typical workload needs
            memory: "2Gi"  # Large enough to trigger node creation
          limits:
            cpu: "1000m"
            memory: "2Gi"
```

The key things to note here:

The PriorityClass has a value of 0, which is lower than the default priority for regular pods (typically 1000+)
We're using a Deployment rather than individual pods so we can easily scale the buffer size
The pause image is a minimal container that does basically nothing - perfect for a placeholder
The resource requests are what matter - these determine how much space each buffer pod takes up
You'll want to size the CPU and memory requests based on your actual workload needs

Step 2: Deploy the Low Priority Pods

Next, deploy these low priority pods across your cluster. Use affinity configurations to spread them out and let Kubernetes manage them.

Step 3: Monitor and Adjust

You'll want to monitor your deployment to make sure your buffer nodes are scaling up when needed and scaling down during idle periods to save costs. Tools like Prometheus and Grafana work well for monitoring resource usage and pod status so you can refine your setup over time.

Best Practices

Right-Sizing Your Buffer Pods: The resource requests for your low priority pods need careful thought. They need to be big enough to consume sufficient capacity that additional buffer nodes actually get provisioned by the autoscaler. But they shouldn't be so large that you end up over-provisioning beyond your required buffer size. Think about your typical workload resource requirements and size your buffer pods to create exactly the number of standby nodes you need.
Regular Assessment: Keep assessing your scaling strategies and adjust based on what you're seeing with workload patterns and demands. Monitor how often your buffer pods are getting evicted and whether the buffer size makes sense for your traffic patterns.
Communication and Documentation: Make sure your team understands what low priority pods do in your deployment and what this means for your SLAs. Document the cost of running your buffer nodes and why you're justifying this overhead.
Automated Alerts: Set up alerts for when pod eviction happens so you can react quickly and make sure critical workloads aren't being affected. Also alert on buffer pod status to ensure your buffer capacity stays available.
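As a concrete starting point for the monitoring and alerting practices above, a few kubectl checks cover the basics. This is a minimal sketch that assumes the buffer Deployment uses the app=buffer label from the manifest earlier; the Preempted event reason is what the default scheduler emits, but exact event details can vary by Kubernetes version.

```bash
# Current state of the buffer pods: Running means spare capacity is in place,
# Pending means the buffer has been consumed and replacement nodes are still provisioning.
kubectl get pods -l app=buffer -o wide

# Count Pending buffer pods - a non-zero value for a sustained period is a useful alert signal.
kubectl get pods -l app=buffer --field-selector=status.phase=Pending --no-headers | wc -l

# Recent preemption events, useful for seeing how often the buffer is actually being used.
kubectl get events --field-selector reason=Preempted --sort-by=.lastTimestamp
```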
Wrapping Up

Leveraging low priority pods to create buffer nodes is an effective way to handle resource constraints when you need rapid scaling and can't afford to wait for the node autoscaler. This approach is particularly valuable if you're dealing with workloads that experience sudden, unpredictable traffic spikes and need to scale up immediately - think scenarios like flash sales, breaking news events, or user-facing applications with strict SLA requirements. However, this isn't a one-size-fits-all solution. If your workloads are fairly static or you can tolerate the 5-10 minute wait for new nodes to provision, you probably don't need this. The buffer comes at an additional cost since you're running nodes that aren't doing productive work, so you need to weigh whether the improved responsiveness justifies the extra spend for your specific use case. If you do decide this approach fits your needs, remember to keep monitoring and iterating on your configuration for the best resource management. By maintaining a buffer of low priority pods, you can address resource scarcity before it becomes a problem, reduce latency, and provide a much better experience for your users. This approach will make your cluster more responsive and free up your operational capacity to focus on improving services instead of constantly firefighting resource issues.

Choosing the Right Azure Containerisation Strategy: AKS, App Service, or Container Apps?
Azure Kubernetes Service (AKS)

What is it? AKS is Microsoft’s managed Kubernetes offering, providing full access to the Kubernetes API and control plane. It’s designed for teams that want to run complex, scalable, and highly customisable container workloads, with direct control over orchestration, networking, and security.

When to choose AKS:
You need advanced orchestration, custom networking, or integration with third-party tools.
Your team has Kubernetes expertise and wants granular control.
You’re running large-scale, multi-service, or hybrid/multi-cloud workloads.
You require Windows container support (with some limitations).

Advantages:
Full Kubernetes API access and ecosystem compatibility.
Supports both Linux and Windows containers.
Highly customisable (networking, storage, security, scaling).
Suitable for complex, stateful, or regulated workloads.

Disadvantages:
Steeper learning curve; requires Kubernetes knowledge.
You manage cluster upgrades, scaling, and security patches (though Azure automates much of this).
Potential for over-provisioning and higher operational overhead.

Azure App Service

What is it? App Service is a fully managed Platform-as-a-Service (PaaS) for hosting web apps, APIs, and backends. It supports both code and container deployments, but is optimised for web-centric workloads.

When to choose App Service:
You’re building traditional web apps, REST APIs, or mobile backends.
You want to deploy quickly with minimal infrastructure management.
Your team prefers a PaaS experience with built-in scaling, SSL, and CI/CD.
You need to run Windows containers (with some limitations).

Advantages:
Easiest to use, minimal configuration, fast deployments.
Built-in scaling, SSL, custom domains, and staging slots.
Tight integration with Azure DevOps, GitHub Actions, and other Azure services.
Handles infrastructure, patching, and scaling for you.

Disadvantages:
Less flexibility for complex microservices or custom orchestration.
Limited access to underlying infrastructure and networking.
Not ideal for event-driven or non-HTTP workloads.

Azure Container Apps

What is it? Container Apps is a fully managed, serverless container platform built on Kubernetes and open-source tech like Dapr and KEDA. It abstracts away Kubernetes complexity, letting you focus on microservices, event-driven workloads, or background jobs.

When to choose Container Apps:
You want to run microservices or event-driven workloads without managing Kubernetes.
You need automatic scaling (including scale to zero) based on HTTP traffic or events.
You want to use Dapr for service discovery, pub/sub, or state management.
You’re building modern, cloud-native apps but don’t need direct Kubernetes API access.

Advantages:
Serverless scaling (including to zero), pay only for what you use.
Built-in support for microservices patterns, event-driven architectures, and background jobs.
No cluster management—Azure handles the infrastructure.
Integrates with Azure DevOps, GitHub Actions, and supports Linux containers from any registry.

Disadvantages:
No direct access to Kubernetes APIs or custom controllers.
Linux containers only (no Windows container support).
Some advanced networking and customisation options are limited compared to AKS.
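To make the differences above more tangible, here is a rough sketch of what deploying the same container image looks like on each service using the Azure CLI. All names (resource group, plan, environment, registry and image) are placeholders, and the App Service plan and Container Apps environment are assumed to already exist; treat this as an illustration of the workflow rather than a copy-paste deployment.

```bash
# App Service: point a web app at a container image on an existing plan.
az webapp create -g demo-rg -p demo-plan -n demo-web \
  --deployment-container-image-name demoacr.azurecr.io/shop-api:v1

# Container Apps: ingress and serverless scale rules (including scale to zero) are declared up front.
az containerapp create -g demo-rg -n demo-api \
  --environment demo-env \
  --image demoacr.azurecr.io/shop-api:v1 \
  --ingress external --target-port 8080 \
  --min-replicas 0 --max-replicas 5

# AKS: provision the cluster, then manage the workload through Kubernetes itself.
az aks create -g demo-rg -n demo-aks --node-count 3 --generate-ssh-keys
az aks get-credentials -g demo-rg -n demo-aks
kubectl create deployment shop-api --image=demoacr.azurecr.io/shop-api:v1
kubectl expose deployment shop-api --type=LoadBalancer --port=80 --target-port=8080
```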
Key Differences

Feature | Azure Kubernetes Service (AKS) | Azure App Service | Azure Container Apps
Best for | Complex, scalable, custom workloads | Web apps, APIs, backends | Microservices, event-driven, jobs
Management | You manage (with Azure help) | Fully managed | Fully managed, serverless
Scaling | Manual/auto (pods, nodes) | Auto (HTTP traffic) | Auto (HTTP/events, scale to zero)
API Access | Full Kubernetes API | No infra access | No Kubernetes API
OS Support | Linux & Windows | Linux & Windows | Linux only
Networking | Advanced, customisable | Basic (web-centric) | Basic, with VNet integration
Use Cases | Hybrid/multi-cloud, regulated, large-scale | Web, REST APIs, mobile | Microservices, event-driven, background jobs
Learning Curve | Steep (Kubernetes skills needed) | Low | Low-medium
Pricing | Pay for nodes (even idle) | Pay for plan (fixed/auto) | Pay for usage (scale to zero)
CI/CD Integration | Azure DevOps, GitHub, custom | Azure DevOps, GitHub | Azure DevOps, GitHub

How to Decide?

Start with App Service if you’re building a straightforward web app or API and want the fastest path to production.
Choose Container Apps for modern microservices, event-driven, or background processing workloads where you want serverless scaling and minimal ops.
Go with AKS when you need full Kubernetes power, advanced customisation, or are running at enterprise scale with a skilled team.

Conclusion

Azure’s containerisation portfolio is broad, but each service is optimised for different scenarios. For most new cloud-native projects, Container Apps offers the best balance of simplicity and power. For web-centric workloads, App Service remains the fastest route. For teams needing full control and scale, AKS is unmatched. Tip: Start simple, and only move to more complex platforms as your requirements grow. Azure’s flexibility means you can mix and match these services as your architecture evolves.
Public preview: Confidential containers on AKS

We are proud to announce the preview of confidential containers on AKS, which provides confidential computing capabilities to containerized workloads on AKS. This offering provides strong isolation at the pod level, memory encryption, and AMD SEV-SNP hardware-based attestation capabilities for containerized application code and data while in use, building upon the existing security, scalability and resiliency benefits offered by AKS.
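As an illustration of how this surfaces in AKS, the sketch below shows the general shape of the preview workflow: a node pool created with a confidential containers workload runtime, and a pod opting in via a runtime class. The flag value KataCcIsolation, the kata-cc-isolation runtime class name and the VM size are assumptions based on the preview documentation at the time of writing and may change; verify against the current docs before using them.

```bash
# Assumed preview syntax: a node pool using the confidential containers (Kata CC) runtime.
# VM size and flag values are placeholders from the preview docs - verify before use.
az aks nodepool add -g MyResourceGroup --cluster-name MyAKSCluster \
  --name ccpool \
  --node-vm-size Standard_DC8as_cc_v5 \
  --workload-runtime KataCcIsolation

# Pods opt in by selecting the confidential runtime class (name assumed from the preview docs).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cc-demo
spec:
  runtimeClassName: kata-cc-isolation
  containers:
    - name: app
      image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
EOF
```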
Azure Monitor managed service for Prometheus now includes native Grafana dashboards
We are excited to announce that Azure Monitor managed service for Prometheus now includes native Grafana dashboards within the Azure portal at no additional cost. This integration marks a major milestone in our mission to simplify observability, reducing the administrative overhead and complexity compared to deploying and maintaining your own Grafana instances. The use of open-source observability tools continues to grow for cloud-native scenarios such as application and infrastructure monitoring using Prometheus metrics and OpenTelemetry logs and traces. For these scenarios, DevOps and SRE teams need streamlined and cost-effective access to industry-standard tooling like Prometheus metrics and Grafana dashboards within their cloud-hosted environments. For many teams, this usually means deploying and managing separate monitoring stacks, with self-hosted or partner-managed versions of Prometheus and Grafana. However, Azure Monitor's latest integrations with Grafana provide this capability out of the box by enabling you to view Prometheus metrics and other Azure observability data in Grafana dashboards fully integrated into the Azure portal. Azure Monitor dashboards with Grafana delivers powerful visualization and data transformation capabilities on Prometheus metrics, Azure resource metrics, logs, and traces stored in Azure Monitor. Pre-built dashboards are included for several key scenarios like Azure Kubernetes Service, Azure Container Apps, Container Insights, and Application Insights.

Why Grafana in the Azure portal?

Grafana dashboards are a widely adopted visualization tool used with Prometheus metrics and cloud-native observability tools. Embedding them natively in the Azure portal offers:

Unified Azure experience: No additional RBAC or network configuration required; users' Azure login credentials and Azure RBAC are used to access dashboards and data. View Grafana dashboards alongside all your other Azure resources and Azure Monitor views in the same portal.
No management overhead or compute costs: Dashboards with Grafana use a fully SaaS model built into Azure Monitor, where you do not have to administer the Grafana server or the compute on which it runs.
Access to community dashboards: Open-source and Grafana community dashboards using Prometheus or Azure Monitor data sources can be imported with no modifications.

These capabilities mean faster troubleshooting, deeper insights, and a more consistent observability platform for Azure-centric workloads.

Figure 1: Dashboards with Grafana landing page in the context of Azure Monitor Workspace in the Azure portal

Getting Started

To get started, enable Managed Prometheus for your AKS cluster and then navigate to the Azure Monitor workspace or AKS cluster in the Azure portal and select Monitoring > Dashboards with Grafana (preview). From this page you can view, edit, create and import Grafana dashboards. Simply click on one of the pre-built dashboards to get started. You may use these dashboards as they have been provided, or edit and add panels, update visualizations and create variables to build your own custom dashboards. With this approach, no Grafana servers or additional Azure resources need to be provisioned or maintained. Teams can quickly leverage and customize Grafana dashboards within the Azure portal, reducing their deployment and management time while still gaining the benefits of dashboards and visualizations to improve monitoring and troubleshooting times.
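If you prefer the CLI over the portal for the prerequisite step, Managed Prometheus can be enabled on an existing cluster along these lines. This is a minimal sketch with placeholder names; it assumes an Azure Monitor workspace already exists and that you are on a recent Azure CLI version.

```bash
# Link the AKS cluster to an existing Azure Monitor workspace and enable Managed Prometheus metrics.
az aks update -g demo-rg -n demo-aks \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id \
  "/subscriptions/<subscription-id>/resourceGroups/demo-rg/providers/Microsoft.Monitor/accounts/demo-amw"
```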
Figure 2: Kubernetes Compute Resources dashboard being viewed in the context of Azure Monitor Workspace in the Azure portal

When to upgrade to Azure Managed Grafana?

Dashboards with Grafana in the Azure portal cover most common Prometheus scenarios, but Azure Managed Grafana remains the right choice for several advanced use cases, including:

Extended data source support for non-Azure data sources, e.g. open-source and third-party data stores
Private networking and advanced authentication options
Multi-cloud, hybrid and on-premises data source connectivity

See When to use Azure Managed Grafana for more details. Get started with Azure Monitor dashboards with Grafana today.

Generally Available - High scale mode in Azure Monitor - Container Insights
Container Insights is Azure Monitor’s solution for collecting logs from your Azure Kubernetes Service (AKS) clusters. As the adoption of AKS continues to grow, we are seeing an increasing number of customers with log scaling needs that hit the limits of log collection in Container Insights. Last August, we announced the public preview of High Scale mode in Container Insights to help customers achieve a higher log collection throughput from their AKS clusters. Today, we are happy to announce the General Availability of High Scale mode.

High Scale mode is ideal for customers approaching or exceeding 10,000 logs/sec from a single node. When High Scale mode is enabled, Container Insights makes multiple configuration changes that lead to a higher overall throughput. These include using a more powerful agent setup, using a different data pipeline, allocating more memory for the agent, and more. All these changes are made in the background by the service and do not require input or configuration from customers. High Scale mode impacts only the data collection layer (with a new DCR) – the rest of the experience remains the same. Data flows to our existing tables, and your queries and alerts work as before.

High Scale mode is available to all customers. Today, High Scale is turned off by default. In the future, we plan to enable High Scale mode by default for all customers to reduce the chances of log loss when workloads scale. To get started with High Scale mode, please see our documentation at https://aka.ms/cihsmode

Securing Cloud Shell Access to AKS
Azure Cloud Shell is an online shell hosted by Microsoft that provides instant access to a command-line interface, enabling users to manage Azure resources without needing local installations. Cloud Shell comes equipped with popular tools and programming languages, including Azure CLI, PowerShell, and the Kubernetes command-line tool (kubectl). Using Cloud Shell can provide several benefits for administrators who need to work with AKS, especially if they need quick access from anywhere, or are in locked down environments:

Immediate Access: There’s no need for local setup; you can start managing Azure resources directly from your web browser.
Persistent Storage: Cloud Shell offers a file share in Azure, keeping your scripts and files accessible across multiple sessions.
Pre-Configured Environment: It includes built-in tools, saving time on installation and configuration.

The Challenge of Connecting to AKS

By default, Cloud Shell traffic to AKS originates from a random Microsoft-managed IP address, rather than from within your network. As a result, the AKS API server must be publicly accessible with no IP restrictions, which poses a security risk as anyone on the internet can attempt to reach it. While credentials are still required, restricting access to the API server significantly enhances security. Fortunately, there are ways to lock down the API server while still enabling access via Cloud Shell, which we’ll explore in the rest of this article.

Options for Securing Cloud Shell Access to AKS

Several approaches can be taken to secure access to your AKS cluster while using Cloud Shell:

IP Allow Listing

On AKS clusters with a public API server, it is possible to lock down access to the API server with an IP allow list. Each Cloud Shell instance has a randomly selected outbound IP coming from the Azure address space whenever a new session is deployed. This means we cannot allow access to these IPs in advance, but we can apply them once our session is running, and this will work for the duration of our session. Below is an example script that you could run from Cloud Shell to check the current outbound IP address and allow it on your AKS cluster's authorised IP list.

```bash
#!/usr/bin/env bash
set -euo pipefail

RG="$1"; AKS="$2"
IP="$(curl -fsS https://api.ipify.org)"
echo "Adding ${IP} to allow list"

CUR="$(az aks show -g "$RG" -n "$AKS" --query "apiServerAccessProfile.authorizedIpRanges" -o tsv | tr '\t' '\n' | awk 'NF')"
NEW="$(printf "%s\n%s/32\n" "$CUR" "$IP" | sort -u | paste -sd, -)"

if az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$NEW" >/dev/null; then
  echo "IP ${IP} applied successfully"
else
  echo "Failed to apply IP ${IP}" >&2
  exit 1
fi
```

This method comes with some caveats:

The users running the script would need to be granted permissions to update the authorised IP ranges in AKS - this permission could be used to add any IP address.
This script will need to be run each time a Cloud Shell session is created, and can take a few minutes to run.
The script only deals with adding IPs to the allow list; you would also need to implement a process to remove these IPs on a regular basis to avoid building up a long list of IPs that are no longer needed.
Adding Cloud Shell IPs in bulk, through Service Tags or similar, will result in your API server being accessible to a much larger range of IP addresses, and should be avoided.
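To deal with the clean-up caveat above, one option is a small scheduled job (for example a nightly pipeline) that resets the authorised ranges back to a known baseline, discarding any session IPs added during the day. This is a sketch using the same az aks update flag as the script above; the baseline ranges are placeholders for your permanent corporate egress IPs.

```bash
#!/usr/bin/env bash
# Reset the API server allow list to a fixed baseline, removing any temporary Cloud Shell IPs.
set -euo pipefail
RG="$1"; AKS="$2"
BASELINE="203.0.113.0/24,198.51.100.10/32"   # placeholder: replace with your permanent allow list
az aks update -g "$RG" -n "$AKS" --api-server-authorized-ip-ranges "$BASELINE"
echo "Authorised IP ranges reset to baseline"
```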
Command Invoke

Azure provides a feature known as Command Invoke that allows you to send commands to be run in AKS, without the need for direct network connectivity. This method executes a container within AKS to run your command and then return the result, and works well from within Cloud Shell. This is probably the simplest approach that works with a locked down API server and the quickest to implement. However, there are some downsides:

Commands take longer to run - when you execute the command, it needs to run a container in AKS, execute the command and then return the result.
You only get exitCode and text output, and you lose API level details.
All commands must be run within the context of the az aks command invoke CLI command, making commands much longer and more complex to execute compared to direct access with kubectl.

Command Invoke can be a practical solution for occasional access to AKS, especially when the cost or complexity of alternative methods isn't justified. However, its user experience may fall short if relied upon as a daily tool.

Further Details: Access a private Azure Kubernetes Service (AKS) cluster using the command invoke or Run command feature - Azure Kubernetes Service | Microsoft Learn
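For reference, this is what day-to-day usage of Command Invoke looks like from Cloud Shell. The resource group and cluster names below are placeholders.

```bash
# Run kubectl inside the cluster - no network line of sight to the API server is required.
az aks command invoke -g MyResourceGroup -n MyAKSCluster \
  --command "kubectl get pods -A"

# Local files such as manifests can be attached and referenced by the command.
az aks command invoke -g MyResourceGroup -n MyAKSCluster \
  --command "kubectl apply -f deployment.yaml" \
  --file deployment.yaml
```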
Cloud Shell vNet Integration

It is possible to deploy Cloud Shell into a virtual network (vNet), allowing it to route traffic via the vNet and so access resources using the private network, Private Endpoints, or even public resources, while using a NAT Gateway or Firewall for a consistent outbound IP address. This approach uses Azure Relay to provide secure access to the vNet from Cloud Shell, without the need to open additional ports. Using Cloud Shell in this way does introduce additional cost for the Azure Relay service. Using this solution will require two different approaches, depending on whether you are using a private or public API server:

When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shell will be able to connect directly to the private IP of this service over the vNet.
When using a public API server, with a public IP, traffic for this will still leave the vNet and go to the internet. The benefit is that we can control the public IP used for the outbound traffic using a NAT Gateway or Azure Firewall. Once this is configured, we can then allow-list this fixed IP on the AKS API server authorised IP ranges.

Further Details: Use Cloud Shell in an Azure virtual network | Microsoft Learn

Azure Bastion

Azure Bastion provides secure and seamless RDP and SSH connectivity to your virtual machines (VMs) directly from the Azure portal, without exposing them to the public internet. Recently, Bastion has also added support for direct connection to AKS with SSH, rather than needing to connect to a jump box and then use kubectl from there. This greatly simplifies connecting to AKS and also reduces the cost. Using this approach, we can deploy a Bastion into the vNet hosting AKS. From Cloud Shell we can then use the following command to create a tunnel to AKS:

az aks bastion --name <aks name> --resource-group <resource group name> --bastion <bastion resource ID>

Once this tunnel is connected, we can run kubectl commands without any need for further configuration. As with Cloud Shell network integration, we take two slightly different approaches depending on whether the API server is public or private:

When using a private API server, which is either directly connected to the vNet or configured with Private Endpoints, Cloud Shells connected via Bastion will be able to connect directly to the private IP of this service over the vNet.
When using a public API server, with a public IP, traffic for this will still leave the vNet and go to the internet. As with Cloud Shell vNet integration, we can configure this to use a static outbound IP and allow-list this on the API server. Using Bastion, we can still use NAT Gateway or Azure Firewall to achieve this; however, you can also allow-list the public IP assigned to the Bastion, removing the cost for NAT Gateway or Azure Firewall if these are not required for anything else.

Connecting to AKS directly from Bastion requires the use of the Standard or Premium SKU of Bastion, which does have additional cost over the Developer or Basic SKU. This feature also requires that you enable native client support.

Further details: Connect to AKS Private Cluster Using Azure Bastion (Preview) - Azure Bastion | Microsoft Learn

Summary of Options

IP Allow Listing
The outbound IP addresses for Cloud Shell instances can be added to the authorised IP list for your API server. As these IPs are dynamically assigned to sessions, they would need to be added at runtime, to avoid adding a large list of IPs and reducing security. This can be achieved with a script. While easy to implement, this requires additional time to run the script with every new session, and increases the overhead of managing the authorised IP list to remove unused IPs.

Command Invoke
Command Invoke allows you to run commands against AKS without requiring direct network access or any setup. This is a convenient option for occasional tasks or troubleshooting, but it's not designed for regular use due to its limited user experience and flexibility.

Cloud Shell vNet Integration
This approach connects Cloud Shell directly to your virtual network, enabling secure access to AKS resources. It's well-suited for environments where Cloud Shell is the primary access method and offers a more secure and consistent experience than default configurations. It does involve additional cost for Azure Relay.

Azure Bastion
Azure Bastion provides a secure tunnel to AKS that can be used from Cloud Shell or by users running the CLI locally. It offers strong security by eliminating public exposure of the API server and supports flexible access for different user scenarios, though it does require setup and may incur additional cost.

Cloud Shell is a great tool for providing pre-configured, easily accessible CLI instances, but in the default configuration it can require some security compromises. With a little work, it is possible to make Cloud Shell work with a more secure configuration that limits how much exposure is needed for your AKS API server.

Deploying Azure ND H100 v5 Instances in AKS with NVIDIA MIG GPU Slicing
In this article we will cover:

AKS Cluster Deployment (Latest Version) – creating an AKS cluster using the latest Kubernetes version.
GPU Node Pool Provisioning – adding an ND H100 v5 node pool on Ubuntu, with --skip-gpu-driver-install to disable automatic driver installation.
NVIDIA H100 MIG Slicing Configurations – available MIG partition profiles on the H100 GPU and how to enable them.
Workload Recommendations for MIG Profiles – choosing optimal MIG slice sizes for different AI/ML and HPC scenarios.
Best Practices for MIG Management and Scheduling – managing MIG in AKS, scheduling pods, and operational tips.

AKS Cluster Deployment (Using the Latest Version)

Install/Update Azure CLI: Ensure you have Azure CLI 2.0.64+ (or Azure CLI 1.0.0b2 for preview features). This is required for using the --skip-gpu-driver-install option and other latest features. Install the AKS preview extension if needed:

```bash
az extension add --name aks-preview
az extension update --name aks-preview
```

(Preview features are opt-in; using the preview extension gives access to the latest AKS capabilities.)

Create a Resource Group: If not already done, create an Azure resource group for the cluster:

```bash
az group create -n MyResourceGroup -l eastus
```

Create the AKS Cluster: Run az aks create to create the AKS control plane. You can start with a default system node pool (e.g. a small VM for system pods) and no GPU nodes yet. For example:

```bash
az aks create -g MyResourceGroup -n MyAKSCluster \
  --node-vm-size Standard_D4s_v5 \
  --node-count 1 \
  --kubernetes-version <latest-stable-version> \
  --enable-addons monitoring
```

This creates a cluster named MyAKSCluster with one standard node. Use the --kubernetes-version flag to specify the latest AKS-supported Kubernetes version (or omit it to get the default latest). As of early 2025, AKS supports Kubernetes 1.27+; using the newest version ensures support for features like MIG and the ND H100 v5 SKU.

Retrieve Cluster Credentials: Once created, get your Kubernetes credentials:

```bash
az aks get-credentials -g MyResourceGroup -n MyAKSCluster
```

Verification: After creation, you should have a running AKS cluster. You can verify the control plane is up with:

```bash
kubectl get nodes
```

Adding an ND H100 v5 GPU Node Pool (Ubuntu + Skip Driver Install)

Next, add a GPU node pool using the ND H100 v5 VM size. The ND H100 v5 series VMs each come with 8× NVIDIA H100 80GB GPUs (640 GB total GPU memory), high-bandwidth interconnects, and 96 vCPUs – ideal for large-scale AI and HPC workloads. We will configure this node pool to run Ubuntu and skip the automatic NVIDIA driver installation, since we plan to manage drivers (and MIG settings) manually or via the NVIDIA operator.

Steps to add the GPU node pool:

Use Ubuntu Node Image: AKS supports Ubuntu 20.04/22.04 for ND H100 v5 nodes. The default AKS Linux OS (Ubuntu) is suitable. We also set --os-sku Ubuntu to ensure we use Ubuntu (if your cluster's default is Azure Linux, note that Azure Linux is not currently supported for MIG node pools).

Add the GPU Node Pool with Azure CLI: Run:

```bash
az aks nodepool add \
  --cluster-name MyAKSCluster \
  --resource-group MyResourceGroup \
  --name h100np \
  --node-vm-size Standard_ND96isr_H100_v5 \
  --node-count 1 \
  --os-type Linux \
  --os-sku Ubuntu \
  --gpu-driver none \
  --node-taints nvidia.com/gpu=true:NoSchedule
```

Let's break down these parameters:

--node-vm-size Standard_ND96isr_H100_v5 selects the ND H100 v5 VM size (96 vCPUs, 8×H100 GPUs). Ensure your subscription has quota for this SKU and region.
--node-count 1 starts with one GPU VM (scale as needed).
--gpu-driver none tells AKS not to pre-install NVIDIA drivers on the node. This prevents the default driver installation, because we plan to handle drivers ourselves (using NVIDIA's GPU Operator for better control). When using this flag, new GPU nodes come up without NVIDIA drivers until you install them manually or via an operator.
--node-taints nvidia.com/gpu=true:NoSchedule taints the GPU nodes so that regular pods won't be scheduled on them accidentally. Only pods with a matching toleration (e.g. labeled for GPU use) can run on these nodes. This is a best practice to reserve expensive GPU nodes for GPU workloads.

(Optional) You can also add labels if needed. For example, to prepare for MIG configuration with the NVIDIA operator, you might add a label like nvidia.com/mig.config=all-1g.10gb to indicate the desired MIG slicing (explained later). We will address MIG config shortly, so adding such a label now is optional.

Wait for Node Pool to be Ready: Monitor the Azure CLI output or use kubectl get nodes until the new node appears. It should register in Kubernetes (in NotReady state initially while it's configuring). Since we skipped driver install, the node will not have GPU scheduling resources yet (no nvidia.com/gpu resource visible) until we complete the next step.

Installing the NVIDIA Driver Manually (or via GPU Operator)

Because we used --skip-gpu-driver-install, the node will not have the necessary NVIDIA driver or CUDA runtime out of the box. You have two main approaches to install the driver:

Use the NVIDIA GPU Operator (Helm-based) to handle driver installation.
Install drivers manually (e.g., run a DaemonSet that downloads and installs the .run package or Debian packages).

NVIDIA GPU Operator manages drivers, the Kubernetes device plugin, and GPU monitoring components. AKS GPU node pools come with the NVIDIA drivers and container runtime already pre-installed. BUT, because we used the flag --skip-gpu-driver-install, we can now deploy the NVIDIA GPU Operator to handle GPU workloads and monitoring, while disabling its driver installation (to avoid conflicts with the pre-installed drivers). The GPU Operator will deploy the necessary components like the Kubernetes device plugin and the DCGM exporter for monitoring.

2.1 Installing via NVIDIA GPU Operator

Step 1: Add the NVIDIA Helm repository. NVIDIA provides a Helm chart for the GPU Operator. Add the official NVIDIA Helm repo and update it:

```bash
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
```

This repository contains the gpu-operator chart and other NVIDIA helm charts.

Step 2: Install the GPU Operator via Helm. Use Helm to install the GPU Operator into a dedicated namespace (e.g., gpu-operator). In AKS, disable the GPU Operator's driver and toolkit deployment (since AKS already has those), and specify the correct container runtime class for NVIDIA. For example:

```bash
helm install gpu-operator nvidia/gpu-operator \
  -n gpu-operator --create-namespace \
  --set operator.runtimeClass=nvidia-container-runtime
```

In the above command, operator.runtimeClass=nvidia-container-runtime aligns with the runtime class name configured on AKS for GPU support.

After a few minutes, Helm should report a successful deployment. For example:

```
NAME: gpu-operator
LAST DEPLOYED: Fri May 5 15:30:05 2023
NAMESPACE: gpu-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

You can verify that the GPU Operator's pods are running in the cluster.
The Operator will deploy several DaemonSets including the NVIDIA device plugin, DCGM exporter, and others. For example, after installation you should see pods like the following in the gpu-operator namespace:

```
nvidia-dcgm-exporter-xxxxx                1/1   Running   0   60s
nvidia-device-plugin-daemonset-xxxxx      1/1   Running   0   60s
nvidia-mig-manager-xxxxx                  1/1   Running   0   4m
nvidia-driver-daemonset-xxxxx             1/1   Running   0   4m
gpu-operator-node-feature-discovery-...   1/1   Running   0   5m
... (other GPU operator pods) ...
```

Here we see the NVIDIA device plugin and NVIDIA DCGM exporter pods running on each GPU node, as well as other components. (Note: In our AKS setup, the nvidia-driver-daemonset may be present but left idle since we disabled driver management.)

Step 3: Confirm the operator's GPU validation. The GPU Operator will run a CUDA validation job to verify everything is working. Check that the CUDA validation pod has completed successfully:

```bash
kubectl get pods -n gpu-operator -l app=nvidia-cuda-validator
```

Expected output:

```
NAME                          READY   STATUS      RESTARTS   AGE
nvidia-cuda-validator-bpvkt   0/1     Completed   0          3m56s
```

A Completed CUDA validator indicates the GPUs are accessible and the NVIDIA stack is functioning. At this point, you have the NVIDIA GPU Operator (with device plugin and DCGM exporter) installed via Helm on AKS.

Verifying MIG on H100 with Node Pool Provisioning

Once the driver is installed and the NVIDIA device plugin is running, you can verify MIG. The process is similar to verifying MIG on A100, but the resource naming and GPU partitioning reflect H100 capabilities.

Check Node Resources

```bash
kubectl describe node <h100-node-name>
```

If you chose the single MIG strategy, you might see:

```
Allocatable:
  nvidia.com/gpu: 56
```

for a node with 8 H100s × 7 MIG slices each = 56, or nvidia.com/gpu: 14 if you used MIG2g (which yields 2–3 slices per GPU, depending on the exact profile). If you chose the mixed MIG strategy (mig.strategy=mixed), you'll see something like:

```
Allocatable:
  nvidia.com/mig-1g.10gb: 56
```

or whichever MIG slice name is appropriate (e.g., mig-3g.40gb for MIG3g).

Confirm MIG in nvidia-smi

```bash
nvidia-smi -L
```

Run a GPU Workload

For instance, run a quick CUDA container:

```bash
kubectl run mig-test --rm -ti \
  --image=nvidia/cuda:12.1.1-runtime-ubuntu22.04 \
  --limits="nvidia.com/gpu=1" \
  -- bash
```

Inside the container, nvidia-smi should confirm you have a MIG device. Then any CUDA commands (e.g., deviceQuery) should pass, indicating MIG is active and the driver is working.

MIG Management on H100

The H100 supports several MIG profiles – predefined ways to slice the GPU. Each profile is denoted by <N>g.<M>gb, meaning it uses N GPU compute slices (out of 7) and M GB of memory. Key H100 80GB MIG profiles include:

MIG 1g.10gb: Each instance has 1/7 of the SMs and 10 GB memory (1/8 of VRAM). This yields 7 instances per GPU (7 × 10 GB = 70 GB out of 80, a small portion is reserved). This is the smallest slice size and maximizes the number of instances (useful for many lightweight tasks).
MIG 1g.20gb: Each instance has 1/7 of SMs but 20 GB memory (1/4 of VRAM), allowing up to 4 instances per GPU. This profile gives each instance more memory while still only a single compute slice – useful for memory-intensive workloads that don't need much compute.
MIG 2g.20gb: Each instance gets 2/7 of SMs and 20 GB memory (2/8 of VRAM). 3 instances can run on one GPU. This offers a balance: more compute per instance than 1g, with a moderate 20 GB memory each.
MIG 3g.40gb: Each instance has 3/7 of SMs and 40 GB memory (half the VRAM). Two instances fit on one H100. This effectively splits the GPU in half.
MIG 4g.40gb: Each instance uses 4/7 of SMs and 40 GB memory. Only one such instance can exist per GPU (because it uses half the memory and more than half of the SMs). In practice, a 4g.40gb profile might be combined with a smaller profile on the same GPU (e.g., a 4g.40gb + a 3g.40gb could occupy one GPU, totaling 7/7 SM and 80GB). However, AKS node pools use a single uniform profile per GPU, so you typically wouldn't mix profiles on the same GPU in AKS.
MIG 7g.80gb: This profile uses the entire GPU (all 7/7 SMs and 80 GB memory). Essentially, MIG 7g.80gb is the full GPU as one instance (no slicing). It's equivalent to not using MIG at all for that GPU.

These profiles illustrate the flexibility: you can trade off number of instances vs. the power of each instance. For example, MIG 1g.10gb gives you seven small GPUs, whereas MIG 3g.40gb gives you two much larger slices (each roughly half of an H100). All MIG instances are hardware-isolated, meaning each instance's performance is independent (one instance can't starve others of GPU resources).

Enabling MIG in AKS: There are two main ways to configure MIG on the AKS node pool:

At Node Pool Creation (Static MIG Profile): Azure allows specifying a GPU instance profile when creating the node pool. For example, adding --gpu-instance-profile MIG1g to the az aks nodepool add command would provision each H100 GPU in 1g mode (e.g., 7×10GB instances per GPU). Supported profile names for H100 include MIG1g, MIG2g, MIG3g, MIG4g, and MIG7g (the same profile names used for A100, but on H100 they correspond to the sizes above). Important: Once set, the MIG profile on a node pool cannot be changed without recreating the node pool. If you chose MIG1g, all GPUs in that node pool will be partitioned into 7 slices each, and you can't later switch those nodes to a different profile on the fly.

Dynamically via NVIDIA GPU Operator: If you skipped the driver install (as we did) and are using the GPU Operator, you can let the operator manage MIG. This involves labeling the node with a desired MIG layout. For example, nvidia.com/mig.config=all-1g.10gb means "partition all GPUs into 1g.10gb slices." The operator's MIG Manager will then enable MIG mode on the GPUs, create the specified MIG instances, and mark the node ready when done. This approach offers flexibility – you could theoretically adjust the MIG profile by changing the label and letting the operator reconfigure (though it will drain and reboot the node to apply changes). The operator adds a taint like mig-nvidia.io/device-config=pending (or similar) during reconfiguration to prevent scheduling pods too early.

For our deployment, we opted to skip Azure's automatic MIG config and use the NVIDIA operator. If you followed the steps in section 2 and set the nvidia.com/mig.config label before node creation, the node on first boot will come up, install drivers, then partition into the specified MIG profile. If not, you can label the node now and the operator will configure MIG accordingly. For example:

```bash
kubectl label node <node-name> nvidia.com/mig.config=all-3g.40gb --overwrite
```

to split each GPU into two 3g.40gb instances. The operator will detect this and partition the GPUs (the node may briefly go NotReady while MIG is being set up).
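For completeness, if you prefer the static approach described above (setting the profile at node pool creation rather than via the operator), the command is the same nodepool add used earlier with the profile flag added. A sketch reusing the cluster names from this article:

```bash
# Each H100 in this pool is partitioned into 7x 1g.10gb instances at creation time.
# The profile cannot be changed later without recreating the node pool.
az aks nodepool add \
  --cluster-name MyAKSCluster \
  --resource-group MyResourceGroup \
  --name h100mig \
  --node-vm-size Standard_ND96isr_H100_v5 \
  --node-count 1 \
  --os-sku Ubuntu \
  --gpu-instance-profile MIG1g \
  --node-taints nvidia.com/gpu=true:NoSchedule
```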
Depending on the MIG strategy (see the single vs. mixed strategy discussion in Appendix B), you will either see a larger number of generic nvidia.com/gpu resources or specifically named resources like nvidia.com/mig-3g.40gb. We will discuss how to schedule workloads onto these MIG instances later.

Important Considerations:

Workload Interruption: Applying a new MIG configuration can disrupt running GPU workloads. It's advisable to drain the node or ensure that no critical workloads are running during the reconfiguration process.

Node Reboot: Depending on the environment and GPU model, enabling or modifying MIG configurations might require a node reboot. Ensure that your system is prepared for potential reboots to prevent unexpected downtime.

Workload Recommendations for MIG Profiles (AI/ML vs. HPC)

Different MIG slicing configurations are suited to different types of workloads. Here are recommendations for AI/ML and HPC scenarios:

Full GPU (MIG 7g.80gb or MIG disabled) – Best for the largest and most intensive tasks. If you are training large deep learning models (e.g., GPT-style models or complex computer vision training) or running HPC simulations that fully utilize a GPU, use the entire H100 GPU. The ND H100 v5 is designed to excel at these demanding workloads. In Kubernetes, you would simply schedule pods that request a whole GPU. (If MIG mode is enabled with the 7g.80gb profile, each GPU is one resource unit.) This ensures maximum performance for jobs that can use 80 GB of GPU memory and all compute units. HPC workloads like physics simulations, CFD, and weather modeling typically fall here – they are optimized to use full GPUs or even multiple GPUs in parallel, so slicing a GPU could impede their performance unless you explicitly want to run multiple smaller HPC jobs on one card.

Large MIG Partitions (3g.40gb or 4g.40gb) – Good for moderately large models or jobs that don't quite need a full H100. For instance, you can split an H100 into 2× 3g.40gb instances, each with 40 GB of VRAM and roughly 43% of the H100's compute. This configuration is popular for AI model serving and inference where a full H100 might be underutilized. Two MIG 3g.40gb instances on an H100 can often serve models with performance equal to or better than two full A100 GPUs, at a lower cost. Each 3g.40gb slice is roughly equivalent to an A100 40GB in capability, and it also unlocks H100-specific features (like FP8 precision for inference). Use cases:

Serving two large ML models concurrently (each model up to 40 GB in size, such as certain GPT-XXL or vision models). Each model gets a dedicated MIG slice.

Running two medium-sized training jobs on one physical GPU. For example, two separate experiments that each need ~40 GB of GPU memory can run in parallel, each on a MIG 3g.40gb. This can increase throughput for hyperparameter tuning or multi-user environments.

HPC batch jobs: if you have HPC tasks that can fit in half a GPU (perhaps memory-bound tasks, or jobs that only need ~50% of the GPU's FLOPs), using two 3g.40gb instances allows two jobs to run on one GPU server concurrently with minimal interference.

MIG 4g.40gb (one 40 GB instance using ~57% of the compute) is a less common choice by itself – since only one 4g instance can exist per GPU, it leaves some GPU capacity unused (the remaining 3/7 of the SMs would be idle). It might be used in a mixed-profile scenario (one 4g + one 3g on the same GPU) if manually configured. In AKS (which uses uniform profiles per node pool), you'd typically prefer 3g.40gb if you want two equal halves, or just use full GPUs. So in practice, stick with 3g.40gb for a clean two-way split on H100.

Medium MIG Partitions (2g.20gb) – Good for multiple medium workloads. This profile yields 3 instances per GPU, each with 20 GB of memory and about 28.6% of the compute. It is useful when you have several smaller training jobs or medium-sized inference tasks running concurrently. Examples:

Serving three different ML models (each ~15–20 GB in size) from one H100 node, each model on its own MIG 2g.20gb instance.

Running 3 parallel training jobs for smaller models or prototyping (each job can use 20 GB of GPU memory). For instance, three data scientists can share one H100 GPU server, each getting what is effectively a "20GB GPU". Each 2g.20gb MIG slice should outperform a V100 (16 GB) in both memory and compute, so this is still a hefty slice for many models.

In an HPC context, if you have many lighter GPU-accelerated tasks (for example, three independent tasks that each use ~1/3 of a GPU), this profile lets them share a node efficiently.

Small MIG Partitions (1g.10gb) – Ideal for high-density inference and lightweight workloads. This profile creates 7 instances per GPU, each with 10 GB of VRAM and 1/7 of the compute. It's perfect for AI inference microservices, model ensembles, or multi-tenant GPU environments:

Deploying many small models or instances of a model. For example, you could host seven different AI services (each requiring <10 GB of GPU memory) on one physical H100, each in its own isolated MIG slice. Most cloud providers use this to offer "fractional GPUs" to customers – e.g., a user could rent a 1g.10gb slice instead of the whole GPU.

Running interactive workloads like Jupyter notebooks or development environments for multiple users on one GPU server. Each user can be assigned a MIG 1g.10gb slice for testing small-scale models or doing data science work, without affecting others.

Inference tasks that are memory-light but require GPU acceleration – e.g., running many inference requests in parallel across MIG slices (each slice still has ample compute for model-scoring tasks, and 10 GB is enough for many models like smaller CNNs or transformers).

Keep in mind that 1g.10gb slices have the lowest compute per instance, so they suit workloads that individually don't need the full throughput of an H100. They shine when throughput is achieved by running many in parallel.

1g.20gb profile – This one is a bit niche: 4 slices per GPU, each with 20 GB but only 1/7 of the SMs. You might use it if each task needs a large model (20 GB) but isn't compute-intensive. An example could be running four instances of a large language model in inference mode, where each instance is constrained by memory (loading a 15–18 GB model) but you deliberately limit its compute share to run more concurrently. In practice, the 2g.20gb profile (which gives the same memory per instance and more compute) is usually preferable if you can utilize the extra SMs. So 1g.20gb only makes sense if you truly have compute-light, memory-heavy workloads, or if you need exactly four isolated instances on one GPU.

HPC Workloads Consideration: Traditional HPC jobs (MPI applications, scientific computing) typically either use an entire GPU or none. MIG can be useful in HPC for capacity planning – e.g., running multiple smaller GPU-accelerated jobs simultaneously if they don't all require a full H100.
But it introduces complexity, as the HPC scheduler must be aware of fractional GPUs. Many HPC scenarios might instead use whole GPUs per job for simplicity. That said, for HPC inference or analytics (like running multiple inference tasks on simulation output), MIG slicing can improve utilization. If jobs are latency-sensitive, MIG's isolation ensures one job doesn't impact another, which is beneficial for multi-tenant HPC clusters (for example, different teams sharing a GPU node).

In summary, choose the smallest MIG slice that still meets your workload's requirements. This maximizes overall GPU utilization and cost-efficiency by packing more tasks onto the hardware. Use larger slices or full GPUs only when a job truly needs the extra memory and compute. It's often a good strategy to create multiple GPU node pools with different MIG profiles tailored to different workload types (e.g., one pool of full GPUs for training and one pool of 1g or 2g MIG GPUs for inference).

Appendix A: MIG Management via AKS Node Pool Provisioning (without GPU Operator MIG profiles)

Multi-Instance GPU (MIG) allows partitioning an NVIDIA A100 (and newer) GPU into multiple instances. AKS supports MIG for compatible GPU VM sizes (such as the ND A100 v4 and ND H100 v5 series), but MIG must be configured when provisioning the node pool – it cannot be changed on the fly in AKS. In this section, we show how to create a MIG-enabled node pool and integrate it with Kubernetes scheduling. We will not use the GPU Operator's dynamic MIG reconfiguration; instead, we set MIG at node pool creation time (the only option when you are not using the GPU Operator to manage MIG).

Step 1: Provision an AKS node pool with a MIG profile. Choose a MIG-capable VM size (for example, Standard_ND96isr_H100_v5, which has 8 H100 GPUs) and use the Azure CLI to create a new node pool, specifying --gpu-instance-profile:

az aks nodepool add \
  --resource-group <myResourceGroup> \
  --cluster-name <myAKSCluster> \
  --name migpool \
  --node-vm-size Standard_ND96isr_H100_v5 \
  --node-count 1 \
  --gpu-instance-profile MIG1g

In this example, we create a node pool named "migpool" with MIG profile MIG1g (each physical H100 GPU is split into 7 instances of 1g.10gb each). Important: You cannot change the MIG profile after the node pool is created. If you need a different MIG configuration (e.g., MIG2g or MIG3g), you must create a new node pool with the desired profile.

Note: MIG is only supported on Ubuntu-based AKS node pools (not on Azure Linux nodes), and currently the AKS cluster autoscaler does not support scaling MIG-enabled node pools. Plan capacity accordingly, since MIG node pools can't auto-scale.

Appendix B: Key Points and Best Practices

No On-the-Fly Profile Changes: With AKS, once a node pool is created with --gpu-instance-profile MIGxg, you cannot switch to a different MIG layout on that same node pool. If you need a new MIG profile, create a new node pool.

--skip-gpu-driver-install: This is typically used if you need a specific driver version, or if you want the GPU Operator to manage drivers (instead of the in-box AKS driver). Make sure your driver is installed before you schedule GPU workloads; if the driver is missing, pods that request GPU resources will fail to initialize.

Driver Versions for H100: H100 requires driver branch R525 or newer (and CUDA 12+). Verify that the GPU Operator or your manual install uses a driver that supports H100, and MIG on H100 specifically.
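One quick way to confirm which driver branch a node actually received is to reuse the throwaway CUDA pod pattern from the verification section (a sketch that assumes the single MIG strategy, so the pod requests a generic nvidia.com/gpu; with the mixed strategy you would request the specific MIG resource name instead):

# Run nvidia-smi once in a short-lived CUDA container on a GPU node
kubectl run driver-check --rm -ti \
  --image=nvidia/cuda:12.1.1-runtime-ubuntu22.04 \
  --limits="nvidia.com/gpu=1" \
  -- nvidia-smi --query-gpu=driver_version --format=csv,noheader

The reported version should come from the R525 branch or newer before you schedule MIG workloads on H100.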
Single vs. Mixed Strategy: Single strategy lumps all MIG slices together as nvidia.com/gpu. This is simpler for uniform MIG node pools. Mixed strategy exposes resources like nvidia.com/mig-1g.10gb; use it if you need explicit scheduling by MIG slice type. Configure this in the GPU Operator's Helm values (e.g., --set mig.strategy=single or --set mig.strategy=mixed). If the Operator's MIG Manager is disabled, it won't attempt to reconfigure MIG, but it will still let the device plugin report the slices in single or mixed mode.

Resource Requests and Scheduling: If using the single strategy, a pod that requests nvidia.com/gpu: 1 will be allocated a single 1g.10gb MIG slice on H100. If using mixed, that same request must specifically match the MIG resource name (e.g., nvidia.com/mig-1g.10gb: 1). If your pod requests nvidia.com/gpu: 1 but the node only advertises nvidia.com/mig-1g.10gb, scheduling won't match, so be consistent in your pod specs.

Cluster Autoscaler: Currently, MIG-enabled node pools have limited or no autoscaler support on AKS (the cluster autoscaler does not fully account for MIG resources). Scale these node pools manually or via custom logic. If you rely heavily on auto-scaling, consider using a standard GPU node pool (no MIG), or carefully plan capacity to avoid needing dynamic scaling for MIG pools.

Monitoring: The GPU Operator deploys the DCGM exporter by default, which can collect MIG-specific metrics. Integrate it with Prometheus and Grafana for GPU usage dashboards. MIG slices are typically identified by unique device IDs in DCGM, so you can see which MIG slices are busier than others, their memory usage, and so on.

Node Image Upgrades: Because you're skipping the driver install from AKS, ensure you keep your GPU driver DaemonSet or Operator up to date. If you do a node image upgrade (AKS version upgrade), the OS might change, requiring a recompile or a matching driver version. The GPU Operator normally handles this seamlessly by re-installing the driver on the new node image. Test your upgrades in a staging cluster if possible, especially with new AKS releases or driver versions.

Handling Multiple Node Pools: Many users create one node pool with full GPUs (no MIG) for large jobs, and another MIG-enabled node pool for smaller parallel workloads. You can do so easily by repeating the steps above for each node pool, specifying different MIG profiles.

References
MIG User Guide
NVIDIA GPU Operator with Azure Kubernetes Service
ND-H100-v5 sizes series
Create a multi-instance GPU node pool in Azure Kubernetes Service (AKS)
2.2KViews2likes0Comments
Monitor OpenAI Agents SDK with Application Insights
As AI agents become more prevalent in applications, monitoring their behavior and performance becomes crucial. In this blog post, we'll explore how to monitor the OpenAI Agents SDK using Azure Application Insights through OpenTelemetry integration.

Enhancing OpenAI Agents with OpenTelemetry

The OpenAI Agents SDK provides powerful capabilities for building agent-based applications. By default, the SDK doesn't emit OpenTelemetry data, as noted in GitHub issue #18. This presents an opportunity to extend the SDK's functionality with robust observability features. Adding OpenTelemetry integration enables you to:

Track agent interactions across distributed systems
Monitor performance metrics in production
Gain insights into agent behavior
Seamlessly integrate with existing observability platforms

Fortunately, the Pydantic Logfire SDK has implemented an OpenTelemetry instrumentation wrapper for OpenAI Agents. This wrapper allows us to capture telemetry data and propagate it to an OpenTelemetry Collector endpoint.

How It Works

The integration works by wrapping the OpenAI Agents tracing provider with a Logfire-compatible wrapper that generates OpenTelemetry spans for various agent activities:

Agent runs
Function calls
Chat completions
Handoffs between agents
Guardrail evaluations

Each of these activities is captured as a span with relevant attributes that provide context about the operation.

Implementation Example

Here's how to set up the Logfire instrumentation in your application:

import os

import logfire
from openai import AsyncAzureOpenAI
from agents import set_default_openai_client, set_tracing_disabled

# Configure your OpenAI client
azure_openai_client = AsyncAzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT")
)

# Set as default client and enable tracing
set_default_openai_client(azure_openai_client)
set_tracing_disabled(False)

# Configure OpenTelemetry endpoint
os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = "http://0.0.0.0:4318/v1/traces"

# Configure Logfire
logfire.configure(
    service_name='my-agent-service',
    send_to_logfire=False,
    distributed_tracing=True
)

# Instrument OpenAI Agents
logfire.instrument_openai_agents()

Note: The send_to_logfire=False parameter ensures that data is only sent to your OpenTelemetry collector, not to Logfire's cloud service.

Environment Variables: The OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable tells the Logfire SDK where to send the OpenTelemetry traces. If you're using Azure Container Apps with the built-in OpenTelemetry collector, this variable will be set automatically for you. Similarly, when using AKS with auto-instrumentation enabled via the OpenTelemetry Operator, this environment variable is automatically injected into your pods. For other environments, you'll need to set it manually as shown in the example above.

Setting Up the OpenTelemetry Collector

To collect and forward the telemetry data to Application Insights, we need to set up an OpenTelemetry Collector. There are two ways to do this:

Option 1: Run the Collector Locally

Find the right OpenTelemetry Contrib release for your processor architecture at: https://github.com/open-telemetry/opentelemetry-collector-releases/releases/tag/v0.121.0 Only the Contrib releases include the Azure Monitor exporter.
./otelcol-contrib --config=otel-collector-config.yaml

Option 2: Run the Collector in Docker

docker run --rm \
  -v $(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml \
  -p 4318:4318 \
  -p 55679:55679 \
  otel/opentelemetry-collector-contrib:latest

Collector Configuration

Here's a basic configuration for the OpenTelemetry Collector that forwards data to Azure Application Insights:

receivers:
  otlp:
    protocols:
      http:
        endpoint: "0.0.0.0:4318"

exporters:
  azuremonitor:
    connection_string: "InstrumentationKey=your-instrumentation-key;IngestionEndpoint=https://your-region.in.applicationinsights.azure.com/"
    maxbatchsize: 100
    maxbatchinterval: 10s
  debug:
    verbosity: basic

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [azuremonitor, debug]

Important: Replace connection_string with your actual Application Insights connection string.

What You Can Monitor

With this setup, you can monitor various aspects of your OpenAI Agents in Application Insights:

Agent Performance: Track how long each agent takes to process requests
Model Usage: Monitor which AI models are being used and their response times
Function Calls: See which tools/functions are being called by agents
Handoffs: Track when agents hand off tasks to other specialized agents
Errors: Identify and diagnose failures in agent processing
End-to-End Traces: Follow user requests through your entire system

Example Trace Visualisation

In Application Insights, you can visualise the traces as a hierarchical timeline, showing the flow of operations.

Known Issue: Span Name Display in Application Insights

When using LogFire SDK 3.8.1 with Application Insights, you might notice that span names appear as raw message templates (with unformatted placeholders) instead of showing the actual agent or model names. This makes it harder to identify specific spans in the Application Insights UI.

Issue: In the current implementation of the LogFire SDK's OpenAI Agents integration (source code), the message template is used as the span's name, so spans are displayed with placeholders like {name!r} or {gen_ai.request.model!r} instead of actual values.

Temporary Fix

Until the LogFire SDK introduces a fix, you can modify the logfire/_internal/integrations/openai_agents.py file to format the span names properly. After pip install logfire, the file will usually be at venv/lib/python3.11/site-packages/logfire/_internal/integrations/openai_agents.py. Replace the span creation code around line 100:

Original code:

logfire_span = self.logfire_instance.span(
    msg_template,
    **attributes_from_span_data(span_data, msg_template),
    **extra_attributes,
    _tags=['LLM'] * isinstance(span_data, GenerationSpanData),
)

Modified code (formats the message and sets it as the span name):

attributes = attributes_from_span_data(span_data, msg_template)
message = logfire_format(msg_template, dict(attributes or {}), NOOP_SCRUBBER)
logfire_span = self.logfire_instance.span(
    msg_template,
    _span_name=message,
    **attributes,
    **extra_attributes,
    _tags=['LLM'] * isinstance(span_data, GenerationSpanData),
)

This change formats the message template with actual values and sets it as the span name, making it much easier to identify spans in the Application Insights UI. After applying this fix, your spans will display meaningful names like "Chat completion with 'gpt-4o'" instead of "Chat completion with {gen_ai.request.model!r}".

Limitation: Even after applying this fix, HandOff spans will still not show the correct to_agent field in the span name.
This occurs because the to_agent field is not set during initial span creation but later, in the on_ending method of the LogfireSpanWrapper class:

@dataclass
class LogfireSpanWrapper(LogfireWrapperBase[Span[TSpanData]], Span[TSpanData]):
    # ...

    def on_ending(self):
        # This is where to_agent gets updated, but too late for the span name
        # ...

Until the LogFire SDK optimizes this behavior, you can still see the correct HandOff values by clicking on the span and looking at the logfire.msg property. For example, you'll see "Handoff: Customer Service Agent -> Investment Specialist" in the message property even if the span name doesn't show it correctly.

Auto-Instrumentation for AKS

Azure Kubernetes Service (AKS) offers a codeless way to enable OpenTelemetry instrumentation for your applications. This approach simplifies the setup process and ensures that your OpenAI Agents can send telemetry data without requiring manual instrumentation.

How to Enable Auto-Instrumentation

To enable auto-instrumentation for Python applications in AKS, you can add an annotation to your pod specification:

annotations:
  instrumentation.opentelemetry.io/inject-python: 'true'

This annotation tells the OpenTelemetry Operator to inject the necessary instrumentation into your Python application. For more details, refer to the following resources:

Microsoft Learn: Codeless application monitoring for Kubernetes
OpenTelemetry Docs: Automatic Instrumentation for Kubernetes

Built-in Managed OpenTelemetry Collector in Azure Container Apps

Azure Container Apps provides a built-in Managed OpenTelemetry Collector that simplifies the process of collecting and forwarding telemetry data to Application Insights. This eliminates the need to deploy and manage your own collector instance.

Setting Up the Managed Collector

When you enable the built-in collector, Azure Container Apps automatically sets the OTEL_EXPORTER_OTLP_ENDPOINT environment variable for your applications. This allows the Logfire SDK to send traces to the collector without any additional configuration. Here's an example of enabling the collector in an ARM template:

{
  "type": "Microsoft.App/containerApps",
  "properties": {
    "configuration": {
      "dapr": {},
      "ingress": {},
      "observability": {
        "applicationInsightsConnection": {
          "connectionString": "InstrumentationKey=your-instrumentation-key"
        }
      }
    }
  }
}

For more information, check out these resources:

Microsoft Learn: OpenTelemetry agents in Azure Container Apps
Tech Community: How to monitor applications by using OpenTelemetry on Azure Container Apps

Conclusion

Monitoring OpenAI Agents with Application Insights provides valuable insights into your AI systems' performance and behavior. By leveraging the Pydantic Logfire SDK's OpenTelemetry instrumentation and the OpenTelemetry Collector, you can gain visibility into your agents' operations and ensure they're functioning as expected. This approach allows you to integrate AI agent monitoring into your existing observability stack, making it easier to maintain and troubleshoot complex AI systems in production environments.

Resources

Implementation can be found at https://github.com/hieumoscow/azure-openai-agents

References:
OpenAI Agents Python SDK
GitHub Issue: OpenAI Agents Logging
OpenTelemetry Collector Documentation
Azure Application Insights Documentation
Codeless application monitoring for Kubernetes
OpenTelemetry Automatic Instrumentation for Kubernetes
OpenTelemetry agents in Azure Container Apps
How to monitor applications using OpenTelemetry on Azure Container Apps
3.9KViews1like5Comments