Azure Monitor managed service for Prometheus
Enhance your Azure visualizations using Azure Monitor dashboards with Grafana
In line with our commitment to open-source solutions, we are announcing the public preview of Azure Monitor dashboards with Grafana. This service offers a powerful solution for cloud-native monitoring and visualization of Prometheus metrics. Dashboards with Grafana let you create and edit Grafana dashboards directly in the Azure portal at no additional cost and with less administrative overhead than self-hosting Grafana or using a managed Grafana service.

Start quickly with pre-built and community dashboards
Pre-built Grafana dashboards for Azure Kubernetes Service, Azure Monitor, and dozens of other Azure resources are included and enabled by default. You can also import dashboards from thousands of publicly available Grafana community and open-source dashboards for the Prometheus and Azure Monitor data sources. Built-in Grafana controls and capabilities let you apply a wide range of visualization panels and client-side transformations to Azure monitoring data to create custom dashboards.

Flexibility with open-source dashboards
Azure Monitor dashboards with Grafana are fully compatible with open-source Grafana dashboards and are portable across Grafana instances regardless of where they are hosted. Teams that author Grafana dashboards can share and re-use them across Azure Monitor dashboards with Grafana, Azure Managed Grafana, self-hosted Grafana, and more.

Manage dashboards with Azure Role Based Access Control (RBAC)
Dashboards with Grafana are native Azure resources that support Azure RBAC for assigning permissions, as well as automation via ARM and Bicep templates. The ability to view a dashboard and its data depends solely on the user's permissions on both the dashboard and the underlying data source being viewed.

Supported Scenarios
Dashboards with Grafana include Azure data sources only, starting with Azure Monitor metrics, logs, traces, alerts, Azure Resource Graph, and Azure Monitor managed service for Prometheus metrics. Support for Grafana Explore and exemplars will be available in future releases.

When to use Azure Managed Grafana?
If you store your telemetry data in Azure, Dashboards with Grafana in the Azure portal is a great way to get started with Grafana. If you have additional data sources or need full enterprise capabilities in Grafana, you can upgrade to Azure Managed Grafana, a fully managed hosted service for the Grafana Enterprise software. See a detailed solution comparison of Dashboards with Grafana and Azure Managed Grafana here.

Get started with Azure Monitor dashboards with Grafana today.
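To illustrate the Azure RBAC management described above, here is a minimal Azure CLI sketch for granting a user read access to a dashboard resource. The role name and the placeholder resource ID are assumptions for illustration only; check the documentation linked above for the exact built-in roles and resource type used by dashboards with Grafana.

    # Minimal sketch, assuming the "Grafana Viewer" built-in role applies to this
    # resource type and using a placeholder dashboard resource ID (both assumptions).
    az role assignment create \
      --assignee "user@contoso.com" \
      --role "Grafana Viewer" \
      --scope "<dashboard-resource-id>"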
GA: Managed Prometheus visualizations in Azure Monitor for AKS — unified insights at your fingertips

We're thrilled to announce the general availability (GA) of Managed Prometheus visualizations in Azure Monitor for AKS, along with an enhanced, unified AKS Monitoring experience.

Troubleshooting Kubernetes clusters is often time-consuming and complex, whether you're diagnosing failures, scaling issues, or performance bottlenecks. This redesign of the existing Insights experience brings all your key monitoring data into a single, streamlined view, reducing the time and effort it takes to diagnose, triage, and resolve problems so you can keep your applications running smoothly with less manual work. By using Managed Prometheus, customers can also realize up to 80% savings on metrics costs and benefit from up to 90% faster blade load performance, delivering a powerful and cost-efficient way to monitor and optimize your AKS environment.

What's New in GA
Since the preview release, we've added several capabilities:
- Control plane metrics: Gain visibility into critical components like the API server and ETCD database, essential for diagnosing cluster-level performance bottlenecks.
- Load balancer chart deep links: Jump directly into the networking drilldown view to troubleshoot failed connections and SNAT port issues more efficiently.
- Improved at-scale cluster view: Get a faster, more comprehensive overview across all your AKS clusters, making multi-cluster monitoring easier.

Simplified Troubleshooting, End to End
The enhanced AKS Monitoring experience provides both a basic (free) tier and an upgraded experience with Prometheus metrics and logging, all within a unified, single-pane-of-glass dashboard. Here's how it helps you troubleshoot faster:
- Identify failing components immediately: With new KPI cards for pod and node status, you can quickly spot pending or failed pods, high CPU/memory usage, or saturation issues, decreasing diagnosis time.
- Monitor and manage cluster scaling smoothly: The events summary card surfaces Kubernetes warnings and pending pod states, helping you respond to scale-related disruptions before they impact production.
- Pinpoint root causes of latency and connectivity problems: Detailed node saturation metrics, plus control plane and load balancer insights, make it easier to isolate where slowdowns or failures are occurring, whether at the node, cluster, or network layer.

Free vs. Upgraded Metrics Overview
Here's a quick comparison of what's included by default versus what you get with the enhanced experience:

Basic tier metrics:
- Alert summary card
- Events summary card
- Pod status KPI card
- Node status KPI card
- Node CPU and memory %
- VMSS OS disk bandwidth consumed % (max)
- VMSS OS disk IOPS consumed % (max)

Additional metrics in the upgraded experience:
- Historical Kubernetes events (30 days)
- Warning events by reason
- Namespace CPU and memory %
- Container logs by volume
- Top five controllers by logs volume
- Packets dropped I/O
- Load balancer SNAT port usage
- API server CPU % (max) (preview)
- API server memory % (max) (preview)
- ETCD database usage % (max) (preview)

See What Customers Are Saying
Early adopters have already seen meaningful improvements:
"Azure Monitor managed Prometheus visualizations for Container Insights has been a game-changer for our team. Offloading the burden of self-hosting and maintaining our own Prometheus infrastructure has significantly reduced our operational overhead. With the managed add-on, we get the powerful insights and metrics we need without worrying about scalability, upgrades, or reliability.
It seamlessly integrates into our existing Azure environment, giving us out-of-the-box visibility into our container workloads. This solution allows our engineers to focus more on building and delivering features, rather than managing monitoring infrastructure." – S500 customer in health care industry

Get Started Today
We're committed to helping you optimize and manage your AKS clusters with confidence. Visit the Azure portal and explore the new AKS Monitoring experience today!

Learn more: https://aka.ms/azmon-prometheus-visualizations
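If you prefer the command line to the portal, the following sketch shows one way to enable Managed Prometheus metrics collection on an existing AKS cluster so the upgraded experience lights up; the cluster, resource group, and workspace IDs are placeholders.

    # Sketch: enable the Azure Monitor metrics (Managed Prometheus) add-on on an
    # existing AKS cluster and link it to an Azure Monitor workspace. Placeholders only.
    az aks update \
      --name myAKSCluster \
      --resource-group myResourceGroup \
      --enable-azure-monitor-metrics \
      --azure-monitor-workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Monitor/accounts/<amw-name>"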
Public Preview: Metrics usage insights for Azure Monitor Workspace

As organizations expand their services and applications, reliability and high availability are a top priority to ensure they provide a high level of quality to their customers. As the complexity of these services and applications grows, organizations continue to collect more telemetry to ensure higher observability. However, many are facing a common challenge: increasing costs driven by the ever-growing volume of telemetry data.

Over time, as products grow and evolve, not all telemetry remains valuable. In fact, over-instrumentation can create unnecessary noise, generating data that contributes to higher costs without delivering actionable insights. At a time when every team is being asked to do more with less, identifying which telemetry streams truly matter has become essential.

To address this need, we are announcing the public preview of metrics usage insights, a feature currently designed for Azure Managed Prometheus users that analyzes all metrics ingested into an Azure Monitor workspace (AMW), surfacing actionable insights to optimize your observability setup.

Metrics usage insights is built to give teams the visibility and tools they need to manage observability costs effectively. It empowers customers to pinpoint metrics that align with their business objectives, uncover areas of unnecessary spend by identifying unused metrics, and sustain a streamlined, cost-effective monitoring approach.

Metrics usage insights sends usage data to a Log Analytics workspace (LAW) for analysis. This is a free offering, and there is no charge associated with the data sent to the Log Analytics workspace, its storage, or queries. Customers are guided to enable the feature as part of the standard out-of-the-box experience during new AMW resource creation. For existing AMWs, this can be configured using diagnostic settings.

Key Features
1. Understanding Limits and Quotas for Effective Resource Management
Monitoring limits and quotas is crucial for system performance and resource optimization. Tracking usage aids in efficient scaling and cost avoidance. Metrics usage insights provides tools to monitor thresholds, resolve throttling, and ensure cost-effective operations without the need to create support incidents.

2. Workspace Exploration
This experience lets customers explore their AMW data and gain insights. It provides a detailed analysis of data points and samples ingested for billing, at both the metric and workspace levels. Customers can evaluate individual metrics by examining their quantity, ingestion volume, and financial impact.

3. Identifying and Removing Unused Metrics
The metrics usage insights feature helps identify underutilized metrics that are being ingested but not used through dashboards, monitors, or API calls. Users facing high storage and ingestion costs can use this feature to delete unused metrics, optimize high-cost metrics, and reclaim capacity.

Enable metrics usage insights
To enable metrics usage insights, you create a diagnostic setting, which instructs the AMW to send data supporting the insights queries and workbooks to a Log Analytics workspace (LAW). You'll be prompted to enable it automatically when you create a new Azure Monitor workspace. You can enable it later for an existing Azure Monitor workspace.

Read More
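For an existing workspace, the diagnostic setting can also be created from the command line. The sketch below shows the general shape of that configuration with the Azure CLI; the log category name used here is an assumption for illustration, so list the categories your workspace actually exposes first.

    # Sketch: create a diagnostic setting on an Azure Monitor workspace (AMW) that sends
    # usage data to a Log Analytics workspace (LAW). The "MetricsUsageDetails" category
    # name is a placeholder assumption; discover the real categories first.
    AMW_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Monitor/accounts/<amw-name>"
    LAW_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<law-name>"

    # List the log categories supported by the workspace.
    az monitor diagnostic-settings categories list --resource "$AMW_ID" -o table

    # Create the diagnostic setting (replace the category with one returned above).
    az monitor diagnostic-settings create \
      --name "metrics-usage-insights" \
      --resource "$AMW_ID" \
      --workspace "$LAW_ID" \
      --logs '[{"category": "MetricsUsageDetails", "enabled": true}]'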
Operator/CRD support with Azure Monitor managed service for Prometheus is now Generally Available

We are excited to announce that custom resource definition (CRD) support with Azure Monitor managed service for Prometheus is now generally available. Azure Monitor managed service for Prometheus is a component of Azure Monitor Metrics that allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the Prometheus project from the Cloud Native Computing Foundation. This fully managed service enables using the Prometheus query language (PromQL) to analyze and alert on the performance of monitored infrastructure and workloads.

What's new?
With this update, customers can customize scraping targets using custom resources (Pod Monitors and Service Monitors), similar to the OSS Prometheus Operator. Enabling the Managed Prometheus add-on in an AKS cluster deploys the Pod Monitor and Service Monitor custom resource definitions, allowing you to create your own custom resources. If you are already using Prometheus Service and Pod Monitors to collect metrics from your workloads, you can simply change the apiVersion in the Service/Pod Monitor definitions to use them with Azure Managed Prometheus.

Previously, customers who did not have access to the kube-system namespace were not able to customize metrics collection. With this update, customers can create custom resources to enable custom configuration of scrape jobs in any namespace. This is especially useful in multitenancy scenarios where customers run workloads in different namespaces.

Here is how a leading public sector Banking, Financial Services and Insurance (BFSI) company in India has used Service and Pod Monitor custom resources to enable monitoring of GPU metrics with Azure Managed Prometheus, the DCGM exporter, and Azure Managed Grafana.

"Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from your metrics and logs rather than managing the underlying infrastructure. The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation."
- A leading public sector BFSI company in India

Get started today!
To use CRD support with Azure Managed Prometheus, enable the Managed Prometheus add-on on your AKS cluster. This automatically deploys the custom resource definitions (CRDs) for Service and Pod Monitors. To add Prometheus exporters that collect metrics from third-party workloads or other applications, and to see a list of workloads with curated configurations and instructions, see Integrate common workloads with Azure Managed Prometheus - Azure Monitor | Microsoft Learn. For more details, refer to this article or our documentation.

We would love to hear from you - please share your feedback and suggestions in Azure Monitor · Community.
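As a minimal sketch of the apiVersion change mentioned above, a PodMonitor for Azure Managed Prometheus can be applied with kubectl as shown below. The namespace, labels, and port name are placeholders for your own workload; only the azmonitoring.coreos.com/v1 apiVersion differs from the upstream Prometheus Operator resource.

    # Sketch: a PodMonitor using the Azure Managed Prometheus CRD. Labels, namespace,
    # and the metrics port name are placeholders.
    kubectl apply -f - <<'EOF'
    apiVersion: azmonitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: my-app-pod-monitor
      namespace: my-namespace
    spec:
      selector:
        matchLabels:
          app: my-app
      podMetricsEndpoints:
        - port: metrics
          interval: 30s
    EOF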
Azure Monitor Private Link Scope (AMPLS) Scale Limits Increased by 10x!

What is Azure Monitor Private Link Scope (AMPLS)?
Azure Monitor Private Link Scope (AMPLS) is a feature that allows you to securely connect Azure Monitor resources to your virtual network using private endpoints. This ensures that your monitoring data is accessed only through authorized private networks, preventing data exfiltration and keeping all traffic inside the Azure backbone network.

AMPLS – Scale Limits Increased by 10x in Public Cloud - Public Preview
We are excited to share that the scale limits for Azure Monitor Private Link Scope (AMPLS) have been increased tenfold (10x) in public cloud regions as part of the public preview! This substantial enhancement empowers our customers to manage their resources more efficiently and securely with private links using AMPLS, ensuring that workload logs are routed via the Microsoft backbone network.

Addressing Customer Challenges
Top Azure Strategic 500 customers, including leading telecom service providers and banking and financial services customers, have reported that the previous AMPLS limits were insufficient to meet their growing demands. The need for private links has surged to 3-5 times beyond capacity, impacting network isolation and the integration of critical workloads.

Real-World Impact
Our solution now enables customers to scale their Azure Monitor resources significantly, ensuring seamless network configurations and enhanced performance.

Scenario 1: A leading telecom service provider, known for its micro-segmentation architecture, faced challenges with large-scale monitoring and reporting due to limitations on AMPLS. With the new solution, the customer can now scale up to 3,000 Log Analytics workspaces and 10,000 Application Insights components with a single AMPLS resource, allowing them to configure over 13,000 Azure Monitor resources effortlessly.

Scenario 2: A leading banking and financial services customer faced scale challenges in delivering personalized insights due to complex workflows. By utilizing Azure Monitor with network isolation configurations, the customer can now scale their Azure Monitor resources to ensure secure telemetry flow and compliance. They have enabled thousands of Azure Monitor resources configured with AMPLS.

Key Benefits to the Customer
We believe that the solution our team has developed will significantly improve our customers' experience, allowing them to manage their resources more efficiently and effectively with private links using AMPLS.
- An AMPLS object can now connect up to 3,000 Log Analytics workspaces and 10,000 Application Insights components (a 10x increase).
  - The Log Analytics workspace limit has been increased from 300 to 3,000 (10x increase).
  - The Application Insights limit has been increased from 1,000 to 10,000 (10x increase).
- An Azure Monitor resource can now connect to up to 100 AMPLS objects (a 20x increase). This applies to:
  - Data Collection Endpoints (DCE)
  - Log Analytics workspaces (LA WS)
  - Application Insights components (AI)
- An AMPLS object can connect to at most 10 private endpoints.

Redesign of AMPLS – User experience to load 13K+ resources with Pagination

Call to Action
Explore the new capabilities of Azure Monitor Private Link Scope (AMPLS) and see how it can transform your network isolation and resource management. Visit our Azure Monitor Private Link Scope (AMPLS) documentation page for more details and start leveraging these enhancements today!
For detailed information on configuring Azure Monitor Private Link Scope and Azure Monitor resources, please refer to the following links:
- Configure Azure Monitor Private Link Scope (AMPLS)
- Configure Private Link for Azure Monitor
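As a quick illustration of working with AMPLS from the command line, the sketch below creates a private link scope and links a Log Analytics workspace to it. Resource names and IDs are placeholders, and the ingestion/query access modes can be adjusted in the portal as described in the linked documentation.

    # Sketch: create an Azure Monitor Private Link Scope and link an Azure Monitor
    # resource (here a Log Analytics workspace) to it. Names and IDs are placeholders.
    az monitor private-link-scope create \
      --name my-ampls \
      --resource-group my-rg

    az monitor private-link-scope scoped-resource create \
      --resource-group my-rg \
      --scope-name my-ampls \
      --name my-law-link \
      --linked-resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<law-name>"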
Ingestion of Managed Prometheus metrics from a private AKS cluster using private link

This article describes the end-to-end instructions for configuring Managed Prometheus for data ingestion from your private Azure Kubernetes Service (AKS) cluster to an Azure Monitor workspace.

Azure Private Link enables you to access Azure platform as a service (PaaS) resources from your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. Using private endpoints for Managed Prometheus and your Azure Monitor workspace, you can allow clients on a virtual network (VNet) to securely ingest Prometheus metrics over a private link.

Conceptual overview
A private endpoint is a special network interface for an Azure service in your virtual network (VNet). When you create a private endpoint for your Azure Monitor workspace, it provides secure connectivity between clients on your VNet and your workspace. For more details, see Private Endpoint.

Azure Private Link enables you to securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. Azure Monitor uses a single private link connection called an Azure Monitor Private Link Scope, or AMPLS, which enables each client in the virtual network to connect with all Azure Monitor resources, such as Log Analytics workspaces and Azure Monitor workspaces, instead of creating multiple private links. For more details, see Azure Monitor Private Link Scope (AMPLS).

To set up ingestion of Managed Prometheus metrics from a virtual network using private endpoints into an Azure Monitor workspace, follow these high-level steps:
1. Create an Azure Monitor Private Link Scope (AMPLS) and connect it with the Data Collection Endpoint of the Azure Monitor workspace.
2. Connect the AMPLS to a private endpoint that is set up for the virtual network of your private AKS cluster.

Prerequisites
A private AKS cluster with Managed Prometheus enabled. As part of Managed Prometheus enablement, you will also have an Azure Monitor workspace set up. For more information, see Enable Managed Prometheus in AKS.

1. Create an AMPLS for the Azure Monitor workspace
Metrics collected with Azure Managed Prometheus are ingested and stored in an Azure Monitor workspace, so you must make the workspace accessible over a private link. To do this, create an Azure Monitor Private Link Scope, or AMPLS.
- In the Azure portal, search for "Azure Monitor Private Link Scopes", and then click "Create".
- Enter the resource group and name, and select Private Only for Ingestion Access Mode.
- Click "Review + Create" to create the AMPLS.
For more details on setting up AMPLS, see Configure private link for Azure Monitor.

2. Connect the AMPLS to the Data Collection Endpoint of the Azure Monitor workspace
Private links for data ingestion with Managed Prometheus are configured on the Data Collection Endpoints (DCE) of the Azure Monitor workspace that stores the data. To identify the DCEs associated with your Azure Monitor workspace, select Data Collection Endpoints from your Azure Monitor workspace in the Azure portal.
- In the Azure portal, search for the Azure Monitor workspace that you created as part of enabling Managed Prometheus for your private AKS cluster. Note the Data Collection Endpoint name.
- Now, in the Azure portal, search for the AMPLS that you created in the previous step. Go to the AMPLS overview page, click on Azure Monitor Resources, click Add, and then connect the DCE of the Azure Monitor workspace that you noted in the previous step.
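The same connection can also be made from the command line. Below is a hedged sketch that links the workspace's Data Collection Endpoint to the AMPLS as a scoped resource; names and the DCE resource ID are placeholders.

    # Sketch: link the Azure Monitor workspace's Data Collection Endpoint (DCE) to the
    # AMPLS created above. Names and the DCE resource ID are placeholders.
    az monitor private-link-scope scoped-resource create \
      --resource-group my-rg \
      --scope-name my-ampls \
      --name amw-dce-link \
      --linked-resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionEndpoints/<dce-name>"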
2a. Configure DCEs
Note: If your AKS cluster isn't in the same region as your Azure Monitor workspace, you need to configure the Data Collection Rule for the Azure Monitor workspace. Follow the steps below only if your AKS cluster is not in the same region as your Azure Monitor workspace. If your cluster is in the same region, skip this step and move to step 3.
- Create a Data Collection Endpoint in the same region as the AKS cluster.
- Go to your Azure Monitor workspace, and click on the Data Collection Rule (DCR) on the Overview page. This DCR has the same name as your Azure Monitor workspace.
- From the DCR overview page, click on Resources -> + Add, and then select the AKS cluster.
- Once the AKS cluster is added (you might need to refresh the page), click on the AKS cluster, and then edit its Data Collection Endpoint. On the blade that opens, select the Data Collection Endpoint that you created in the first step of this section. This DCE should be in the same region as the AKS cluster.

3. Connect the AMPLS to a private endpoint of the AKS cluster
A private endpoint is a special network interface for an Azure service in your virtual network (VNet). We will now create a private endpoint in the VNet of your private AKS cluster and connect it to the AMPLS for secure ingestion of metrics.
- In the Azure portal, search for the AMPLS that you created in the previous steps. Go to the AMPLS overview page, click on Configure -> Private Endpoint connections, and then select + Private Endpoint.
- Select the resource group and enter a name for the private endpoint, then click Next.
- In the Resource section, select Microsoft.Monitor/accounts as the Resource type, the Azure Monitor workspace as the Resource, and then select prometheusMetrics. Click Next.
- In the Virtual Network section, select the virtual network of your AKS cluster. You can find this in the portal under AKS overview -> Settings -> Networking -> Virtual network integration.

4. Verify that metrics are ingested into the Azure Monitor workspace
Verify that Prometheus metrics from your private AKS cluster are ingested into the Azure Monitor workspace:
- In the Azure portal, search for the Azure Monitor workspace, and go to Monitoring -> Metrics.
- In Metrics Explorer, query for metrics and verify that the queries return data.

Next steps
Use private endpoints for Managed Prometheus and Azure Monitor workspace for details on how to configure private link to query data from your Azure Monitor workspace using workbooks.
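For reference, the private endpoint from step 3 can also be created with the Azure CLI. The sketch below mirrors the portal flow by targeting the workspace's prometheusMetrics sub-resource; names, VNet/subnet, and the workspace ID are placeholders. Note that private DNS zone configuration is typically also required for name resolution; see the linked documentation for the zones to create.

    # Sketch: create a private endpoint in the AKS cluster's VNet that connects to the
    # Azure Monitor workspace's prometheusMetrics sub-resource. Placeholders throughout.
    AMW_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Monitor/accounts/<amw-name>"

    az network private-endpoint create \
      --name my-amw-pe \
      --resource-group my-rg \
      --vnet-name my-aks-vnet \
      --subnet my-aks-subnet \
      --private-connection-resource-id "$AMW_ID" \
      --group-id prometheusMetrics \
      --connection-name my-amw-pe-connection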
Public Preview: The New AKS Monitoring Experience

We're excited to announce the public preview of our enhanced Monitoring experience for Azure Kubernetes Service (AKS). This redesign of the existing Insights experience brings comprehensive monitoring capabilities into a single, streamlined view, addressing some of the most common challenges users face when managing their AKS clusters. Our new Monitoring experience provides both basic (free) and detailed insights (with Prometheus metrics and logging enabled), offering a unified, single-pane-of-glass experience. The basic experience is available to all AKS users with no configuration required at all.

A significant benefit of this new experience is in diagnosing pod deployment failures. In the past, identifying pending or failed pods could be a cumbersome process. With the new KPI Card for Pod Status, you can now quickly pinpoint and address these issues before they escalate, ensuring smoother deployments and reduced downtime.

Another key scenario where this enhanced view shines is investigating node resource issues. Understanding node readiness and capacity is crucial for efficient cluster management. The Node Readiness Status card, along with detailed CPU and memory usage metrics, provides clear insight into whether your nodes are fully prepared to host pods. This helps prevent resource bottlenecks and optimizes the overall performance of your cluster.

Ensuring cluster health during a scaling operation has never been easier. The new Summary Card for Events helps you monitor Kubernetes warning events and pending pod states, making it simple to track and respond to spikes. This ensures your cluster scales smoothly and efficiently, without unexpected hitches that could disrupt your services.

Additionally, troubleshooting latency and connectivity issues in AKS is now more straightforward. With enhanced insights into node saturation metrics, including VMSS OS disk bandwidth and IOPS consumption, you can quickly identify and resolve issues causing latency. Detailed ETCD monitoring and load balancer metrics, such as % SNAT port usage, provide critical data to maintain optimal cluster performance, keeping your applications running smoothly.

The following comparison highlights what data comes out of the box for free for all AKS users. When you upgrade, you get the same data collected in the newer Prometheus format, as well as access to richer metrics and logs for your core troubleshooting scenarios.

Basic tier metrics:
- Alert summary card
- Events summary card
- Pod status KPI card
- Node status KPI card
- Node CPU and memory %
- VMSS OS disk bandwidth consumed % (max)
- VMSS OS disk IOPS consumed % (max)

Additional metrics in the upgraded experience:
- Historical Kubernetes events (30 days)
- Warning events by reason
- Namespace CPU and memory %
- Container logs by volume
- Top five controllers by logs volume
- Packets dropped I/O
- Load balancer SNAT port usage

We're committed to providing you with the tools you need to manage and optimize your AKS clusters effectively. Explore the new Monitoring experience in the Azure portal today and experience the future of AKS monitoring!
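To move from the basic tier to the upgraded experience, Prometheus metrics and container logs need to be collected. As a hedged sketch, the Azure CLI commands below enable both on an existing cluster; cluster names and workspace IDs are placeholders.

    # Sketch: enable Managed Prometheus metrics and the Container Insights (logging)
    # add-on on an existing AKS cluster. Names and workspace IDs are placeholders.
    az aks update \
      --name myAKSCluster \
      --resource-group myResourceGroup \
      --enable-azure-monitor-metrics

    az aks enable-addons \
      --name myAKSCluster \
      --resource-group myResourceGroup \
      --addons monitoring \
      --workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<law-name>"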
Azure Managed Grafana Brings Grafana 11 and More

We're thrilled to announce the public preview of Grafana 11 and several feature enhancements in Azure Managed Grafana based on your feedback. We continue to evolve our service to deliver what matters most to our customers.

Grafana 11
This annual major update to Grafana includes new functionality and improvements across dashboards, panels, queries, and alerts. The current preview in Managed Grafana offers Grafana v11.2. It includes the following key features:
- Explore Metrics
- Scenes-powered dashboards
- Subfolders
- Numerous improvements to canvas visualization and alerting

For more information on Grafana 11, please refer to What's new in Grafana v11.0, v11.1, and v11.2 and consider how the breaking changes may impact your specific use cases. You'll need to create a new Managed Grafana instance to use the Grafana 11 preview. Upgrading from Grafana 10 directly isn't supported yet. You can copy over dashboards from your current Managed Grafana instance by following the steps in Migrate to Azure Managed Grafana. Please note that not all Grafana 11 features are available in Managed Grafana at present; if applicable, more features will be added over time.

Azure Monitor Updates for Grafana 11

Improved Azure Monitor Logs visualizations
This update extends Azure Monitor Logs visualizations to support Basic Logs. It enables you to view Azure Monitor Logs tables configured with the lower-cost Basic Logs tier in Explore and dashboard panels. Additionally, Azure Monitor Logs details can now be viewed in Grafana Explore and Logs panels. You can filter query results by column values, run ad-hoc statistics, and choose which columns to display using simple point-and-click interaction, without needing to modify the query text. Explore views also include options to view JSON data in dynamic columns. Azure Kubernetes Service users can leverage these views in a new Container Log dashboard.

Prometheus Exemplars support for Azure Monitor Application Insights traces
You can now drill down from Prometheus exemplars to Application Insights traces in Grafana. Using exemplars in your troubleshooting workflow improves triage and analysis response times by allowing you to navigate from metrics to sample traces related to errors and exceptions and easily compare the performance of transactions. To take advantage of this capability, the application needs to be instrumented to emit Prometheus metrics with exemplars and traces to Azure Monitor Application Insights. Sign up for the Private Preview of Exemplars support in your Azure Monitor Workspace.

User-Assigned Managed Identity
Since its inception, Managed Grafana has set up a system-assigned managed identity for a new Grafana workspace by default. You can use this managed identity as the security principal to access backend data sources connected to your workspace. While it's convenient to use, a system-assigned managed identity isn't always suitable. Enterprise customers with stricter identity management policies typically create and manage all Entra ID identities themselves. Managed Grafana now allows these customers to use identities defined in their Entra ID tenants instead. With the user-assigned managed identity feature, you can select an existing Entra ID identity to be used for authentication and authorization with your data sources. Please note that you can choose only one type of managed identity for each workspace; you can't enable both system-assigned and user-assigned managed identities simultaneously.
Grafana Settings
Grafana server settings allow you to customize specific server behaviors. Managed Grafana configures and manages these settings automatically, so you don't have to deal with them. There are some settings, however, whose usage varies from user to user. Managed Grafana now gives you the option to change their default values. The currently supported ones are:
- viewers_can_edit – determines whether users with the Grafana Viewer role can edit dashboards
- external_enabled – controls the public sharing of snapshots

Grafana Migration Tool
If you have a self-hosted Grafana server on-premises or in the cloud that you'd like to migrate to Managed Grafana, you can perform this operation with one command in the Azure CLI. The new az grafana migrate command automates the process of copying your existing dashboards from any Grafana server to your Managed Grafana workspace. It supports several options that control how the content migration should be conducted, as well as a dry-run option for you to test and see the migration results before committing to the operation.

Let Us Know How We're Doing
If you're a current user of Managed Grafana, we'd love to hear from you. Please take a moment and fill out this online survey. It will help us further improve our service to better serve you. Thank you!
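As a rough sketch of the migration workflow described above, a dry run might look like the following. The source endpoint and token flag names are assumptions for illustration; run az grafana migrate --help for the actual parameters.

    # Hedged sketch of `az grafana migrate`. The --src-endpoint and --src-token-or-key
    # flag names are assumptions; only the command and its dry-run option are described
    # in the post above.
    az grafana migrate \
      --name my-managed-grafana \
      --resource-group my-rg \
      --src-endpoint "https://my-selfhosted-grafana.example.com" \
      --src-token-or-key "<source-grafana-api-key>" \
      --dry-run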
Monitoring GPU Metrics in AKS with Azure Managed Prometheus, DCGM Exporter and Managed Grafana

Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from our metrics and logs rather than managing the underlying infrastructure. The integration of essential GPU metrics, such as framebuffer memory usage, GPU utilization, Tensor Core utilization, and SM clock frequencies, into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation.

Azure Managed Prometheus recently announced general availability of Operator and CRD support, which enables customers to customize metrics collection and add scraping of metrics from workloads and applications using Service and Pod Monitors, similar to the OSS Prometheus Operator. This blog demonstrates how we leveraged the CRD/Operator support in Azure Managed Prometheus and used the Nvidia DCGM exporter and Grafana to enable GPU monitoring.

GPU monitoring
As the use of GPUs has skyrocketed for deploying large language models (LLMs) for both inference and fine-tuning, monitoring these resources becomes critical to ensure optimal performance and utilization. Prometheus, an open-source monitoring and alerting toolkit, coupled with Grafana, a powerful dashboarding and visualization tool, provides an excellent solution for collecting, visualizing, and acting on these metrics. Essential metrics such as framebuffer memory usage, GPU utilization, Tensor Core utilization, and SM clock frequencies serve as fundamental indicators of GPU consumption, offering invaluable insights into the performance and efficiency of graphics processing units, thereby enabling us to reduce our COGS and improve operations.

Using Nvidia's DCGM Exporter with Azure Managed Prometheus
The DCGM exporter is a tool developed by Nvidia to collect and export GPU metrics. It runs as a pod on Kubernetes clusters and gathers various metrics from Nvidia GPUs, such as utilization, memory usage, temperature, and power consumption. These metrics are crucial for monitoring and managing the performance of GPUs. You can integrate this exporter with Azure Managed Prometheus. The sections below describe the steps and changes needed to deploy the DCGM exporter successfully.

Prerequisites
Before we jump straight to the installation, ensure your AKS cluster meets the following requirements:
- GPU node pool: Add a node pool with the required VM SKU that includes GPU support.
- GPU driver: Ensure the NVIDIA Kubernetes device plugin driver is running as a DaemonSet on your GPU nodes.
- Enable Azure Managed Prometheus and Azure Managed Grafana on your AKS cluster.

Refactoring Nvidia DCGM Exporter for AKS: Code Changes and Deployment Guide

Updating API Versions and Configurations for Seamless Integration
As per the official documentation, the best way to get started with the DCGM exporter is to install it using Helm. When installing on AKS with Managed Prometheus, you might encounter the error below:

    Error: Installation Failed: Unable to build Kubernetes objects from release manifest: resource mapping not found for name: "dcgm-exporter-xxxxx" namespace: "default" from "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1". Ensure CRDs are installed first.
To resolve this, follow these steps to make the necessary changes in the DCGM exporter code:

1. Clone the project: Go to the GitHub repository of the DCGM exporter and clone the project or download it to your local machine.
2. Navigate to the template folder: The code used to deploy the DCGM exporter is located in the template folder within the deployment folder.
3. Modify the service-monitor.yaml file: Find the file service-monitor.yaml. The apiVersion key in this file needs to be updated from monitoring.coreos.com/v1 to azmonitoring.coreos.com/v1. This change allows the DCGM exporter to use the Azure Managed Prometheus CRD.

    apiVersion: azmonitoring.coreos.com/v1

4. Handle node selectors and tolerations: GPU node pools often have tolerations and node selector tags. Modify the values.yaml file in the deployment folder to handle these configurations:

    nodeSelector:
      accelerator: nvidia
    tolerations:
      - key: "sku"
        operator: "Equal"
        value: "gpu"
        effect: "NoSchedule"

Helm: Packaging, Pushing, and Installation on Azure Container Registry
We followed the MS Learn documentation for pushing and installing the package through Helm on Azure Container Registry. For a comprehensive understanding, you can refer to the documentation. Here are the quick steps for installation. After making all the necessary changes in the deployment folder of the source code, stay in that directory to package the code, and log in to your registry to proceed.

1. Package the Helm chart and log in to your container registry:

    helm package .
    helm registry login <container-registry-url> --username $USER_NAME --password $PASSWORD

2. Push the Helm chart to the registry:

    helm push dcgm-exporter-3.4.2.tgz oci://<container-registry-url>/helm

3. Verify that the package has been pushed to the registry in the Azure portal.

4. Install the chart and verify the installation:

    helm install dcgm-nvidia oci://<container-registry-url>/helm/dcgm-exporter -n gpu-resources

    # Check the installation on your AKS cluster by running:
    helm list -n gpu-resources

    # Verify the DCGM exporter:
    kubectl get po -n gpu-resources
    kubectl get ds -n gpu-resources

You can now check that the DCGM exporter is running on the GPU nodes as a DaemonSet.

Exporting GPU Metrics and Configuring Azure Managed Grafana Dashboard
Once the DCGM exporter DaemonSet is running across all GPU node pools, you need to export the GPU metrics generated by this workload to Azure Managed Prometheus. This is accomplished by deploying a PodMonitor resource. Follow these steps:

1. Deploy the PodMonitor: Apply the following YAML configuration to deploy the PodMonitor:

    apiVersion: azmonitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: nvidia-dcgm-exporter
      labels:
        app.kubernetes.io/name: nvidia-dcgm-exporter
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: nvidia-dcgm-exporter
      podMetricsEndpoints:
        - port: metrics
          interval: 30s
      podTargetLabels:

2. Check that the PodMonitor is deployed and running by executing:

    kubectl get podmonitor -n <namespace>

3. Verify metrics export: Ensure that the metrics are being exported to Azure Managed Prometheus by navigating to the "Metrics" page of your Azure Monitor workspace in the Azure portal.

Create the DCGM Dashboard on Azure Managed Grafana
The GitHub repository for the DCGM exporter includes a JSON file for the Grafana dashboard. Follow the MS Learn documentation to import this JSON into your Managed Grafana instance. After importing the JSON, the dashboard displaying GPU metrics will be visible in Grafana.
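Before checking the Azure Monitor workspace, you can optionally spot-check the exporter locally. The sketch below port-forwards the exporter and greps for two common DCGM metric names; the DaemonSet name and port 9400 are assumptions that depend on your Helm release name and chart values.

    # Sketch: verify the DCGM exporter is emitting GPU metrics before they reach Azure
    # Monitor. The DaemonSet name and port are assumptions based on the release above.
    kubectl port-forward -n gpu-resources ds/dcgm-nvidia-dcgm-exporter 9400:9400 &
    sleep 2
    curl -s localhost:9400/metrics | grep -E "DCGM_FI_DEV_GPU_UTIL|DCGM_FI_DEV_FB_USED" | head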