Azure Monitor managed service for Prometheus
Announcing resource-scope query for Azure Monitor Workspaces
We’re excited to announce the public preview of resource-scope query for Azure Monitor Workspaces (AMWs)—a major step forward in simplifying observability, improving access control, and aligning with Azure-native experiences. This new capability builds on the successful implementation of resource-scope query in Log Analytics Workspaces (LAWs), which transformed how users access logs by aligning them with Azure resource scopes. We’re now bringing the same power and flexibility to metrics in AMWs.

What is resource-scope query? Resource-scope query has been a frequently requested capability that allows users to query metrics scoped to a specific resource, resource group, or subscription—rather than needing to know which AMW the metrics are stored in. This means:
Simpler querying: users can scope to the context of one or more resources directly, without knowing where the metrics are stored.
Granular Azure RBAC control: if the AMW is configured in resource-centric access mode, user permissions are checked against the resources they are querying, rather than against the workspace itself - just as LAW works today. This supports security best practices for least-privileged access.

Why use resource-centric query? Traditional AMW querying required users to:
Know the exact AMW storing their metrics.
Have access to the AMW.
Navigate away from the resource context to query metrics.
This created friction for DevOps teams and on-call engineers who do not necessarily know which AMW to query when responding to an alert. With resource-centric querying:
Users can query metrics directly from the resource’s Metrics blade.
Least-privilege access is respected—users only need access to the resource(s) they are querying.
Central teams can maintain control of AMWs while empowering app teams to self-monitor.

How does it work? All metrics ingested via Azure Monitor Agent are automatically stamped with dimensions like Microsoft.resourceid, Microsoft.subscriptionid, and Microsoft.resourcegroupname to enable this experience. The addition of these dimensions has no cost implications for end users. Resource-centric queries use a new endpoint: https://query.<region>.prometheus.monitor.azure.com We will re-route queries as needed from any region, but we recommend choosing the one nearest to your AMWs for the best performance. Users can query via:
Azure Portal PromQL Editor
Grafana dashboards (with data source configuration)
Query-based metric alerts
Azure Monitor solutions like Container Insights and App Insights (when using OTel metrics with AMW as the data source)
Prometheus HTTP APIs
When querying programmatically, users pass an HTTP header: x-ms-azure-scoping: <ARM Resource ID> Scoping supports a single individual resource, resource group, or subscription. At this time, only a single scope value is supported, but comma-separated multi-resource scoping will be added by the end of 2025.
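For teams calling the Prometheus HTTP APIs directly, here is a minimal sketch of a resource-scoped query in Python. The regional endpoint format and the x-ms-azure-scoping header come from the description above; the token audience, the placeholder resource ID, and the choice of azure-identity and requests are assumptions to adapt to your environment.

```python
# Minimal sketch: resource-scoped PromQL query against the preview endpoint.
# Assumptions: the azure-identity and requests packages are installed, and the
# token audience below is accepted by the query endpoint.
import requests
from azure.identity import DefaultAzureCredential

region = "eastus"  # choose the region nearest to your AMWs

# Hypothetical ARM resource ID used as the query scope (a single resource here;
# a resource group or subscription ID works the same way).
scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
)

credential = DefaultAzureCredential()
token = credential.get_token("https://prometheus.monitor.azure.com/.default")

response = requests.get(
    f"https://query.{region}.prometheus.monitor.azure.com/api/v1/query",
    params={"query": "up"},  # any PromQL expression
    headers={
        "Authorization": f"Bearer {token.token}",
        "x-ms-azure-scoping": scope,  # scoping header described in this post
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["result"])
```

The same pattern applies when scoping to a resource group or subscription: pass the corresponding ARM ID in the header.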
Who Can Benefit?
Application Teams: Query metrics for their own resources without needing AMW access.
Central Monitoring Teams: Maintain control of AMWs while enabling secure, scoped access for app teams.
DevOps Engineers: Respond to alerts and troubleshoot specific resources without needing to locate the AMW(s) storing the metrics they need.
Grafana Users: Configure dashboards scoped to subscriptions or resource groups with dynamic variables without needing to identify the AMW(s) storing their metrics.

When Is This Available? Stamping of the Microsoft.* dimensions is already complete and ongoing for all AMWs. Public Preview of the resource-centric query endpoint begins October 10th, 2025. Starting on that date, all newly created AMWs will default to resource-context access mode.

What is the AMW “access control mode”? The access control mode is a setting on each workspace that defines how permissions are determined for the workspace.
Require workspace permissions. This control mode does NOT allow granular resource-level Azure RBAC. To access the workspace, the user must be granted permissions to the workspace. When a user scopes their query to a workspace, workspace permissions apply. When a user scopes their query to a resource, both workspace permissions AND resource permissions are verified. This setting is the default for all workspaces created before October 2025.
Use resource or workspace permissions. This control mode allows granular Azure RBAC. Users can be granted access to only the data associated with resources they can view by assigning Azure read permission. When a user scopes their query to a workspace, workspace permissions apply. When a user scopes their query to a resource, only resource permissions are verified, and workspace permissions are ignored. This setting is the default for all workspaces created after October 2025.
Read about how to change the control mode for your workspaces here.

Final Thoughts Resource-centric query brings AMWs in line with Azure-native experiences, enabling secure, scalable, and intuitive observability. Whether you’re managing thousands of VMs, deploying AKS clusters, or building custom apps with OpenTelemetry, this feature empowers you to monitor in the context of your workloads or resources rather than needing to first query the AMW(s) and then filter down to what you’re looking for. To get started, simply navigate to your resource’s Metrics blade after October 10th, 2025, or configure your Grafana data source to use the new query endpoint.

Azure Monitor managed service for Prometheus now includes native Grafana dashboards
We are excited to announce that Azure Monitor managed service for Prometheus now includes native Grafana dashboards within the Azure portal at no additional cost. This integration marks a major milestone in our mission to simplify observability, reducing administrative overhead and complexity compared with deploying and maintaining your own Grafana instances. The use of open-source observability tools continues to grow for cloud-native scenarios such as application and infrastructure monitoring using Prometheus metrics and OpenTelemetry logs and traces. For these scenarios, DevOps and SRE teams need streamlined and cost-effective access to industry-standard tooling like Prometheus metrics and Grafana dashboards within their cloud-hosted environments. For many teams, this usually means deploying and managing separate monitoring stacks with self-hosted or partner-managed versions of Prometheus and Grafana. However, Azure Monitor's latest integrations with Grafana provide this capability out of the box, enabling you to view Prometheus metrics and other Azure observability data in Grafana dashboards fully integrated into the Azure portal. Azure Monitor dashboards with Grafana deliver powerful visualization and data transformation capabilities on Prometheus metrics, Azure resource metrics, logs, and traces stored in Azure Monitor. Pre-built dashboards are included for several key scenarios like Azure Kubernetes Service, Azure Container Apps, Container Insights, and Application Insights.

Why Grafana in the Azure portal? Grafana dashboards are a widely adopted visualization tool used with Prometheus metrics and cloud-native observability tools. Embedding them natively in the Azure portal offers:
Unified Azure experience: No additional RBAC or network configuration is required; users' Azure login credentials and Azure RBAC are used to access dashboards and data. View Grafana dashboards alongside all your other Azure resources and Azure Monitor views in the same portal.
No management overhead or compute costs: Dashboards with Grafana use a fully SaaS model built into Azure Monitor, where you do not have to administer the Grafana server or the compute on which it runs.
Access to community dashboards: Open-source and Grafana community dashboards using Prometheus or Azure Monitor data sources can be imported with no modifications.
These capabilities mean faster troubleshooting, deeper insights, and a more consistent observability platform for Azure-centric workloads.

Figure 1: Dashboards with Grafana landing page in the context of Azure Monitor Workspace in the Azure portal

Getting Started To get started, enable Managed Prometheus for your AKS cluster and then navigate to the Azure Monitor workspace or AKS cluster in the Azure portal and select Monitoring > Dashboards with Grafana (preview). From this page you can view, edit, create, and import Grafana dashboards. Simply click on one of the pre-built dashboards to get started. You can use these dashboards as provided, or edit them by adding panels, updating visualizations, and creating variables to build your own custom dashboards. With this approach, no Grafana servers or additional Azure resources need to be provisioned or maintained. Teams can quickly leverage and customize Grafana dashboards within the Azure portal, reducing deployment and management time while still gaining the benefits of dashboards and visualizations to improve monitoring and troubleshooting times.
Figure 2: Kubernetes Compute Resources dashboard being viewed in the context of Azure Monitor Workspace in the Azure portal

When to upgrade to Azure Managed Grafana? Dashboards with Grafana in the Azure portal cover the most common Prometheus scenarios, but Azure Managed Grafana remains the right choice for several advanced use cases, including:
Extended data source support for non-Azure data sources, e.g., open-source and third-party data stores
Private networking and advanced authentication options
Multi-cloud, hybrid, and on-premises data source connectivity.
See When to use Azure Managed Grafana for more details. Get started with Azure Monitor dashboards with Grafana today.

General Availability of Azure Monitor Network Security Perimeter Features
We’re excited to announce that Azure Monitor Network Security Perimeter features are now generally available! This update is an important step forward for Azure Monitor’s security, providing comprehensive network isolation for your monitoring data. In this post, we’ll explain what Network Security Perimeter is, why it matters, and how it benefits Azure Monitor users. Network Security Perimeter is purpose-built to strengthen network security and monitoring, enabling customers to establish a more secure and isolated environment. As enterprise interest grows, it’s clear that this feature will play a key role in elevating the protection of Azure PaaS resources against evolving security threats. What is Network Security Perimeter and Why Does It Matter? Network Security Perimeter is a network isolation feature for Azure PaaS services that creates a trusted boundary around your resources. Azure Monitor’s key components (like Log Analytics workspaces and Application Insights) run outside of customer virtual networks; Network security perimeter allows these services to communicate only within an explicit perimeter and blocks any unauthorized public access. In essence, the security perimeter acts as a virtual firewall at the Azure service level – by default it restricts public network access to resources inside the perimeter, and only permits traffic that meets your defined rules. This prevents unwanted network connections and helps prevent data exfiltration (sensitive monitoring data stays within your control). For Azure Monitor customers, Network Security Perimeter is a game-changer. It addresses a common ask from enterprises for “zero trust” network security on Azure’s monitoring platform. Previously, while you could use Private Link to secure traffic from your VNets to Azure Monitor, Azure Monitor’s own service endpoints were still accessible over the public internet. The security perimeter closes that gap by enforcing network controls on Azure’s side. This means you can lock down your Log Analytics workspace or Application Insights to only accept data from specific sources (e.g. certain IP ranges, or other resources in your perimeter) and only send data out to authorized destinations. If anything or anyone outside those rules attempts to access your monitoring resources, Network Security Perimeter will deny it and log the attempt for auditing. In short, Network Security Perimeter brings a new level of security to Azure Monitor: it allows organizations to create a logical network boundary around their monitoring resources, much like a private enclave. This is crucial for customers in regulated industries (finance, government, healthcare) who need to ensure their cloud services adhere to strict network isolation policies. By using the security perimeter, Azure Monitor can be safely deployed in environments that demand no public exposure and thorough auditing of network access. It’s an important step in strengthening Azure Monitor’s security posture and aligning with enterprise zero-trust networking principles. Key Benefits of Network Security Perimeter in Azure Monitor With Network Security Perimeter now generally available, Azure Monitor users gain several powerful capabilities: 🔒 Enhanced Security & Data Protection: Azure PaaS resources in a perimeter can communicate freely with each other, but external access is blocked by default. 
You define explicit inbound/outbound rules for any allowed public traffic, ensuring no unauthorized network access to your Log Analytics workspaces, Application Insights components, or other perimeter resources. This greatly reduces the risk of data exfiltration and unauthorized access to monitoring data. ⚖️ Granular Access Control: Network Security Perimeter supports fine-grained rules to tailor access. You can allow inbound access by specific IP address ranges or Azure subscription IDs, and allow outbound calls to specific Fully Qualified Domain Names (FQDNs). For example, you might permit only your corporate IP range to send telemetry to a workspace, or allow a workspace to send data out only to contoso-api.azurewebsites.net. This level of control ensures that only trusted sources and destinations are used. 📜 Comprehensive Logging & Auditing: Every allowed or denied connection governed by Network Security Perimeter can be logged. Azure Monitor’s Network Security Perimeter integration provides unified access logs for all resources in the perimeter. These logs give you visibility into exactly what connections were attempted, from where, and whether they were permitted or blocked. This is invaluable for auditing and compliance – for instance, proving that no external IPs accessed your workspace, or detecting unexpected outbound calls. The logs can be sent to a Log Analytics workspace or storage for retention and analysis. 🔧 Seamless Integration with Azure Monitor Services: Network Security Perimeter is natively integrated across Azure Monitor’s services and workflows. Log Analytics workspaces and Application Insights components support Network Security Perimeter out-of-the-box, meaning ingestion, queries, and alerts all enforce perimeter rules behind the scenes. Azure Monitor Alerts (scheduled query rules) and Action Groups also work with Network Security Perimeter , so that alert notifications or automation actions respect the perimeter (for example, an alert sending to an Event Hub will check Network Security Perimeter rules). This end-to-end integration ensures that securing your monitoring environment with Network Security Perimeter doesn’t break any functionality – everything continues to work, but within your defined security boundary. 🤝 Consistent, Centralized Management: Network Security Perimeter introduces a uniform way to manage network access for multiple resources. You can group resources from different services (and even different subscriptions) into one perimeter and manage network rules in one place. This “single pane of glass” approach simplifies operations: network admins can define a perimeter once and apply it to all relevant Azure Monitor components (and other supported services). It’s a more scalable and consistent method than maintaining disparate firewall settings on each service. Network Security Perimeter uses Azure’s standard API and portal experience, so setting up a perimeter and rules is straightforward. 🌐 No-Compromise Isolation (with Private Link): Network Security Perimeter complements existing network security options. If you’re already using Azure Private Link to keep traffic off the internet, Network Security Perimeter adds another layer of protection. Private Link secures traffic between your VNet and Azure Monitor; Network Security Perimeter secures Azure Monitor’s service endpoints themselves. 
Used together, you achieve defense-in-depth: e.g., a workspace can be accessible only via private endpoint and only accept data from certain sources due to Network Security Perimeter rules. This layered approach helps meet even the most stringent security requirements. In conclusion, Network Security Perimeter for Azure Monitor provides strong network isolation, flexible control, and visibility – all integrated into the Azure platform. It helps organizations confidently use Azure Monitor in scenarios where they need to lock down network access and simplify compliance. For detailed information on configuring Azure Monitor with a Network Security Perimeter, please refer to the following link: Configure Azure Monitor with Network Security Perimeter.
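To illustrate the auditing capability described above, the following sketch reads perimeter access logs that have been routed to a Log Analytics workspace using the azure-monitor-query SDK. The workspace ID and the table name in the KQL are assumptions; check your diagnostic settings and workspace schema for the actual destination and table.

```python
# Sketch: review Network Security Perimeter access logs sent to a Log Analytics
# workspace. Assumptions: azure-monitor-query and azure-identity are installed,
# perimeter logs are routed to this workspace, and the table name below matches
# the one in your environment.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-guid>"  # placeholder

# Hypothetical table name for perimeter access logs; verify it in your workspace.
kql = "NSPAccessLogs | take 20"

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(workspace_id, kql, timespan=timedelta(days=1))

for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

A query like this can be scheduled or wired into a workbook to keep an eye on denied connection attempts over time.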
Enhance your Azure visualizations using Azure Monitor dashboards with Grafana
In line with our commitment to open-source solutions, we are announcing the public preview of Azure Monitor dashboards with Grafana. This service offers a powerful solution for cloud-native monitoring and visualizing Prometheus metrics. Dashboards with Grafana enable you to create and edit Grafana dashboards directly in the Azure portal at no additional cost and with less administrative overhead than self-hosting Grafana or using managed Grafana services.

Start quickly with pre-built and community dashboards Pre-built Grafana dashboards for Azure Kubernetes Service, Azure Monitor, and dozens of other Azure resources are included and enabled by default. Additionally, you can import dashboards from thousands of publicly available Grafana community and open-source dashboards for Prometheus and Azure Monitor data sources. Built-in Grafana controls and capabilities allow you to apply a wide range of visualization panels and client-side transformations to Azure monitoring data to create custom dashboards.

Flexibility with open-source dashboards Azure Monitor dashboards with Grafana are fully compatible with open-source Grafana dashboards and are portable across Grafana instances regardless of where they are hosted. Teams that create and author Grafana dashboards can share and re-use dashboards across Azure Monitor dashboards with Grafana, Azure Managed Grafana, self-hosted Grafana, and more.

Manage dashboards with Azure Role-Based Access Control (RBAC) Dashboards with Grafana are native Azure resources that support Azure RBAC for assigning permissions and automation via ARM and Bicep templates. The ability to view dashboards and data depends solely on users' permissions on both the dashboard and the underlying data source being viewed.

Supported Scenarios Dashboards with Grafana include Azure data sources only, starting with Azure Monitor metrics, logs, traces, alerts, Azure Resource Graph, and Azure Monitor managed service for Prometheus metrics. Support for Grafana Explore and Exemplars will be available in future releases.

When to use Azure Managed Grafana? If you store your telemetry data in Azure, Dashboards with Grafana in the Azure portal is a great way to get started with Grafana. If you have additional data sources, or need full enterprise capabilities in Grafana, you can choose to upgrade to Azure Managed Grafana, a fully managed hosted service for the Grafana Enterprise software. See a detailed solution comparison of Dashboards with Grafana and Azure Managed Grafana here. Get started with Azure Monitor dashboards with Grafana today.

GA: Managed Prometheus visualizations in Azure Monitor for AKS — unified insights at your fingertips
We’re thrilled to announce the general availability (GA) of Managed Prometheus visualizations in Azure Monitor for AKS, along with an enhanced, unified AKS Monitoring experience. Troubleshooting Kubernetes clusters is often time-consuming and complex, whether you're diagnosing failures, scaling issues, or performance bottlenecks. This redesign of the existing Insights experience brings all your key monitoring data into a single, streamlined view, reducing the time and effort it takes to diagnose, triage, and resolve problems so you can keep your applications running smoothly with less manual work. By using Managed Prometheus, customers can also realize up to 80% savings on metrics costs and benefit from up to 90% faster blade load performance, delivering both a powerful and cost-efficient way to monitor and optimize your AKS environment.

What’s New in GA Since the preview release, we’ve added several capabilities:
Control plane metrics: Gain visibility into critical components like the API server and ETCD database, essential for diagnosing cluster-level performance bottlenecks.
Load balancer chart deep links: Jump directly into the networking drilldown view to troubleshoot failed connections and SNAT port issues more efficiently.
Improved at-scale cluster view: Get a faster, more comprehensive overview across all your AKS clusters, making multi-cluster monitoring easier.

Simplified Troubleshooting, End to End The enhanced AKS Monitoring experience provides both a basic (free) tier and an upgraded experience with Prometheus metrics and logging — all within a unified, single-pane-of-glass dashboard. Here’s how it helps you troubleshoot faster:
Identify failing components immediately: With new KPI Cards for Pod and Node Status, you can quickly spot pending or failed pods, high CPU/memory usage, or saturation issues, decreasing diagnosis time.
Monitor and manage cluster scaling smoothly: The Events Summary Card surfaces Kubernetes warnings and pending pod states, helping you respond to scale-related disruptions before they impact production.
Pinpoint root causes of latency and connectivity problems: Detailed node saturation metrics, plus control plane and load balancer insights, make it easier to isolate where slowdowns or failures are occurring — whether at the node, cluster, or network layer.

Free vs. Upgraded Metrics Overview Here’s a quick comparison of what’s included by default versus what you get with the enhanced experience:
Basic tier metrics: Alert summary card; Events summary card; Pod status KPI card; Node status KPI card; Node CPU and memory %; VMSS OS disk bandwidth consumed % (max); VMSS OS disk IOPS consumed % (max).
Additional metrics in the upgraded experience: Historical Kubernetes events (30 days); Warning events by reason; Namespace CPU and memory %; Container logs by volume; Top five controllers by logs volume; Packets dropped I/O; Load balancer SNAT port usage; API server CPU % (max) (preview); API server memory % (max) (preview); ETCD database usage % (max) (preview).

See What Customers Are Saying Early adopters have already seen meaningful improvements: "Azure Monitor managed Prometheus visualizations for Container Insights has been a game-changer for our team. Offloading the burden of self-hosting and maintaining our own Prometheus infrastructure has significantly reduced our operational overhead. With the managed add-on, we get the powerful insights and metrics we need without worrying about scalability, upgrades, or reliability.
It seamlessly integrates into our existing Azure environment, giving us out-of-the-box visibility into our container workloads. This solution allows our engineers to focus more on building and delivering features, rather than managing monitoring infrastructure." – S500 customer in health care industry

Get Started Today We’re committed to helping you optimize and manage your AKS clusters with confidence. Visit the Azure portal and explore the new AKS Monitoring experience today! Learn more: https://aka.ms/azmon-prometheus-visualizations

Public Preview: Metrics usage insights for Azure Monitor Workspace
As organizations expand their services and applications, reliability and high availability are a top priority to ensure they provide a high level of quality to their customers. As the complexity of these services and applications grows, organizations continue to collect more telemetry to ensure higher observability. However, many are facing a common challenge: increasing costs driven by the ever-growing volume of telemetry data. Over time, as products grow and evolve, not all telemetry remains valuable. In fact, over-instrumentation can create unnecessary noise, generating data that contributes to higher costs without delivering actionable insights. In a time where every team is being asked to do more with less, identifying which telemetry streams truly matter has become essential. To address this need, we are announcing the Public Preview of ‘metrics usage insights’, a feature designed for Azure Managed Prometheus users that analyzes all metrics ingested into an Azure Monitor Workspace (AMW), surfacing actionable insights to optimize your observability setup. Metrics usage insights is built to empower teams with the visibility and tools organizations need to manage observability costs effectively. It empowers customers to pinpoint metrics that align with their business objectives, uncover areas of unnecessary spend by identifying unused metrics, and sustain a streamlined, cost-effective monitoring approach. Metrics usage insights sends usage data to a Log Analytics Workspace (LAW) for analysis. This is a free offering; there is no charge associated with the data sent to the Log Analytics workspace, its storage, or queries. Customers will be guided to enable the feature as part of the standard out-of-the-box experience during new AMW resource creation. For existing AMWs this can be configured using diagnostic settings.

Key Features
1. Understanding Limits and Quotas for Effective Resource Management Monitoring limits and quotas is crucial for system performance and resource optimization. Tracking usage aids in efficient scaling and cost avoidance. Metrics usage insights provides tools to monitor thresholds, resolve throttling, and ensure cost-effective operations without the need to create support incidents.
2. Workspace Exploration This experience lets customers explore their AMW data and gain insights. It provides a detailed analysis of data points and samples ingested for billing, at both the metric and workspace levels. Customers can evaluate individual metrics by examining their quantity, ingestion volume, and financial impact.
3. Identifying and Removing Unused Metrics The metrics usage insights feature helps identify underutilized metrics that are being ingested but not used by dashboards, monitors, or API calls. Users facing high storage and ingestion costs can use this feature to delete unused metrics, optimize high-cost metrics, and reclaim capacity.

Enable metrics usage insights To enable metrics usage insights, you create a diagnostic setting, which instructs the AMW to send the data supporting the insights queries and workbooks to a Log Analytics Workspace (LAW). You'll be prompted to enable it automatically when you create a new Azure Monitor workspace. You can enable it later for an existing Azure Monitor workspace. Read More

Operator/CRD support with Azure Monitor managed service for Prometheus is now Generally Available
We are excited to announce that custom resource definition (CRD) support with Azure Monitor managed service for Prometheus is now generally available. Azure Monitor managed service for Prometheus is a component of Azure Monitor Metrics that allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the Prometheus project from the Cloud Native Computing Foundation. This fully managed service enables using the Prometheus query language (PromQL) to analyze and alert on the performance of monitored infrastructure and workloads.

What's new? With this update, customers can customize scraping targets using Custom Resources (Pod Monitors and Service Monitors), similar to the OSS Prometheus Operator. Enabling the Managed Prometheus add-on on an AKS cluster deploys the Pod and Service Monitor custom resource definitions so you can create your own custom resources. If you are already using Prometheus Service and Pod Monitors to collect metrics from your workloads, you can simply change the apiVersion in the Service/Pod Monitor definitions to use them with Azure Managed Prometheus (see the sketch at the end of this post). Earlier, customers who did not have access to the kube-system namespace were not able to customize metrics collection. With this update, customers can create custom resources to enable custom configuration of scrape jobs in any namespace. This is especially useful in multitenancy scenarios where customers run workloads in different namespaces. Here is how a leading Public Sector Banking and Financial Services and Insurance (BFSI) company in India has used Service and Pod Monitor custom resources to enable monitoring of GPU metrics with Azure Managed Prometheus, DCGM Exporter, and Azure Managed Grafana. “Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from your metrics and logs rather than managing the underlying infrastructure. The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation.” – A leading public sector BFSI company in India

Get started today! To use CRD support with Azure Managed Prometheus, enable the Managed Prometheus add-on on your AKS cluster. This will automatically deploy the custom resource definitions (CRDs) for Service and Pod Monitors. To add Prometheus exporters to collect metrics from third-party workloads or other applications, and to see a list of workloads that have curated configurations and instructions, see Integrate common workloads with Azure Managed Prometheus - Azure Monitor | Microsoft Learn. For more details, refer to this article or our documentation. We would love to hear from you – please share your feedback and suggestions in Azure Monitor · Community.
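As a concrete illustration of the apiVersion change mentioned above, here is a minimal ServiceMonitor sketch for Azure Managed Prometheus. The azmonitoring.coreos.com/v1 apiVersion is the managed counterpart of the OSS monitoring.coreos.com/v1 group; confirm it against the CRDs deployed in your cluster. The namespace, selector labels, port name, and interval are placeholder assumptions.

```yaml
# Minimal ServiceMonitor sketch for Azure Managed Prometheus.
# Assumptions: the Managed Prometheus add-on has deployed the CRDs, and a
# Service labeled app: my-app exposes a metrics port named "metrics".
apiVersion: azmonitoring.coreos.com/v1   # was monitoring.coreos.com/v1 with the OSS operator
kind: ServiceMonitor
metadata:
  name: my-app-service-monitor
  namespace: my-app-namespace            # any namespace; kube-system access is not required
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
```

A PodMonitor follows the same pattern, with kind: PodMonitor and podMetricsEndpoints in place of endpoints.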
Azure Monitor Private Link Scope (AMPLS) Scale Limits Increased by 10x!
What is Azure Monitor Private Link Scope (AMPLS)? Azure Monitor Private Link Scope (AMPLS) is a feature that allows you to securely connect Azure Monitor resources to your virtual network using private endpoints. This ensures that your monitoring data is accessed only through authorized private networks, preventing data exfiltration and keeping all traffic inside the Azure backbone network.

AMPLS – Scale Limits Increased by 10x in Public Cloud - Public Preview In a groundbreaking development, we are excited to share that the scale limits for Azure Monitor Private Link Scope (AMPLS) have been increased tenfold (10x) in Public Cloud regions as part of the Public Preview! This substantial enhancement empowers our customers to manage their resources more efficiently and securely with private links using AMPLS, ensuring that workload logs are routed via the Microsoft backbone network.

Addressing Customer Challenges Top Azure Strategic 500 customers, including leading telecom service providers and banking and financial services customers, have reported that the previous limits of AMPLS were insufficient to meet their growing demands. The need for private links has surged 3-5 times beyond capacity, impacting network isolation and integration of critical workloads.

Real-World Impact Our solution now enables customers to scale their Azure Monitor resources significantly, ensuring seamless network configurations and enhanced performance.
Scenario 1: A leading telecom service provider, known for its micro-segmentation architecture, has faced challenges with large-scale monitoring and reporting due to limitations on AMPLS. With the new solution, the customer can now scale up to 3,000 Log Analytics and 10,000 Application Insights workspaces with a single AMPLS resource, allowing them to configure over 13,000 Azure Monitor resources effortlessly.
Scenario 2: A leading banking and financial services customer has faced scale challenges in delivering personalized insights due to complex workflows. By utilizing Azure Monitor with network isolation configurations, the customer can now scale their Azure Monitor resources to ensure secure telemetry flow and compliance. They have enabled thousands of Azure Monitor resources configured with AMPLS.

Key Benefits to the Customer We believe that the solution our team has developed will significantly improve our customers’ experience, allowing them to manage their resources more efficiently and effectively with private links using AMPLS.
An AMPLS object can now connect up to 3,000 Log Analytics workspaces and 10,000 Application Insights components (10x increase). The Log Analytics workspace limit has been increased from 300 to 3,000 (10x increase). The Application Insights limit has increased from 1,000 to 10,000 (10x increase).
An Azure Monitor resource can now connect to up to 100 AMPLSs (20x increase). This applies to Data Collection Endpoints (DCE), Log Analytics Workspaces (LAW), and Application Insights components (AI).
An AMPLS object can connect to 10 private endpoints at most.
Redesign of AMPLS – User experience to load 13K+ resources with Pagination

Call to Action Explore the new capabilities of Azure Monitor Private Link Scope (AMPLS) and see how it can transform your network isolation and resource management. Visit our Azure Monitor Private Link Scope (AMPLS) documentation page for more details and start leveraging these enhancements today!
For detailed information on configuring Azure Monitor Private Link Scope and Azure Monitor resources, please refer to the following links: Configure Azure Monitor Private Link Scope (AMPLS) and Configure Private Link for Azure Monitor.

Ingestion of Managed Prometheus metrics from a private AKS cluster using private link
This article provides end-to-end instructions for configuring Managed Prometheus for data ingestion from your private Azure Kubernetes Service (AKS) cluster to an Azure Monitor Workspace. Azure Private Link enables you to connect Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. Using private endpoints for Managed Prometheus and your Azure Monitor workspace, you can allow clients on a virtual network (VNet) to securely ingest Prometheus metrics over a Private Link.

Conceptual overview A private endpoint is a special network interface for an Azure service in your Virtual Network (VNet). When you create a private endpoint for your Azure Monitor workspace, it provides secure connectivity between clients on your VNet and your workspace. For more details, see Private Endpoint. Azure Private Link enables you to securely link an Azure platform as a service (PaaS) resource to your virtual network by using private endpoints. Azure Monitor uses a single private link connection called an Azure Monitor Private Link Scope, or AMPLS, which enables each client in the virtual network to connect with all Azure Monitor resources such as Log Analytics workspaces and Azure Monitor workspaces (instead of creating multiple private links). For more details, see Azure Monitor Private Link Scope (AMPLS). To set up ingestion of Managed Prometheus metrics from a virtual network into an Azure Monitor Workspace using private endpoints, follow these high-level steps: Create an Azure Monitor Private Link Scope (AMPLS) and connect it with the Data Collection Endpoint of the Azure Monitor Workspace. Connect the AMPLS to a private endpoint that is set up for the virtual network of your private AKS cluster.

Prerequisites A private AKS cluster with Managed Prometheus enabled. As part of Managed Prometheus enablement, you will also have an Azure Monitor Workspace set up. For more information, see Enable Managed Prometheus in AKS.

1. Create an AMPLS for the Azure Monitor Workspace Metrics collected with Azure Managed Prometheus are ingested into and stored in an Azure Monitor workspace, so you must make the workspace accessible over a private link. For this, create an Azure Monitor Private Link Scope or AMPLS. In the Azure portal, search for "Azure Monitor Private Link Scopes", and then click "Create". Enter the resource group and name, and select Private Only for Ingestion Access Mode. Click "Review + Create" to create the AMPLS. For more details on setting up AMPLS, see Configure private link for Azure Monitor.

2. Connect the AMPLS to the Data Collection Endpoint of the Azure Monitor Workspace Private links for data ingestion for Managed Prometheus are configured on the Data Collection Endpoints (DCE) of the Azure Monitor workspace that stores the data. To identify the DCEs associated with your Azure Monitor workspace, select Data Collection Endpoints from your Azure Monitor workspace in the Azure portal. In the Azure portal, search for the Azure Monitor Workspace that you created as part of enabling Managed Prometheus for your private AKS cluster. Note the Data Collection Endpoint name. Now, in the Azure portal, search for the AMPLS that you created in the previous step. Go to the AMPLS overview page, click on Azure Monitor Resources, click Add, and then connect the DCE of the Azure Monitor Workspace that you noted in the previous step.
2a. Configure DCEs Note: If your AKS cluster isn't in the same region as your Azure Monitor Workspace, you need to configure the Data Collection Rule for the Azure Monitor Workspace. Follow the steps below only if your AKS cluster is not in the same region as your Azure Monitor Workspace. If your cluster is in the same region, skip this step and move to step 3. Create a Data Collection Endpoint in the same region as the AKS cluster. Go to your Azure Monitor Workspace and click on the Data collection rule (DCR) on the Overview page. This DCR has the same name as your Azure Monitor Workspace. From the DCR overview page, click on Resources -> + Add, and then select the AKS cluster. Once the AKS cluster is added (you might need to refresh the page), click on the AKS cluster and then choose to edit its Data Collection Endpoint. On the blade that opens, select the Data Collection Endpoint that you created in step 1 of this section. This DCE should be in the same region as the AKS cluster.

3. Connect the AMPLS to the private endpoint of the AKS cluster A private endpoint is a special network interface for an Azure service in your Virtual Network (VNet). We will now create a private endpoint in the VNet of your private AKS cluster and connect it to the AMPLS for secure ingestion of metrics. In the Azure portal, search for the AMPLS that you created in the previous steps. Go to the AMPLS overview page, click on Configure -> Private Endpoint connections, and then select + Private Endpoint. Select the resource group and enter a name for the private endpoint, then click Next. In the Resource section, select Microsoft.Monitor/accounts as the Resource type, the Azure Monitor Workspace as the Resource, and then select prometheusMetrics. Click Next. In the Virtual Network section, select the virtual network of your AKS cluster. You can find this in the portal under AKS overview -> Settings -> Networking -> Virtual network integration.

4. Verify that metrics are ingested into the Azure Monitor Workspace Verify that Prometheus metrics from your private AKS cluster are ingested into the Azure Monitor Workspace: In the Azure portal, search for the Azure Monitor Workspace and go to Monitoring -> Metrics. In the Metrics Explorer, run a query and verify that it returns data. (A programmatic alternative is sketched below.)

Next steps Use private endpoints for Managed Prometheus and Azure Monitor workspace for details on how to configure private link to query data from your Azure Monitor workspace using workbooks.
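If you prefer to verify ingestion programmatically rather than through the portal, here is a short sketch that runs a PromQL query against the workspace's query endpoint. The endpoint value (shown on the Azure Monitor workspace Overview page) and the token audience are assumptions to replace with your own; a non-empty result for up indicates that targets are being scraped and ingested.

```python
# Sketch: verify that Prometheus metrics are arriving in the Azure Monitor workspace.
# Assumptions: azure-identity and requests are installed; replace the query
# endpoint with the one shown on your workspace's Overview page.
import requests
from azure.identity import DefaultAzureCredential

query_endpoint = "https://<your-workspace>.<region>.prometheus.monitor.azure.com"  # placeholder

credential = DefaultAzureCredential()
token = credential.get_token("https://prometheus.monitor.azure.com/.default")

response = requests.get(
    f"{query_endpoint}/api/v1/query",
    params={"query": "up"},
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
response.raise_for_status()
series = response.json()["data"]["result"]
print(f"{len(series)} series returned for 'up'")
```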
Public Preview: The New AKS Monitoring Experience
We're excited to announce the public preview of our enhanced Monitoring experience for Azure Kubernetes Service (AKS). This redesign of the existing Insights experience brings comprehensive monitoring capabilities into a single, streamlined view, addressing some of the most common challenges users face when managing their AKS clusters. Our new Monitoring experience provides both basic (free) and detailed insights (with Prometheus metrics and logging enabled), offering a unified, single-pane-of-glass experience. The basic experience is available to all AKS users with no configuration required at all. A significant benefit of this new experience is in diagnosing pod deployment failures. In the past, identifying pending or failed pods could be a cumbersome process. With the new KPI Card for Pod Status, you can now quickly pinpoint and address these issues before they escalate, ensuring smoother deployments and reduced downtime. Another key scenario where this enhanced view shines is investigating node resource issues. Understanding node readiness and capacity is crucial for efficient cluster management. The Node Readiness Status card, along with detailed CPU and memory usage metrics, provides clear insights into whether your nodes are fully prepared to host pods. This helps prevent resource bottlenecks and optimizes the overall performance of your cluster. Ensuring cluster health during a scaling operation has never been easier. The new Summary Card for Events helps you monitor Kubernetes warning events and pending pod states, making it simple to track and respond to spikes. This ensures your cluster scales smoothly and efficiently, without unexpected hitches that could disrupt your services. Additionally, troubleshooting latency and connectivity issues in AKS is now more straightforward. With enhanced insights into node saturation metrics, including VMSS OS Disk Bandwidth and IOPS consumption, you can quickly identify and resolve issues causing latency. Detailed ETCD monitoring and Load Balancer metrics, such as % SNAT Port Usage, provide critical data to maintain optimal cluster performance, keeping your applications running smoothly.

The following comparison highlights what data comes out of the box for free for ALL AKS users. When you upgrade, you get all the same data collected in the newer Prometheus format, as well as access to richer metrics and logs for your core troubleshooting scenarios.
Basic tier metrics: Alert summary card; Events summary card; Pod status KPI card; Node status KPI card; Node CPU and memory %; VMSS OS disk bandwidth consumed % (max); VMSS OS disk IOPS consumed % (max).
Additional metrics in the upgraded experience: Historical Kubernetes events (30 days); Warning events by reason; Namespace CPU and memory %; Container logs by volume; Top five controllers by logs volume; Packets dropped I/O; Load balancer SNAT port usage.

We're committed to providing you with the tools you need to manage and optimize your AKS clusters effectively. Explore the new Monitoring experience in the Azure portal today and experience the future of AKS monitoring!