Public Preview: Azure Monitor pipeline transformations
Overview

The Azure Monitor pipeline extends the data collection capabilities of Azure Monitor to edge and multi-cloud environments. It enables at-scale data collection (over 100k events per second) and routing of telemetry data before it's sent to the cloud. The pipeline can cache data locally and sync with the cloud when connectivity is restored, so telemetry still reaches Azure Monitor even with intermittent connectivity. Learn more here - Configure Azure Monitor pipeline - Azure Monitor | Microsoft Learn

Why transformations matter

- Lower costs: Filter and aggregate before ingestion to reduce ingestion volume and, in turn, lower ingestion costs.
- Better analytics: Standardized schemas mean faster queries and cleaner dashboards.
- Future-proof: Built-in schema validation prevents surprises during deployment.

Azure Monitor pipeline solves the challenges of high ingestion costs and complex analytics by enabling transformations before ingestion, so your data is clean, structured, and optimized before it even hits your Log Analytics workspace.

Key features in public preview

1. Schema change detection
One of the most exciting additions is schema validation for Syslog and CEF:
- Integrated into the "Check KQL Syntax" button in the Strato UI.
- Detects if your transformation introduces schema changes that break compatibility with standard tables.
- Provides actionable guidance:
  - Option 1: Remove schema-changing transformations like aggregations.
  - Option 2: Send data to a custom table that supports custom schemas.
This ensures your pipeline remains robust and compliant with analytics requirements. For example, extending to new columns that don't match the schema of the Syslog table throws an error during validation and asks the user to either send the data to a custom table or remove the transformations. Filtering, by contrast, does not modify the schema of the data at all, so no validation error is thrown and the user can send it to a standard table directly.

2. Pre-built KQL templates
Apply ready-to-use templates for common transformations. Save time and minimize errors when writing queries.

3. Automatic schema standardization for Syslog and CEF
Automatically schematize CEF and Syslog data to fit standard tables, without the user adding any transformations to convert the raw data to Syslog/CEF.

4. Advanced filtering
Drop unwanted events based on attributes like:
- Syslog: Facility, ProcessName, SeverityLevel.
- CEF: DeviceVendor, DestinationPort.
Reduce noise and optimize ingestion costs.

5. Aggregation for high-volume logs
Group events by key fields (e.g., DestinationIP, DeviceVendor) into 1-minute intervals. Summarize high-frequency logs for actionable insights.

6. Drop unnecessary fields
Remove redundant columns to streamline data and reduce storage overhead.

A minimal transformation combining filtering and aggregation is sketched after the function list below.

Supported KQL functions

1. Aggregation: summarize (by), sum, max, min, avg, count, bin
2. Filtering: where, contains, has, in, and, or, equality (==, !=), comparison (>, >=, <, <=)
3. Schematization: extend, project, project-away, project-rename, project-keep, iif, case, coalesce, parse_json
4. Variables for expressions or functions: let
5. Other functions
   - String: strlen, replace_string, substring, strcat, strcat_delim, extract
   - Conversion: tostring, toint, tobool, tofloat, tolong, toreal, todouble, todatetime, totimespan
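To make the filtering and aggregation features above concrete, here is a minimal sketch of a pipeline transformation built only from the supported functions listed above. It assumes the transformation operates on an input named `source` with CEF-shaped columns such as DeviceVendor, DestinationPort, and DestinationIP; the column names, filter values, and time bin are illustrative assumptions rather than a definitive recipe.

```kusto
// Minimal sketch (assumed CEF-shaped schema): keep only events from one vendor and drop
// routine HTTPS traffic, then roll the rest into 1-minute bins per destination to cut
// ingestion volume. "Contoso" and port 443 are placeholder values.
source
| where DeviceVendor == "Contoso" and DestinationPort != 443
| summarize EventCount = count(), LastSeen = max(TimeGenerated)
    by DestinationIP, DeviceVendor, bin(TimeGenerated, 1m)
```

Because the summarize step changes the output schema, a transformation like this would be flagged by the schema change detection described above and would need to target a custom table; a filter-only transformation (the first where line alone) could still flow to a standard table.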
Get started today

Head to the Azure portal and explore the new Azure Monitor pipeline transformations UI. Apply templates, validate your KQL, and experience the power of Azure Monitor pipeline transformations. Find more information in the public docs here - Configure Azure Monitor pipeline transformations - Azure Monitor | Microsoft Learn
Accelerating SCOM to Azure Monitor Migrations with Automated Analysis and ARM Template Generation

Azure Monitor has become the foundation for modern, cloud-scale monitoring on Azure. Built to handle massive volumes of telemetry across infrastructure, applications, and services, it provides a unified platform for metrics, logs, alerts, dashboards, and automation. As organizations continue to modernize their environments, Azure Monitor is increasingly the target state for enterprise monitoring strategies.

With Azure Monitor increasingly becoming the destination platform, many organizations face a familiar challenge: migrating from System Center Operations Manager (SCOM). While both platforms serve the same fundamental purpose - keeping your infrastructure healthy and alerting you to problems - the migration path isn't always straightforward. SCOM Management Packs contain years of accumulated monitoring logic: performance thresholds, event correlation rules, service discoveries, and custom scripts. Translating all of this into Azure Monitor's paradigm of Log Analytics queries, alert rules, and Data Collection Rules can be a significant undertaking.

To help with this challenge, members of the community have built and shared a tool that automates much of the analysis and artifact generation. The community-driven SCOM to Azure Monitor Migration Tool accepts Management Pack XML files and produces several outputs designed to accelerate migration planning and execution.

The tool parses the Management Pack structure and identifies all monitors, rules, discoveries, and classes. Each component is analyzed for migration complexity: some translate directly to Azure Monitor equivalents, while others require custom implementation or may not have a direct equivalent. Results are organized into two clear categories:

- Auto-Migrated Components - covered by the generated templates and ready for deployment
- Requires Manual Migration - components that need custom implementation or review

Instead of manually authoring Azure Resource Manager templates, the tool generates deployable infrastructure-as-code artifacts, including:

- Scheduled Query Alert rules mapped from SCOM monitors and rules
- Data Collection Rules for performance counters and Windows Events
- Custom Log DCRs for collecting script-generated log files
- Action Groups for notification routing
- Log Analytics workspace configuration (for new environments)

For streamlined deployment, the tool offers a combined ARM template that deploys all resources in a single operation:

- Log Analytics workspace (create new or connect to an existing workspace)
- Action Groups with email notification
- All alert rules
- Data Collection Rules
- Monitoring Workbook

One download, one deployment command - with configurable parameters for workspace settings, notification recipients, and custom log paths.

The tool generates an Azure Monitor Workbook dashboard tailored to the Management Pack, including:

- Performance counter trends over time
- Event monitoring by severity with drill-down tables
- Service health overview (stopped services)
- Active alerts summary from Azure Resource Graph

This provides immediate operational visibility once the monitoring configuration is deployed.

Each migrated component includes the Kusto Query Language (KQL) equivalent of the original SCOM monitoring logic. These queries can be used as-is or refined to match environment-specific requirements.
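As an illustration of what those generated KQL equivalents can look like, the query below sketches a scheduled-query alert for a SCOM-style Windows service monitor, using the standard Log Analytics Event table. The specific service name, look-back window, and rendering-based match are hypothetical assumptions - the tool's actual output for your Management Pack will differ.

```kusto
// Hypothetical migrated service monitor: detect Service Control Manager stop events
// (EventID 7036) for a monitored service within the evaluation window.
Event
| where TimeGenerated > ago(15m)
| where EventLog == "System"
    and Source == "Service Control Manager"
    and EventID == 7036
| where RenderedDescription has "Print Spooler" and RenderedDescription has "stopped"
| summarize StopEvents = count() by Computer
```

A scheduled query alert rule built on a query like this would typically fire whenever the result count is greater than zero.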
The workflow is designed to reduce the manual effort involved in migration planning:

1. Export your Management Pack XML from SCOM
2. Upload it to the tool
3. Review the analysis - components are separated into auto-migrated and requires manual work
4. Download the All-in-One ARM template (or individual templates)
5. Customize parameters such as workspace name and action group recipients
6. Deploy to your Azure subscription

For a typical Management Pack, such as Windows Server Active Directory monitoring, you may see 120+ components that can be migrated directly, with an additional 15-20 components requiring manual review due to complex script logic or SCOM-specific functionality.

The tool handles straightforward translations well:

- Performance threshold monitors become metric alerts or log-based alerts (a sketch of such a log-based alert query appears at the end of this post)
- Windows Event collection rules become Data Collection Rule configurations
- Service monitors become scheduled query alerts against Heartbeat or Event tables

Components that typically require manual attention:

- Complex PowerShell or VBScript probe actions
- Monitors that depend on SCOM-specific data sources
- Correlation rules spanning multiple data sources
- Custom workflows with proprietary logic

The tool clearly identifies which category each component falls into, allowing teams to plan their migration effort with confidence.

A Note on Validation

This is a community tool, not an officially supported Microsoft product. Generated artifacts should always be reviewed and tested in a non-production environment before deployment. Every environment is different, and the tool makes reasonable assumptions that may require adjustment. Even so, starting with structured ARM templates and working KQL queries can significantly reduce time to deployment.

Try It Out

The tool is available at https://tinyurl.com/Scom2Azure. Upload a Management Pack, review the analysis, and see what your migration path looks like.
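To ground the performance-threshold translation mentioned above, here is a sketch of the kind of log-based alert query a migrated CPU threshold monitor might map to. It uses the standard Perf table; the 90% threshold, 10-minute window, and counter instance are illustrative assumptions, not the tool's literal output.

```kusto
// Hypothetical log-based equivalent of a SCOM CPU threshold monitor: average
// '% Processor Time' above 90% over the last 10 minutes, evaluated per computer.
Perf
| where TimeGenerated > ago(10m)
| where ObjectName == "Processor"
    and CounterName == "% Processor Time"
    and InstanceName == "_Total"
| summarize AvgCpu = avg(CounterValue) by Computer
| where AvgCpu > 90
```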
Announcing Application Insights SDK 3.x for .NET

Microsoft remains committed to making OpenTelemetry the foundation of modern observability on Azure. Today, we're excited to take the next step on that journey with a major release of the Application Insights SDK 3.x for .NET.

Migrate to OpenTelemetry with a Major Version Bump

With Application Insights SDK 3.x, developers can migrate to OpenTelemetry-based instrumentation with dramatically less effort. Until now, migrating from the classic Application Insights SDK to the Azure Monitor OpenTelemetry Distro required a clean install and code updates. With this release, most customers can adopt OpenTelemetry simply by upgrading their SDK version. The new SDK automatically routes your classic Application Insights Track* API calls through a new mapping layer that emits OpenTelemetry signals under the hood.

Why This Matters

By upgrading, you gain:

✔ Vendor-neutral OpenTelemetry APIs going forward. You can immediately begin writing new code using OpenTelemetry APIs, ensuring future portability and alignment with industry standards.
✔ Access to the full OpenTelemetry ecosystem. You can now easily plug in community instrumentation libraries and exporters. For example, collecting Redis Cache dependency data - previously not supported with Application Insights 2.x - becomes straightforward.
✔ Multi-exporter support. Export to Azure Monitor and another system (e.g., a SIEM or backend of your choice) simultaneously if your scenario requires it.

What Still Requires Attention: Initializers and Processors

One area where automatic migration is not possible is telemetry processors and telemetry initializers. These Application Insights extensibility points were extremely flexible, allowing custom property injection, filtering, or deletion logic. OpenTelemetry supports similar behavior, but through more structured concepts such as span processors. See here for a full list of breaking changes. On a positive note, these OpenTelemetry components generally deliver better performance and clearer behavior. Our documentation assists with migration, and we plan to release an MCP with guardrails to help LLMs produce accurate migration code.

Keeping the essence of Azure Monitor Application Insights

While OpenTelemetry encourages the use of the OpenTelemetry Collector, we remain committed to preserving the simplicity that customers love about Azure Monitor Application Insights. The Azure Monitor OpenTelemetry Distro is all that's required to get started. It's just a single NuGet package, and you configure it with a connection string. Telemetry flows in minutes. No Collector is required unless you explicitly want one. We achieve this with extensive built-in sampling to manage cost and a trace-preservation algorithm, so you see complete traces. This keeps the "just works" spirit of Azure Monitor Application Insights intact, while aligning with OpenTelemetry standards.

Feedback

If you encounter issues during the upgrade, please open a support ticket - we want the migration to be smooth. If you'd like to share feedback or engage directly with the product team, email us at otel@microsoft.com. This is not an official support channel, but we read every email and appreciate hearing feedback directly from you!
Announcing the Public Preview of Azure Monitor health models

Troubleshooting modern cloud-native workloads has become increasingly complex. As applications scale across distributed services and regions, pinpointing the root cause of performance degradation or outages often requires navigating a maze of disconnected signals, metrics, and alerts. This fragmented experience slows down troubleshooting and burdens engineering teams with manual correlation work.

We address these challenges by introducing a unified, intelligent concept of workload health that's enriched with application context. Health models streamline how you monitor, assess, and respond to issues affecting your workloads. Built on Azure service groups, they provide an out-of-the-box model tailored to your environment, consolidate signals to reduce alert noise, and surface actionable insights - all designed to accelerate detection, diagnosis, and resolution across your Azure landscape.

Overview

Azure Monitor health models enable customers to monitor the health of their applications with ease and confidence. These models use the Azure-wide workload concept of service groups to infer the scope of workloads and provide out-of-the-box health criteria based on platform metrics for Azure resources.

Key Capabilities

Out-of-the-Box Health Model
Customers often struggle with defining and monitoring the health of their workloads due to the variability of metrics across different Azure resources. Azure Monitor health models provide a simplified out-of-the-box health experience built using Azure service group membership. Customers can define the scope of their workload using service groups and receive default health criteria based on platform metrics. This includes recommended alert rules for various Azure resources, ensuring comprehensive monitoring coverage.

Improved Detection of Workload Issues
Isolating the root cause of workload issues can be time-consuming and challenging, especially when dealing with multiple signals from various resources. The health model aggregates health signals across the model to generate a single health notification, helping customers isolate the type of signal that became unhealthy. This enables quick identification of whether the issue is related to backend services or user-centric signals.

Quick Impact Assessment
Assessing the impact of workload issues across different regions and resources can be complex and slow, leading to delayed responses and prolonged downtime. The health model provides insights into which Azure resources or components have become unhealthy, which regions are affected, and the duration of the impact based on health history. This allows customers to quickly assess the scope and severity of issues within the workload.

Localize the Issue
Identifying the specific signals and resources that triggered a health state change can be difficult, leading to inefficient troubleshooting and resolution processes. Health models inform customers which signals triggered the health state change, and which service group members were affected. This enables quick isolation of the trouble source and notifies the relevant team, streamlining the troubleshooting process.

Customizable Health Criteria for Bespoke Workloads
Many organizations operate complex, bespoke workloads that require their own specific health definitions. Relying solely on default platform metrics can lead to blind spots or false positives, making it difficult to accurately assess the true health of these custom applications. Azure Monitor health models allow customers to tailor health assessments by adding custom health signals. These signals can be sourced from Azure Monitor data such as Application Insights, Managed Prometheus, and Log Analytics. This flexibility empowers teams to tune the health model to reflect the unique characteristics and performance indicators of their workloads, ensuring more precise and actionable health insights.
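As a purely illustrative example of the kind of Log Analytics query that could back a custom health signal, the sketch below computes a server-side failure rate from workspace-based Application Insights request telemetry. The table and columns (AppRequests, Success) are standard, but the 5% threshold and the way a health model signal consumes the result are assumptions - refer to the health models documentation for the supported signal configuration.

```kusto
// Illustrative custom-signal query: percentage of failed requests in the last 5 minutes.
// A signal built on a query like this could mark the component unhealthy when the
// failure rate crosses an example threshold of 5%.
AppRequests
| where TimeGenerated > ago(5m)
| summarize Total = count(), Failed = countif(Success == false)
| extend FailureRatePercent = iif(Total == 0, 0.0, 100.0 * Failed / Total)
```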
Getting Started

Ready to simplify and accelerate how you monitor the health of your workloads? Getting started with Azure Monitor health models is easy - and during the public preview, it's completely free to use. Pricing details will be shared ahead of general availability (GA), so you can plan with confidence.

Start Monitoring in Minutes

1. Define Your Service Group - Create your service group and add the relevant resources as members of the service group. If you don't yet have access to service groups, you can join here.
2. Create Your Health Model - In the Azure portal, navigate to Health Models and create your first model. You'll get out-of-the-box health criteria automatically applied.
3. Customize to Fit Your Needs - In many cases the default health signals may suit your needs, but we support customization as well.
4. Investigate and Act - Use the health timeline and our alerting integration to quickly assess impact, isolate issues, and take action - all from a single pane of glass.

You can access health models today in the Azure portal! For more details on how to get started with health models, please refer to our documentation.

We Want to Hear From You

Azure Monitor health models are built with our customers in mind - and your feedback is essential to shaping the future of this experience. Whether you're using the out-of-the-box health model or customizing it to fit your unique workloads, we want to know what's working well and where we can improve.

Share Your Feedback

- Use the "Give Feedback" feature directly within the Azure Monitor health models experience to send us your thoughts in context.
- Post your ideas in the Azure Monitor community.
- Prefer email? Reach out to us at azmonhealthmodels@service.microsoft.com - we're listening.

Your insights help us prioritize features, improve usability, and ensure Azure Monitor continues to meet the evolving needs of modern cloud-native operations.
Data Collection Rule: XPath queries to filter 7036 without WMI etc.

Hi,

In PowerShell on the server I'm trying to filter out some events from Event ID 7036 (Service Control Manager service start/stop). I'm trying to filter out "WMI Performance Adapter", so I don't want those events imported into the Log Analytics workspace by the data collection rule. Can you help me figure out what I'm doing wrong?

```powershell
$XPath = 'System!*[System[(EventID="7036")]] and [EventData[Data[@Name="param1"]!="WMI Performance Adapter"]]'
Get-WinEvent -FilterXPath $XPath
```

```
Get-WinEvent : Could not retrieve information about the Security log. Error: Attempted to perform an unauthorized operation..
At line:3 char:1
+ Get-WinEvent -FilterXPath $XPath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-WinEvent], Exception
    + FullyQualifiedErrorId : LogInfoUnavailable,Microsoft.PowerShell.Commands.GetWinEventCommand
Get-WinEvent : No events were found that match the specified selection criteria.
At line:3 char:1
+ Get-WinEvent -FilterXPath $XPath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (:) [Get-WinEvent], Exception
    + FullyQualifiedErrorId : NoMatchingEventsFound,Microsoft.PowerShell.Commands.GetWinEventCommand
```

```powershell
$XPath = 'System!*[System[(EventID="7036")]] and [EventData[Data[@Name="param1"]!="WMI Performance Adapter"]]'
Get-WinEvent -LogName 'System' -FilterXPath $XPath
```

```
Get-WinEvent : The specified query is invalid
At line:2 char:1
+ Get-WinEvent -LogName 'System' -FilterXPath $XPath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-WinEvent], EventLogException
    + FullyQualifiedErrorId : System.Diagnostics.Eventing.Reader.EventLogException,Microsoft.PowerShell.Commands.GetWinEventCommand
```
Observability for the Age of Generative AI

Every generation of computing brings new challenges in how we monitor and trust our systems. With the rise of Generative AI, applications are no longer static code - they're living systems that plan, reason, call tools, and make choices dynamically. Traditional observability, built for servers and microservices, simply can't tell you when an AI agent is correct, safe, or cost-efficient. We're reimagining observability for this new world. At Ignite, we introduced the next wave of Azure Monitor and AI Foundry integration - purpose-built for GenAI apps and agents.

End-to-End GenAI Observability Across the AI Stack

Customers can see not just whether their systems are up or fast, but also whether their agent responses are accurate. Azure Monitor, in partnership with Foundry, unifies agent telemetry with infrastructure, application, network, and hardware signals - creating a true end-to-end view that spans AI agents, the services they call, and the compute they run on.

New capabilities include:

- Agent Overview Dashboard in Grafana and Azure - Gain a unified view of one or more GenAI agents, including success rate, grounding quality, safety violations, latency, and cost per outcome. Customize dashboards in Grafana or Azure Monitor Workbooks to detect regressions instantly after a model or prompt change - and understand how those changes affect user experience and spend.
- AI-Tailored Trace View - Follow every AI decision as a readable story: plan → reasoning → tool calls → guardrail checks. Identify slow or unsafe steps in seconds, without sifting through thousands of spans.
- AI-Aware Trace Search by Attributes - Search, sort, and filter across millions of runs using GenAI-specific attributes like model ID, grounding score, or cost. Find the "needle" in your GenAI haystack in a single query (an illustrative query sketch appears at the end of this post).
- Foundry Low-Code Agent Monitoring - Agents created through Foundry's visual, low-code interface are now automatically observable. Without writing a single line of code, you can track reliability, safety, and cost metrics from day one.
- Full-Stack Visibility Across the AI Stack - All evaluations, traces, and red-teaming results are now published to Azure Monitor, where agent signals correlate seamlessly with infrastructure KPIs and application telemetry to deliver a unified operational view.

Check out our get started documentation.

Powered by OpenTelemetry Innovation

This work builds directly on the new OpenTelemetry extensions announced in our recent Azure AI Foundry blog post. Microsoft is helping define the OpenTelemetry agent specification, extending it to capture multi-agent orchestration traces, LLM reasoning context, and evaluation signals - enabling interoperability across Azure Monitor, AI Foundry, and partner tools such as Datadog, Arize, and Weights & Biases. By building on open standards, customers gain consistent visibility across multi-cloud and hybrid AI environments - without vendor lock-in.

Built for Enterprise Scale and Trust

With open standards and deep integration between Azure Monitor and AI Foundry, organizations can now apply the same discipline they use for traditional applications to their GenAI workloads, complete with compliance, cost governance, and quality assurance. GenAI is redefining what it means to operate software. With these innovations, Microsoft is giving customers the visibility, control, and confidence to operate AI responsibly, at enterprise scale.
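To give a flavor of the attribute-based trace search called out above, here is a sketch of a Log Analytics query over workspace-based Application Insights dependency telemetry, filtering on OpenTelemetry gen_ai.* attributes surfaced in customDimensions. The attribute keys follow the OpenTelemetry generative-AI semantic conventions, but the exact keys, table, and model name depend on how your agents are instrumented - treat this as an assumption-laden illustration, not the product's built-in query.

```kusto
// Illustrative attribute-based search over GenAI spans: recent calls to an example
// model, ranked by output token usage. Assumes gen_ai.* attributes land in
// customDimensions of AppDependencies; adjust keys to match your instrumentation.
AppDependencies
| where TimeGenerated > ago(1h)
| extend Model = tostring(customDimensions["gen_ai.request.model"]),
         OutputTokens = toint(customDimensions["gen_ai.usage.output_tokens"])
| where Model == "gpt-4o"
| top 20 by OutputTokens desc
| project TimeGenerated, OperationId, Name, Model, OutputTokens, DurationMs
```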
Announcing public preview of query-based metric alerts in Azure Monitor

Azure Monitor metric alerts are now more powerful than ever

Azure Monitor metric alerts now support all Azure metrics - including platform, Prometheus, and custom metrics - giving you complete coverage for your monitoring needs. In addition, metric alerts now offer powerful query capabilities with PromQL, enabling complex logic across multiple metrics and resources. This makes it easier to detect patterns, correlate signals, and customize alerts for modern workloads like Kubernetes clusters, VMs, and custom applications.

Key Benefits

- Full metrics coverage: Metric alerts now support alerting on any Azure metrics, including platform metrics, Prometheus metrics, and custom metrics.
- PromQL-powered conditions: Use PromQL to select, aggregate, and transform metrics for advanced alerting scenarios.
- Powerful event detection: Query-based alert rules can detect intricate patterns across multiple time series based on metric change ratio, complex aggregations, or comparison between different metrics and time series. You can also analyze metrics across different time windows to identify changes in metric behavior over time.
- Flexible scoping: For query-based alert rules, choose between resource-centric alerts for granular RBAC or workspace-centric alerts for cross-resource visibility.
- Alerting at scale: Query-based alert rules allow monitoring metrics from multiple resources within a subscription or a resource group, using a single rule.
- Managed identity support: Securely authorize queries using Azure managed identity, ensuring compliance and reducing credential management overhead.
- Customizable notifications: Add dynamic custom properties and custom email subjects for faster triage and context-rich alerting.
- Reuse community alerts: Easily import and reuse PromQL alert queries from the open-source community or from other Prometheus-based monitoring systems.

Supported metrics

At this time, query-based metric alerts support any metrics ingested into an Azure Monitor workspace (AMW). This currently includes:

- Metrics collected by Azure Monitor managed service for Prometheus, from Azure Kubernetes Service (AKS) clusters or from other sources
- Virtual machine OpenTelemetry (OTel) guest OS metrics
- Other OTel custom metrics collected into Azure Monitor

You can still create threshold-based metric alerts as before on Azure platform metrics. Query-based alerts on platform metrics will be added in future releases.

Comparison: Query-based metric alerts vs. Prometheus rule groups alerts

Query-based metric alerts serve as an alternative to alerts defined in Prometheus rule groups. Both options remain viable and execute the same PromQL-based alerting logic. However, metric alerts are natively integrated with Azure Monitor, aligning seamlessly with other Azure alert types. They now support all your metric alerting needs within the same rule type. They also offer richer functionality and greater flexibility, making them a strong choice for teams looking for consistency across Azure monitoring solutions. See the table below for a detailed comparison of the two alternatives. Stay tuned - additional enhancements to metric alerts are coming in future releases!
| Feature | Azure Prometheus rule groups | Query-based metric alerts |
|---|---|---|
| Alert rule management | Part of a rule group resource | Independent Azure resource |
| Supported metrics | Metrics in AMW (Managed Prometheus) | Metrics in AMW (Managed Prometheus, OTel metrics) |
| Condition logic | PromQL-based query | PromQL-based query |
| Aggregation & transformation | Full PromQL support | Full PromQL support |
| Scope | Workspace-wide | Resource-centric or workspace-wide |
| Alerting at scale | Not supported | Subscription level, resource-group level |
| Cross-resource conditions | Supported | Supported |
| RBAC granularity | Workspace level | Resource or workspace level |
| Managed identity support | Not supported | Supported |
| Notification customization | Supported - Prometheus labels and annotations | Advanced - dynamic custom properties, custom email subject |

Getting Started

If you have an Azure Monitor workspace containing Prometheus or OpenTelemetry metrics, you can create query-based metric alert rules today. Rules can be created and managed using the Azure portal, ARM templates, or the Azure REST API. For details, visit the Azure Monitor documentation.
Generally Available - Azure Monitor Private Link Scope (AMPLS) Scale Limits Increased by 10x!

Introduction

We are excited to announce the General Availability (GA) of the Azure Monitor Private Link Scope (AMPLS) scale limit increase, delivering 10x scalability improvements compared to previous limits. This enhancement empowers customers to securely connect more Azure Monitor resources via Private Link, ensuring network isolation, compliance, and Zero Trust alignment for large-scale environments.

What is Azure Monitor Private Link Scope (AMPLS)?

Azure Monitor Private Link Scope (AMPLS) is a feature that allows you to securely connect Azure Monitor resources to your virtual network using private endpoints. This ensures that your monitoring data is accessed only through authorized private networks, preventing data exfiltration and keeping all traffic inside the Azure backbone network.

AMPLS - Scale Limits Increased by 10x in Public Cloud and Sovereign Cloud (Fairfax/Mooncake) Regions

We are excited to share that the scale limits for Azure Monitor Private Link Scope (AMPLS) have been increased tenfold (10x) in public and sovereign cloud regions as part of the General Availability. This substantial enhancement empowers our customers to manage their resources more efficiently and securely with private links using AMPLS, ensuring that workload logs are routed via the Microsoft backbone network.

What's New?

- 10x scale increase: Connect up to 3,000 Log Analytics workspaces per AMPLS (previously 300) and up to 10,000 Application Insights components per AMPLS (previously 1,000).
- 20x resource connectivity: Each Azure Monitor resource can now connect to 100 AMPLS resources (previously 5).
- Enhanced UX/UI: The redesigned AMPLS interface supports loading 13,000+ resources with pagination for smooth navigation.
- Private endpoint support: Each AMPLS object can connect to 10 private endpoints, ensuring secure telemetry flows.

Why It Matters

Top Azure Strategic 500 customers, including major telecom service providers and banking and financial services organizations, have noted that previous AMPLS limits did not adequately support their increasing requirements. The demand for private links has grown 3-5 times over existing capacity, affecting both network isolation and integration of essential workloads. This General Availability release resolves these issues, providing centralized monitoring at scale while maintaining robust security and performance.

Customer Stories

Our solution now enables customers to scale their Azure Monitor resources significantly, ensuring seamless network configurations and enhanced performance.

Case Study: Leading Banking & Financial Services Customer

Challenge: The banking customer faced complexity in delivering personalized insights due to intricate workflows and content systems. They needed a solution that could scale securely while maintaining compliance and performance for business-critical applications.

Solution: The banking customer implemented Azure Monitor Private Link Scope (AMPLS) to enhance the security and performance of financial models for smart finance assistants, leading to greater efficiency and improved client engagement. To ensure secure telemetry flow and compliance, the customer leveraged the AMPLS scale limit increase feature.
Business Impact:

- Strengthened security posture aligned with Zero Trust principles
- Improved operational efficiency for monitoring and reporting
- Delivered a future-ready architecture that scales with evolving compliance and performance demands

Case Study: Leading Telecom Service Provider - Scaling Secure Monitoring with AMPLS

Architecture: A leading telecom service provider employs a highly micro-segmented design where each DevOps team operates in its own workspace to maximize security and isolation.

Challenge: While this design strengthens security, it introduces complexity for large-scale monitoring and reporting due to physical and logical limitations on Azure Monitor Private Link Scope (AMPLS). Previous scale limits made it difficult to centralize telemetry without compromising isolation.

Solution: The AMPLS scale limit increase enabled the telecom service provider to expand Azure Monitor resources significantly. Monitoring traffic now routes through Microsoft's backbone network, reducing data exfiltration risks and supporting Zero Trust principles.

Impact & Benefits:

- Scalability: Supports up to 3,000 Log Analytics workspaces and 10,000 Application Insights components per AMPLS (10x increase).
- Efficiency: Each Azure Monitor resource can now connect to 100 AMPLS resources (20x increase).
- Security: Private connectivity via the Microsoft backbone mitigates data exfiltration risks.
- Operational excellence: Simplifies configuration for 13K+ Azure Monitor resources, reducing overhead for DevOps teams.

Customer Benefits & Results

Our solution significantly enhances customers' ability to manage Azure Monitor resources securely and at scale using Azure Monitor Private Link Scope (AMPLS). Key benefits:

- Massive scale increase: 3,000 Log Analytics workspaces per AMPLS (previously 300) and 10,000 Application Insights components per AMPLS (previously 1,000); each Azure Monitor resource can now connect to up to 100 AMPLS resources (20x increase).
- Broader resource support: Supported resource types include Data Collection Endpoints (DCE), Log Analytics workspaces, and Application Insights components.
- Improved UX/UI: The redesigned AMPLS interface supports loading 13,000+ Azure Monitor resources with pagination for smooth navigation.
- Private endpoint connectivity: Each AMPLS object can connect to 10 private endpoints, ensuring secure telemetry flows.

Resources

Explore the new capabilities of Azure Monitor Private Link Scope (AMPLS) and see how it can transform your network isolation and resource management. Visit our Azure Monitor Private Link Scope (AMPLS) documentation page for more details and start leveraging these enhancements today! For detailed information on configuring Azure Monitor Private Link Scope and Azure Monitor resources, please refer to the following links:

- Use Azure Private Link to connect networks to Azure Monitor - Azure Monitor | Microsoft Learn
- Design your Azure Private Link setup - Azure Monitor | Microsoft Learn
- Configure your private link - Azure Monitor | Microsoft Learn
Advancing Full-Stack Observability with Azure Monitor at Ignite 2025

New AI-powered innovations in the observability space

First, we're excited to usher in the era of agentic cloud operations with Azure Copilot agents. At Ignite 2025, we are announcing the preview of the Azure Copilot observability agent to help you enhance full-stack troubleshooting. Formerly "Azure Monitor investigate", the observability agent streamlines troubleshooting across application services and resources such as AKS and VMs with advanced root cause analysis in alerts, the portal, and Azure Copilot (gated preview). By automatically correlating telemetry across resources and surfacing actionable findings, it empowers teams to resolve issues faster, gain deeper visibility, and collaborate effectively. Learn more here about the observability agent and learn about additional agents in Azure Copilot here.

Additionally, with the new Azure Copilot, we are streamlining agentic experiences across Azure. From the operations center in the Azure portal, you get a single view to navigate, operate, and optimize your environments and invoke agents in your workflows. You also get suggested top actions within the observability blade of operations center to prioritize, diagnose, and resolve issues with support from the observability agent. Learn more here.

In the era of AI, more and more apps are now AI apps. That's why we're enhancing our observability capabilities for GenAI and agents: Azure Monitor brings agent-level visibility and control into a single experience, in partnership with Observability in Foundry Control Plane, through a new agent details view (public preview) showcasing success metrics, quality indicators, safety checks, and cost insights in one place. Simplified tracing also transforms every agent run into a readable, plan-and-act narrative for faster understanding. On top of these features, the new smart trace search enables faster detection of anomalies - such as policy violations, unexpected cost spikes, or model regressions - so teams can troubleshoot and optimize with confidence. These new agentic experiences build upon a solid observability foundation provided by Azure Monitor. Learn more here.

We're making several additional improvements in Azure Monitor.

Simplified Onboarding & More Centralized Visibility

- Streamlined onboarding: Azure Monitor now offers streamlined onboarding for VMs, containers, and applications with sensible defaults and abstraction layers. This means ITOps teams can enable monitoring across environments in minutes, not hours. Previously, configuring DCRs and linking Log Analytics workspaces was a multi-step process; now, you can apply predefined templates and scale monitoring across hundreds of VMs faster than before.
- Centralized dashboards: A new monitor overview page in operations center consolidates top suggested actions and Azure Copilot-driven workflows for rapid investigation. Paired with the new monitoring coverage page (public preview) in Azure Monitor, ITOps can quickly identify gaps based on Azure Advisor recommendations, enable VM Insights and Container Insights at scale, and act on monitoring recommendations - all from a single pane of glass. Learn more here.
- Richer visualizations: Azure Monitor dashboards with Grafana are now GA, delivering rich visualizations and data transformation capabilities on Prometheus metrics, Azure resource metrics, and more. Learn more here.
- Cloud to edge visibility: With expanded support for Arc-enabled Kubernetes with OpenShift and Azure Red Hat OpenShift in Container Insights and Managed Prometheus, Azure Monitor offers an even more complete set of services for monitoring the health and performance of different layers of Kubernetes infrastructure and the applications that depend on it. Learn more here.

Advanced Logs, Metrics, and Alert Management

- Logs & metrics innovations: Azure Monitor now supports log filtering and transformation (GA), as well as the emission of logs to additional destinations (public preview) such as Azure Data Explorer and Fabric - unlocking real-time analytics and more seamless data control. Learn more here.
- More granular access for managing logs: Granular RBAC for Log Analytics workspaces ensures compliance and least-privilege principles across teams, now in general availability. Learn more here.
- Dynamic thresholds for log search alerts (public preview): Now you can apply the advanced machine learning methods of dynamic threshold calculations to enhance monitoring with log search alerts. Learn more here.
- Query-based metric alerts (public preview): Get rich and flexible query-based alerting on Prometheus, VM guest OS, and custom OTel metrics to reduce complexity and unblock advanced alerting scenarios. Learn more here.

OpenTelemetry Ecosystem Expansion

Azure Monitor doubles down on our commitment to OpenTelemetry with expanded support for monitoring applications deployed to Azure Kubernetes Service (AKS) by using OTLP for instrumentation and data collection. New capabilities include:

- Auto-instrumentation with the Azure Monitor OpenTelemetry Distro for Java and Node.js apps on AKS (public preview): this reduces friction for teams adopting OTel standards and ensures consistent telemetry across diverse compute environments.
- Auto-configuration for apps on AKS in any language already instrumented with the open-source OpenTelemetry SDK to emit telemetry to Azure Monitor. Learn more here.

Additionally, we are making it easier to gain richer and more consistent visibility across Azure VMs and Arc servers with OpenTelemetry visualizations, offering standardized system metrics, per-process insights, and extensibility to popular workloads on a more cost-efficient and performant solution. Learn more here.

Next Steps

These innovations redefine observability from cloud to edge - simplifying onboarding, accelerating troubleshooting, and embracing open standards. For ITOps and DevOps teams, this means fewer blind spots, faster MTTR, and improved operational resilience. Whether you're joining us at Microsoft Ignite 2025 in person or online, there are plenty of ways to connect with the Azure Monitor team and learn more:

- Attend breakout session BRK149 for a deep dive into Azure Monitor's observability capabilities and best practices for optimizing cloud resources.
- Attend breakout session BRK145 to learn more about how agentic AI can help you streamline cloud operations and management.
- Attend breakout session BRK190 to learn about how Azure Monitor and Microsoft Foundry deliver an end-to-end observability experience for your AI apps and agents.
- Join theater demo THR735 to see a live demo on monitoring AI agents in production.
- Connect with Microsoft experts at the Azure Copilot, Operations, and Management expert meet-up booth to get your questions answered.