Announcing new public preview capabilities in Azure Monitor pipeline
Azure Monitor pipeline, similar to an ETL (Extract, Transform, Load) process, enhances traditional data collection methods. It streamlines data collection from various sources through a unified ingestion pipeline and utilizes a standardized configuration approach that is more efficient and scalable. As Azure Monitor pipeline is used in more complex and security-sensitive environments, including on-premises infrastructure, edge locations, and large Kubernetes clusters, certain patterns and challenges show up consistently. Based on what we've been seeing across these deployments, we're sharing a few new capabilities now available in public preview. These updates focus on three areas that tend to matter most at scale: secure ingestion, control over where pipeline instances run, and processing data before it lands in Azure Monitor. Here's what's new, and why it matters:

- Secure ingestion with TLS and mutual TLS (mTLS)
- Pod placement controls for Azure Monitor pipeline
- Transformations and Automated Schema Standardization

Secure ingestion with TLS and mutual TLS (mTLS)

Why is this needed?

As telemetry ingestion moves beyond Azure and closer to the edge, security expectations increase. In many environments, plain TCP ingestion is no longer sufficient. Teams often need:

- Encrypted ingestion paths by default
- Strong guarantees around who is allowed to send data
- A way to integrate with existing PKI and certificate management systems

In regulated or security-sensitive setups, secure authentication at the ingestion boundary is a baseline requirement, not an optional add-on.

What does this feature do?

Azure Monitor pipeline now supports TLS and mutual TLS (mTLS) for TCP-based ingestion endpoints in public preview.
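Once an mTLS-enabled endpoint is configured, you can sanity-check the handshake from a client host before pointing real senders at it. The following is an illustrative sketch using the standard openssl CLI; the hostname, port (6514 is the conventional syslog-over-TLS port), and file names are placeholders for your own environment:

```shell
# Verify the server certificate against your CA and present a client
# certificate, as an mTLS-enforcing endpoint requires both.
openssl s_client \
  -connect pipeline.example.internal:6514 \
  -CAfile ca.crt \
  -cert client.crt \
  -key client.key
```

If the endpoint rejects the connection without a valid client certificate but accepts it with one, mutual authentication is being enforced as expected.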
With this support, you can:

- Encrypt data in transit using TLS
- Enable mutual authentication with mTLS, so both the client and the pipeline endpoint validate each other
- Use your own certificates
- Enforce security requirements at ingestion time, before data is accepted

This makes it easier to securely ingest data from network devices, appliances, and on-prem workloads without relying on external proxies or custom gateways. Learn more.

Pod placement controls for Azure Monitor pipeline

Why is this needed?

As Azure Monitor pipeline scales in Kubernetes environments, default scheduling behavior often isn't sufficient. In many deployments, teams need more control to:

- Isolate telemetry workloads in multi-tenant clusters
- Run pipelines on high-capacity nodes for resource-intensive processing
- Prevent port exhaustion by limiting instances per node
- Enforce data residency or security zone requirements
- Distribute instances across availability zones for better resiliency and resource use

Without explicit placement controls, pipeline instances can end up running in sub-optimal locations, leading to performance and operational issues.

What does this feature do?

With the new executionPlacement configuration (public preview), Azure Monitor pipeline gives you direct control over how pipeline instances are scheduled. Using this feature, you can:

- Target specific nodes using labels (for example, by team, zone, or node capability)
- Control how instances are distributed across nodes
- Enforce strict isolation by allowing only one instance per node
- Apply placement rules per pipeline group, without impacting other workloads

These rules are validated and enforced at deployment time. If the cluster can't satisfy the placement requirements, the pipeline won't deploy, making failures clear and predictable. This gives you better control over performance, isolation, and cluster utilization as you scale. Learn more.
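To ground the ideas above: the placement behaviors described (label targeting, zone distribution) map onto standard Kubernetes scheduling primitives. The sketch below shows those primitives on a generic pod spec; it is illustrative only, and the actual executionPlacement configuration schema is in the linked documentation, not shown here:

```yaml
# Illustrative Kubernetes scheduling primitives of the kind that
# executionPlacement-style rules build on (labels are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: pipeline-instance
  labels:
    app: azure-monitor-pipeline
spec:
  # Target only nodes labeled for telemetry workloads.
  nodeSelector:
    workload-class: telemetry
  # Spread instances evenly across availability zones.
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: azure-monitor-pipeline
  containers:
    - name: pipeline
      image: example.azurecr.io/pipeline:latest
```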
Transformations and Automated Schema Standardization

Why is this needed?

Telemetry data is often high-volume, noisy, and inconsistent across sources. In many deployments, ingesting everything as-is and cleaning it up later isn't practical or cost-effective. There's a growing need to:

- Filter or reduce data before ingestion
- Normalize formats across different sources
- Route data directly into standard tables without additional processing

What does this feature do?

Azure Monitor pipeline data transformations, already in public preview, let you process data before it's ingested. With transformations, you can:

- Filter, aggregate, or reshape incoming data
- Convert raw syslog or CEF messages into standardized schemas
- Choose sample KQL templates to perform transformations instead of manually writing KQL queries
- Route data directly into built-in Azure tables
- Reduce ingestion volume while keeping the data that matters

Check out the recent blog about the transformations preview, or you can learn more here.

Getting started

All of these capabilities are available today in public preview as part of Azure Monitor pipeline. If you're already using the pipeline, you can start experimenting with secure ingestion, pod placement, and transformations right away. As always, feedback is welcome as we continue to refine these features on the path to general availability.

Public Preview: Azure Monitor pipeline transformations
Overview

The Azure Monitor pipeline extends the data collection capabilities of Azure Monitor to edge and multi-cloud environments. It enables at-scale data collection (over 100k events per second) and routing of telemetry data before it's sent to the cloud. In cases of intermittent connectivity, the pipeline can cache data locally, then sync with the cloud and route telemetry to Azure Monitor once connectivity is restored. Learn more here: Configure Azure Monitor pipeline - Azure Monitor | Microsoft Learn

Why transformations matter

- Lower costs: Filter and aggregate before ingestion to reduce ingestion volume and, in turn, lower ingestion costs.
- Better analytics: Standardized schemas mean faster queries and cleaner dashboards.
- Future-proof: Built-in schema validation prevents surprises during deployment.

Azure Monitor pipeline solves the challenges of high ingestion costs and complex analytics by enabling transformations before ingestion, so your data is clean, structured, and optimized before it even hits your Log Analytics workspace. Check out a quick demo here.

Key features in public preview

1. Schema change detection

One of the most exciting additions is schema validation for Syslog and CEF:

- Integrated into the "Check KQL Syntax" button in the Strato UI.
- Detects if your transformation introduces schema changes that break compatibility with standard tables.
- Provides actionable guidance:
  - Option 1: Remove schema-changing transformations like aggregations.
  - Option 2: Send data to a custom table that supports custom schemas.

This ensures your pipeline remains robust and compliant with analytics requirements. For example, in the picture below, extending to new columns that don't match the schema of the Syslog table throws an error during validation and asks the user to send to a custom table or remove the transformations.
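As a concrete illustration of that first case (the new column name is made up for the example), a transformation like the following adds a column that does not exist in the standard Syslog table, so validation flags it:

```kusto
// extend introduces a column the standard Syslog table doesn't have,
// so "Check KQL Syntax" reports a schema change: route the output to a
// custom table, or remove the extend.
source
| extend FacilitySeverity = strcat(Facility, "-", SeverityLevel)
```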
While in the example below, filtering does not modify the schema of the data at all, so no validation error is thrown and the user is able to send it to a standard table directly.

2. Pre-built KQL templates

- Apply ready-to-use templates for common transformations.
- Save time and minimize errors when writing queries.

3. Automatic schema standardization for Syslog and CEF

Automatically schematize CEF and Syslog data to fit standard tables, without the user adding any transformations to convert the raw data to Syslog/CEF.

4. Advanced filtering

Drop unwanted events based on attributes like:

- Syslog: Facility, ProcessName, SeverityLevel.
- CEF: DeviceVendor, DestinationPort.

Reduce noise and optimize ingestion costs.

5. Aggregation for high-volume logs

Group events by key fields (e.g., DestinationIP, DeviceVendor) into 1-minute intervals. Summarize high-frequency logs for actionable insights.

6. Drop unnecessary fields

Remove redundant columns to streamline data and reduce storage overhead.

Supported KQL functions

1. Aggregation: summarize (by), sum, max, min, avg, count, bin
2. Filtering: where, contains, has, in, and, or, equality (==, !=), comparison (>, >=, <, <=)
3. Schematization: extend, project, project-away, project-rename, project-keep, iif, case, coalesce, parse_json
4. Variables for expressions or functions: let
5. Other functions:
   - String: strlen, replace_string, substring, strcat, strcat_delim, extract
   - Conversion: tostring, toint, tobool, tofloat, tolong, toreal, todouble, todatetime, totimespan

Get started today

Head to the Azure portal and explore the new Azure Monitor pipeline transformations UI. Apply templates, validate your KQL, and experience the power of Azure Monitor pipeline transformations. Find more information in the public docs here: Configure Azure Monitor pipeline transformations - Azure Monitor | Microsoft Learn

Announcing public preview of query-based metric alerts in Azure Monitor
Azure Monitor metric alerts are now more powerful than ever

Azure Monitor metric alerts now support all Azure metrics, including platform, Prometheus, and custom metrics, giving you complete coverage for your monitoring needs. In addition, metric alerts now offer powerful query capabilities with PromQL, enabling complex logic across multiple metrics and resources. This makes it easier to detect patterns, correlate signals, and customize alerts for modern workloads like Kubernetes clusters, VMs, and custom applications.

Key Benefits

- Full metrics coverage: Metric alerts now support alerting on any Azure metrics, including platform metrics, Prometheus metrics, and custom metrics.
- PromQL-powered conditions: Use PromQL to select, aggregate, and transform metrics for advanced alerting scenarios.
- Powerful event detection: Query-based alert rules can now detect intricate patterns across multiple timeseries based on metric change ratio, complex aggregations, or comparison between different metrics and timeseries. You can also analyze metrics across different time windows to identify change in metric behavior over time.
- Flexible scoping: For query-based alert rules, choose between resource-centric alerts for granular RBAC or workspace-centric alerts for cross-resource visibility.
- Alerting at scale: Query-based alert rules allow monitoring metrics from multiple resources within a subscription or a resource group, using a single rule.
- Managed identity support: Securely authorize queries using Azure managed identity, ensuring compliance and reducing credential management overhead.
- Customizable notifications: Add dynamic custom properties and custom email subjects for faster triage and context-rich alerting.
- Reuse community alerts: Easily import and reuse PromQL alert queries from the open-source community or from other Prometheus-based monitoring systems.

Supported metrics

At this time, query-based metric alerts support any metrics ingested into an Azure Monitor workspace (AMW).
This currently includes:

- Metrics collected by Azure Monitor managed service for Prometheus, from Azure Kubernetes Service (AKS) clusters or from other sources.
- Virtual machine OpenTelemetry (OTel) guest OS metrics.
- Other OTel custom metrics collected into Azure Monitor.

You can still create threshold-based metric alerts on Azure platform metrics as before. Query-based alerts on platform metrics will be added in future releases.

Comparison: Query-based metric alerts vs. Prometheus rule groups alerts

Query-based metric alerts serve as an alternative to alerts defined in Prometheus rule groups. Both options remain viable and execute the same PromQL-based alerting logic. However, metric alerts are natively integrated with Azure Monitor, aligning seamlessly with other Azure alert types. They now support all your metric alerting needs within the same rule type. They also offer richer functionality and greater flexibility, making them a strong choice for teams looking for consistency across Azure monitoring solutions. See the table below for a detailed comparison of the two alternatives.

Stay tuned - additional enhancements to metric alerts are coming in future releases!
| Feature | Azure Prometheus rule groups | Query-based metric alerts |
| --- | --- | --- |
| Alert rule management | Part of a rule group resource | Independent Azure resource |
| Supported metrics | Metrics in AMW (Managed Prometheus) | Metrics in AMW (Managed Prometheus, OTel metrics) |
| Condition logic | PromQL-based query | PromQL-based query |
| Aggregation & transformation | Full PromQL support | Full PromQL support |
| Scope | Workspace-wide | Resource-centric or workspace-wide |
| Alerting at scale | Not supported | Subscription level, resource-group level |
| Cross-resource conditions | Supported | Supported |
| RBAC granularity | Workspace level | Resource or workspace level |
| Managed identity support | Not supported | Supported |
| Notification customization | Supported - Prometheus labels and annotations | Advanced - dynamic custom properties, custom email subject |

Getting Started

If you have an Azure Monitor workspace containing Prometheus or OpenTelemetry metrics, you can create query-based metric alert rules today. Rules can be created and managed using the Azure portal, ARM templates, or the Azure REST API. For details, visit the Azure Monitor documentation.
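As an illustration of the cross-window conditions described above, a query-based rule's condition might use a PromQL expression like the following. The metric name and threshold are illustrative placeholders, not part of the release:

```promql
# Fire when the 5-minute request rate is more than 50% higher than the
# same 5-minute window one hour earlier (change in behavior over time).
(
  sum by (cluster) (rate(http_requests_total[5m]))
  /
  sum by (cluster) (rate(http_requests_total[5m] offset 1h))
) > 1.5
```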
Accelerating SCOM to Azure Monitor Migrations with Automated Analysis and ARM Template Generation

Azure Monitor has become the foundation for modern, cloud-scale monitoring on Azure. Built to handle massive volumes of telemetry across infrastructure, applications, and services, it provides a unified platform for metrics, logs, alerts, dashboards, and automation. As organizations continue to modernize their environments, Azure Monitor is increasingly the target state for enterprise monitoring strategies.

With Azure Monitor increasingly becoming the destination platform, many organizations face a familiar challenge: migrating from System Center Operations Manager (SCOM). While both platforms serve the same fundamental purpose, keeping your infrastructure healthy and alerting you to problems, the migration path isn't always straightforward. SCOM Management Packs contain years of accumulated monitoring logic: performance thresholds, event correlation rules, service discoveries, and custom scripts. Translating all of this into Azure Monitor's paradigm of Log Analytics queries, alert rules, and Data Collection Rules can be a significant undertaking.

To help with this challenge, members of the community have built and shared a tool that automates much of the analysis and artifact generation. The community-driven SCOM to Azure Monitor Migration Tool accepts Management Pack XML files and produces several outputs designed to accelerate migration planning and execution. The tool parses the Management Pack structure and identifies all monitors, rules, discoveries, and classes. Each component is analyzed for migration complexity: some translate directly to Azure Monitor equivalents, while others require custom implementation or may not have a direct equivalent.
Results are organized into two clear categories:

- Auto-Migrated Components: covered by the generated templates and ready for deployment
- Requires Manual Migration: components that need custom implementation or review

Instead of manually authoring Azure Resource Manager templates, the tool generates deployable infrastructure-as-code artifacts, including:

- Scheduled Query Alert rules mapped from SCOM monitors and rules
- Data Collection Rules for performance counters and Windows Events
- Custom Log DCRs for collecting script-generated log files
- Action Groups for notification routing
- Log Analytics workspace configuration (for new environments)

For streamlined deployment, the tool offers a combined ARM template that deploys all resources in a single operation:

- Log Analytics workspace (create new or connect to an existing workspace)
- Action Groups with email notification
- All alert rules
- Data Collection Rules
- Monitoring Workbook

One download, one deployment command, with configurable parameters for workspace settings, notification recipients, and custom log paths.

The tool generates an Azure Monitor Workbook dashboard tailored to the Management Pack, including:

- Performance counter trends over time
- Event monitoring by severity with drill-down tables
- Service health overview (stopped services)
- Active alerts summary from Azure Resource Graph

This provides immediate operational visibility once the monitoring configuration is deployed. Each migrated component includes the Kusto Query Language (KQL) equivalent of the original SCOM monitoring logic. These queries can be used as-is or refined to match environment-specific requirements.
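For instance, a SCOM Windows-service monitor might come out as a scheduled query alert along these lines. This is a hedged sketch rather than the tool's verbatim output; the service name is illustrative, though event ID 7036 is the standard Service Control Manager state-change event in the System log:

```kusto
// Alert when the Service Control Manager (event ID 7036) logs the
// Print Spooler service entering the stopped state.
Event
| where EventLog == "System" and EventID == 7036
| where RenderedDescription has "Print Spooler" and RenderedDescription has "stopped"
| summarize StoppedCount = count() by Computer, bin(TimeGenerated, 5m)
```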
The workflow is designed to reduce the manual effort involved in migration planning:

1. Export your Management Pack XML from SCOM
2. Upload it to the tool
3. Review the analysis: components are separated into auto-migrated and requires manual migration
4. Download the All-in-One ARM template (or individual templates)
5. Customize parameters such as workspace name and action group recipients
6. Deploy to your Azure subscription

For a typical Management Pack, such as Windows Server Active Directory monitoring, you may see 120+ components that can be migrated directly, with an additional 15–20 components requiring manual review due to complex script logic or SCOM-specific functionality.

The tool handles straightforward translations well:

- Performance threshold monitors become metric alerts or log-based alerts
- Windows Event collection rules become Data Collection Rule configurations
- Service monitors become scheduled query alerts against Heartbeat or Event tables

Components that typically require manual attention:

- Complex PowerShell or VBScript probe actions
- Monitors that depend on SCOM-specific data sources
- Correlation rules spanning multiple data sources
- Custom workflows with proprietary logic

The tool clearly identifies which category each component falls into, allowing teams to plan their migration effort with confidence.

A Note on Validation

This is a community tool, not an officially supported Microsoft product. Generated artifacts should always be reviewed and tested in a non-production environment before deployment. Every environment is different, and the tool makes reasonable assumptions that may require adjustment. Even so, starting with structured ARM templates and working KQL queries can significantly reduce time to deployment.

Try It Out

The tool is available at https://tinyurl.com/Scom2Azure. Upload a Management Pack, review the analysis, and see what your migration path looks like.

Announcing Application Insights SDK 3.x for .NET
Microsoft remains committed to making OpenTelemetry the foundation of modern observability on Azure. Today, we're excited to take the next step on that journey with a major release of the Application Insights SDK 3.x for .NET.

Migrate to OpenTelemetry with a Major Version Bump

With Application Insights SDK 3.x, developers can migrate to OpenTelemetry-based instrumentation with dramatically less effort. Until now, migrating from the classic Application Insights SDK to the Azure Monitor OpenTelemetry Distro required a clean install and code updates. With this release, most customers can adopt OpenTelemetry simply by upgrading their SDK version. The new SDK automatically routes your classic Application Insights Track* API calls through a new mapping layer that emits OpenTelemetry signals under the hood.

Why This Matters

By upgrading, you gain:

✔ Vendor-neutral OpenTelemetry APIs going forward. You can immediately begin writing new code using OpenTelemetry APIs, ensuring future portability and alignment with industry standards.

✔ Access to the full OpenTelemetry ecosystem. You can now easily plug in community instrumentation libraries and exporters. For example, collecting Redis cache dependency data, previously not supported with Application Insights 2.x, becomes straightforward.

✔ Multi-exporter support. Export to Azure Monitor and another system (e.g., a SIEM or backend of your choice) simultaneously if your scenario requires it.

What Still Requires Attention: Initializers and Processors

One area where automatic migration is not possible is telemetry processors and telemetry initializers. These Application Insights extensibility points were extremely flexible, allowing custom property injection, filtering, or deletion logic. OpenTelemetry supports similar behavior, but through more structured concepts such as span processors. See here for a full list of breaking changes.
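As a hedged sketch of what that migration can look like (class name and the "/health" filter are illustrative): where a 2.x ITelemetryProcessor might have dropped health-check requests, the OpenTelemetry .NET equivalent is a span processor that clears the Recorded flag so exporters skip the span:

```csharp
using System.Diagnostics;
using OpenTelemetry;

// Illustrative replacement for a filtering ITelemetryProcessor: a span
// processor that un-records health-check spans so exporters drop them.
public sealed class HealthCheckFilterProcessor : BaseProcessor<Activity>
{
    public override void OnEnd(Activity activity)
    {
        if (activity.DisplayName.Contains("/health"))
        {
            // Exporters only export activities with the Recorded flag set.
            activity.ActivityTraceFlags &= ~ActivityTraceFlags.Recorded;
        }
    }
}
```

The processor is then registered on the TracerProvider alongside the Azure Monitor exporter, replacing the old TelemetryConfiguration wiring.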
On a positive note, these OpenTelemetry components generally deliver better performance and clearer behavior. Our documentation assists with migration, and we plan to release an MCP server with guardrails to help LLMs generate accurate migration code.

Keeping the essence of Azure Monitor Application Insights

While OpenTelemetry encourages the use of the OpenTelemetry Collector, we remain committed to preserving the simplicity that customers love about Azure Monitor Application Insights. The Azure Monitor OpenTelemetry Distro is all that's required to get started. It's just a single NuGet package, and you configure it with a connection string. Telemetry flows in minutes. No Collector is required unless you explicitly want one. We achieve this with extensive built-in sampling to manage cost and a trace-preservation algorithm, so you see complete traces. This keeps the "just works" spirit of Azure Monitor Application Insights intact, while aligning with OpenTelemetry standards.

Feedback

If you encounter issues during the upgrade, please open a support ticket; we want the migration to be smooth. If you'd like to share feedback or engage directly with the product team, email us at otel@microsoft.com. This is not an official support channel, but we read every email and appreciate hearing feedback directly from you!