Ingest at Scale, Securely — Azure Monitor pipeline Is Now Generally Available
Today, we're thrilled to announce the general availability of Azure Monitor pipeline — a telemetry pipeline built for secure, high-scale ingestion across any environment. But the best way to understand what makes it powerful isn't to start with features. It's to start with the problems that kept showing up, over and over, in our conversations with customers. So, let's dig in...

Chances are, this sounds a lot like your environment

Imagine a large enterprise rolling out Microsoft Sentinel as their SIEM. They have sites across regions, a mix of on-premises and cloud environments, and security telemetry streaming in from firewalls, network devices, and Linux servers — 100,000 to 1 million events per second in some locations. Traditional forwarders buckle under the load, drop events during network blips, and ship everything – signal and noise – straight into Sentinel. The result: skyrocketing ingestion costs, degraded detections, and a brittle forwarding infrastructure that demands constant babysitting.

If you're managing environments like these, these questions are probably top of mind:
- How do I securely ingest telemetry without opening hundreds of risky endpoints?
- How do I reduce ingestion costs when telemetry spikes across thousands of sources simultaneously?
- How do I centrally standardize logs across sites and device types before they ever reach Azure?
- What happens to telemetry from an entire location when connectivity drops?
- And how do I do all of this consistently, at massive scale, and centrally across environments instead of configuring each host individually?

These aren't edge cases. For many teams, getting data into the system is the hardest part of observability — and by the time telemetry reaches Azure Monitor or Sentinel, it's already too late to fix these problems. Customers need control before the data hits the cloud.

What is Azure Monitor pipeline (and why it's different)?
Azure Monitor pipeline provides a centralized control point for telemetry ingestion and transformation, designed specifically for secure, high-throughput, enterprise-scale scenarios. It's built on open-source technologies from the OpenTelemetry ecosystem and includes the components needed to receive telemetry from local clients, process that telemetry, and forward it to Azure Monitor.

It's not another agent, and no, you do not need to install it on all your resources. Agents such as Azure Monitor agent are great for collecting telemetry from individual machines and services. Azure Monitor pipeline solves a different problem: "How do I ingest telemetry from across my environment through a centralized pipeline – instead of configuring each host – while maintaining control over reliability, security, and ingestion cost?"

With Azure Monitor pipeline, you can:
- Ensure logs land directly in Azure-native schemas – automatic schematization into tables such as Syslog and CommonSecurityLog
- Prevent data loss during intermittent connectivity across sites – local buffering in persistent storage with automated backfill
- Reduce ingestion costs before data reaches the cloud – centralized filtering, aggregation, and transformation
- Ingest telemetry at sustained volumes of hundreds of thousands of events per second – horizontally scalable pipeline architecture
- Secure telemetry ingestion without managing certificates on each host individually – centralized TLS/mTLS with automated certificate provisioning and zero-downtime rotation
- Maintain visibility into ingestion infrastructure health – pipeline performance and health monitoring
- Plan deployments confidently at large scale – infrastructure sizing guidance for expected telemetry volume

And all of this is fully supported and production-ready in GA. Learn more. So, let's talk about each of these in a little more detail!

Tired of broken detections because logs don't match your table schema?
Automatic schematization (a customer favorite!)

A consistent theme from preview customers was how painful it is to deal with log formats. Azure Monitor pipeline is the only solution that automatically shapes and schematizes data, so it lands directly in standard Azure tables such as Syslog and CommonSecurityLog. Learn more. That means:
- No custom parsing pipelines downstream
- No broken detections due to schema drift
- Faster time to value for security teams

This happens before data reaches the cloud – right where it matters most.

What happens to my telemetry when the network goes down? Local buffering in persistent storage and automated backfill

Networks fail. Maintenance happens. Sites go offline. Azure Monitor pipeline is built for this reality. It buffers telemetry locally in your configured persistent storage during network interruptions and automatically backfills data when connectivity is restored. Learn more. The result:
- No gaps in security visibility
- No manual replays
- Confidence that critical telemetry isn't lost

How do I reduce ingestion costs without sacrificing signal quality? Filter and aggregate at the edge

Nobody likes to pay for data they don't need. With Azure Monitor pipeline, customers can filter, aggregate, and shape telemetry at the edge, sending only high-value data to Azure. Learn more. This helps teams:
- Reduce ingestion costs
- Improve detection quality
- Keep cloud analytics focused on signal, not volume

Cost optimization and signal quality are no longer trade-offs – you get both.

How do I keep up when telemetry volumes spike to hundreds of thousands of events per second? Scaling

One of the biggest pain points we hear about is scale. Azure Monitor pipeline is designed for sustained high-throughput ingestion, scaling horizontally and vertically to handle hundreds of thousands to millions of events per second. Learn more.
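To make the filter-and-aggregate idea concrete, here is a small stdlib-Python sketch of the logic conceptually applied at the edge. In the real pipeline this is expressed as pipeline configuration, not application code, and the event shape below is invented purely for illustration.

```python
from collections import Counter

# Invented sample events; real pipeline input would be syslog/CEF records.
events = [
    {"severity": "debug", "host": "fw-01", "msg": "heartbeat"},
    {"severity": "warning", "host": "fw-01", "msg": "port scan detected"},
    {"severity": "debug", "host": "fw-02", "msg": "heartbeat"},
    {"severity": "error", "host": "fw-02", "msg": "rule update failed"},
]

# Filter at the edge: drop low-value noise before it is ever sent to the cloud.
high_value = [e for e in events if e["severity"] != "debug"]

# Aggregate at the edge: repetitive events become compact per-host counts.
per_host = Counter(e["host"] for e in high_value)

print(len(high_value), dict(per_host))  # 2 {'fw-01': 1, 'fw-02': 1}
```

The point is ordering: both steps happen before ingestion, so only the two high-value events (and their compact aggregate) would ever count against ingestion volume.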
This isn't about theoretical limits; it's about handling the real-world extremes that break traditional forwarders.

How do I send telemetry in a secure manner? Secure ingestion with TLS and mTLS

Security teams consistently tell us that plain TCP ingestion just isn't acceptable – especially in regulated environments. Azure Monitor pipeline addresses this head-on by providing TLS-secured ingestion endpoints with mutual authentication, ensuring telemetry is encrypted in transit and accepted only from trusted sources. Learn more. The result:
- Secure ingestion at the boundary: data in transit is encrypted using TLS, with automated certificate provisioning and zero-downtime rotation.
- Mutual authentication with mTLS: clients and Azure Monitor pipeline endpoints validate each other before ingestion, and it's easy to set up with our default experience.
- Bring your own certificates: if you have your own PKI and certificate management systems, feel free to use your own certificates to enable secure ingestion.

If the pipeline is this critical — how do I know it's healthy?

One thing we heard loud and clear during preview: "If this pipeline is critical, I need to see how it's doing." Azure Monitor pipeline now exposes health and performance signals, so it's no longer a black box. Learn more. Customers can answer questions like:
- Is my pipeline receiving, processing, and sending telemetry?
- What's the CPU and memory usage of each pipeline instance?
- Why is a pipeline unhealthy, or down?

Observability for observability felt like the right bar to meet.

How do I plan infrastructure without over- or under-provisioning?

Planning pipeline infrastructure shouldn't be a guessing game – and we heard this loud and clear during preview. GA includes clear sizing guidance to help you plan the right infrastructure based on your expected telemetry volume and workload characteristics.
Not rigid formulas, but practical starting points that give you a confident baseline so you can design intentionally, deploy faster, and avoid costly over- or under-provisioning. Learn more.

Alright, these are a bunch of exciting features. How much do I need to pay for them? Azure Monitor pipeline is included at no additional cost for ingesting telemetry into Azure Monitor and Microsoft Sentinel.

With general availability, Azure Monitor pipeline is production-ready so you can run the most demanding ingestion scenarios with confidence. If you're already using it in preview, welcome to GA. If you're just getting started, there's never been a better time to dive in. As always, your feedback is what drives this forward. Drop a comment below, reach out directly, or share what you're building. We'd love to hear from you.

Troubleshoot with OpenTelemetry in Azure Monitor - Public Preview
OpenTelemetry is fast becoming the industry standard for modern telemetry collection and ingestion pipelines. With Azure Monitor's new OpenTelemetry Protocol (OTLP) support, you can ship logs, metrics, and traces from wherever you run workloads to analyze and act on your observability data in one place.

What's in the preview
- Direct OTLP ingestion into Azure Monitor for logs, metrics, and traces.
- Automated onboarding for AKS workloads.
- Application Insights on OTLP for distributed tracing, performance, and troubleshooting experiences.
- Pre-built Grafana dashboards to visualize signals quickly.
- Prometheus for metric storage and query.
- OpenTelemetry semantic conventions for logs and traces, so your data lands in a familiar standards-based schema.

How to send OTLP to Azure Monitor: pick your path
- AKS: Auto-instrument Java and Node.js workloads using the Azure Monitor OpenTelemetry distro, or auto-configure any OpenTelemetry SDK-instrumented workload to export OTLP to Azure Monitor. Get started
- AKS (limited preview): Auto-instrumentation for .NET and Python is also available. Get started
- VMs/VM Scale Sets (and Azure Arc-enabled compute): Use the Azure Monitor Agent (AMA) to receive OTLP from your apps and export it to Azure Monitor. Get started
- Any environment: Use the OpenTelemetry Collector to receive OTLP signals and export directly to Azure Monitor cloud ingestion endpoints. Get started

Under the hood: where your telemetry lands
- Metrics: Stored in an Azure Monitor Workspace, a Prometheus metrics store.
- Logs + traces: Stored in a Log Analytics workspace using an OpenTelemetry semantic conventions–based schema.
- Troubleshooting: Application Insights lights up distributed tracing and end-to-end performance investigations, backed by Azure Monitor.

Why it matters
- Standardize once: Instrument with OpenTelemetry and keep your telemetry portable.
- Reduce overhead: Fewer bespoke exporters and pipelines to maintain.
- Debug faster: Correlate metrics, logs, and traces to get from alert to root cause with less guesswork.
- Observe with confidence: Use dashboards and tracing views that are ready on day one.

Next step: Try the OTLP preview in your environment, then validate end-to-end signal flow with Application Insights and Grafana dashboards. Learn More

How do I send Azure VM host (platform) metrics (not guest metrics) to a Log Analytics workspace?
Hi Team, can someone help me with how to send Azure VM host (platform) metrics (not guest metrics) to a Log Analytics workspace? Some years ago I used to do this by clicking on "Diagnostic Settings", but now if I go to the "Diagnostic Settings" tab it asks me to enable guest-level monitoring (guest-level metrics are what I don't want) and points me to a Storage Account. I don't see the option to send these metrics to a Log Analytics workspace. I have around 500 Azure VMs whose host (platform) metrics (not guest metrics) I want to send to the Log Analytics workspace.

Introducing Azure Managed Grafana MCP: The Managed Telemetry Gateway for AI Agents
AI agents are rapidly becoming a core part of how teams build, operate, and improve cloud systems, from coding assistants to autonomous remediation workflows. To deliver on that promise in the enterprise, agents need a secure, governed way to access real production telemetry. Azure Managed Grafana MCP lets AI agents securely query the same production telemetry you already connect to Azure Managed Grafana, like Azure Monitor metrics and logs, Application Insights, and Kusto, using your existing Azure RBAC and managed identities.

How do you securely connect AI agents to real production telemetry, without standing up yet another piece of infrastructure? Today, enabling an agent to query systems like Azure Monitor, Application Insights, or Kusto often requires deploying and operating a self-hosted MCP server, wiring up identity and networking, and maintaining additional runtime infrastructure. That friction slows adoption and expands the security surface area.

Azure Managed Grafana MCP removes that entire layer. With this release, every Azure Managed Grafana instance now includes a fully managed, remote MCP server that is ready by default.

What is Azure Managed Grafana MCP?

Azure Managed Grafana MCP is a built-in, managed MCP endpoint that allows AI agents to securely query enterprise telemetry and operational data through Azure Managed Grafana. Instead of deploying your own MCP server, customers can simply:
- Point their agent to the Azure Managed Grafana MCP endpoint
- Grant the agent a managed identity
- Start querying production data immediately

No containers. No extra infrastructure. No duplicated auth systems.
Azure Managed Grafana MCP is very easy to configure with your existing AMG instance

Because most Azure Managed Grafana customers already connect data sources like Azure Monitor metrics, logs, Kusto, and Application Insights to Azure Managed Grafana, the MCP server can expose that telemetry to AI agents instantly, using the same RBAC and access controls teams already trust.

Why we built this

As we've talked with customers experimenting with Foundry and coding agents, a consistent theme has emerged: agents are only as useful as the data they can reason over. Requiring teams to stand up and operate a separate MCP layer introduces real cost:
- Additional infrastructure to deploy and maintain
- Custom identity and token handling
- Expanded attack surface
- Slower experimentation and adoption

Azure Managed Grafana MCP takes a different approach. Rather than asking customers to build new infrastructure for agents, we leverage infrastructure they already run and trust: Azure Managed Grafana. This shifts Grafana from being just a visualization layer to something more strategic:
- A secure telemetry access plane
- An analytical engine for agent reasoning
- A bridge between operational data and autonomous action

Core value propositions

Zero infrastructure overhead

Azure Managed Grafana MCP is fully managed and enabled by default:
- No self-hosted MCP servers
- No additional networking configuration

Agents connect directly to Azure Managed Grafana and start querying data.

Secure by design

Security is not bolted on, it's inherited:
- Uses existing Azure RBAC
- Supports managed identities
- Respects current Azure Managed Grafana access controls

There's no need to duplicate authentication or authorization logic, and the security posture remains consistent with existing observability access patterns.
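As a rough illustration of what "point your agent at the endpoint" looks like on the wire: MCP is based on JSON-RPC 2.0, so an agent discovering a server's capabilities sends a small request like the one sketched below. The endpoint URL, path, and token values here are placeholders/assumptions, not the product's documented endpoint; see the linked Microsoft Learn doc for the actual connection details.

```python
import json

# Placeholders (assumptions): the real MCP endpoint for your Grafana instance
# and the Entra access token (e.g. acquired via a managed identity) are
# environment-specific.
MCP_ENDPOINT = "https://<your-instance>.grafana.azure.com/mcp"
ACCESS_TOKEN = "<entra-access-token>"

# MCP speaks JSON-RPC 2.0, so listing the server's available tools is a
# small request like this one, sent with a bearer token for auth.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}
body = json.dumps(request)
print(body)
```

An agent framework would POST this body to the endpoint and then issue `tools/call` requests against the tools the server advertises; in practice your MCP client library handles this handshake for you.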
Immediate enterprise scenarios

By exposing production telemetry through MCP, teams can unlock high-value agent workflows immediately:
- Root cause analysis using Application Insights
- Automated operational summaries
- Real-time diagnostics
- Cross-resource telemetry correlation
- Structured data access via Kusto

Chatting with an agent using Azure Managed Grafana MCP

These are scenarios customers already run manually today, and this MCP server makes them accessible to agents.

Closing the loop: from insight to action

One of the most powerful aspects of Azure Managed Grafana MCP is what happens when agents have access to both code context and live telemetry. For example, an agent:
- Queries Application Insights for production errors
- Identifies recurring exception patterns
- Locates the source code emitting those errors
- Generates a fix and submits a pull request

This closes the loop between observability and remediation, something that's been largely manual until now.

Designing for agents, not just dashboards

Humans and agents consume data very differently.

Humans:
- Navigate dashboards sequentially
- Are limited by cognitive bandwidth
- Correlate issues manually

Agents:
- Process large datasets in parallel
- Perform iterative drill-downs without fatigue
- Detect statistically significant patterns quickly

Azure Managed Grafana MCP is designed with this in mind. Instead of only exposing raw data, it enables agent-optimized tools, like aggregated failure views across dozens of Application Insights instances, so agents can reason efficiently at scale. To make it easier for our customers, it is now available as a native tool within Microsoft Foundry, so you can easily connect it to your Foundry Agents.
Azure Managed Grafana MCP as a native Foundry tool

Looking ahead

Azure Managed Grafana MCP is the foundation for a broader vision:
- Observability-driven autonomous agents
- Secure enterprise telemetry reasoning
- AI systems that detect, diagnose, and act

Over time, this transforms Azure Managed Grafana from dashboard software into a strategic AI integration layer for Azure. This isn't just a visualization feature. It's an infrastructure shift.

Check out the doc for more information: Configure an Azure Managed Grafana remote MCP server | Microsoft Learn

February 2026 Recap: Azure Database for PostgreSQL
Hello Azure Community,

We're excited to share the February 2026 recap for Azure Database for PostgreSQL, featuring a set of updates focused on speed, simplicity, and better visibility. From Terraform support for Elastic Clusters and a refreshed VM SKU selection experience in the Azure portal to built-in Grafana dashboards, these improvements make it easier to build, operate, and scale PostgreSQL on Azure. This recap also includes practical GIN index tuning guidance, enhancements to the PostgreSQL VS Code extension, and improved connectivity for azure_pg_admin users.

Features
- Terraform support for Elastic Clusters - Generally Available
- Dashboards with Grafana - Generally Available
- Easier way to choose VM SKUs on portal - Generally Available
- What's New in the PostgreSQL VS Code Extension
- Priority Connectivity for azure_pg_admin users
- Guide on 'gin_pending_list_limit' indexes

Terraform support for Elastic Clusters

Terraform now supports provisioning and managing Azure Database for PostgreSQL Elastic Clusters, enabling customers to define and operate elastic clusters using infrastructure-as-code workflows. With this support, it is now easier to create, scale, and manage multi-node PostgreSQL clusters through Terraform, making it easier to automate deployments, replicate environments, and integrate elastic clusters into CI/CD pipelines. This improves operational consistency and simplifies management for horizontally scalable PostgreSQL workloads. Learn more about building and scaling with Azure Database for PostgreSQL elastic clusters.

Dashboards with Grafana — Now Built-In

Grafana dashboards are now natively integrated into the Azure portal for Azure Database for PostgreSQL. This removes the need to deploy or manage a separate Grafana instance. With just a few clicks, you can visualize key metrics and logs side by side, correlate events by timestamp, and gain deep insights into performance, availability, and query behavior, all in one place.
Whether you're troubleshooting a spike, monitoring trends, or sharing insights with your team, this built-in experience simplifies day-to-day observability with no added cost or complexity. Try it under Azure Portal > Dashboards with Grafana in your PostgreSQL server view. For more details, see the blog post: Dashboards with Grafana — Now in Azure Portal for PostgreSQL.

Easier way to choose VM SKUs on portal

We've improved the VM SKU selection experience in the Azure portal to make it easier to find and compare the right compute options for your PostgreSQL workload. The updated experience organizes SKUs in a clearer, more scannable view, helping you quickly compare key attributes like vCores and memory without extra clicks. This streamlined approach reduces guesswork and makes selecting the right SKU faster and more intuitive.

What's New in the PostgreSQL VS Code Extension

The VS Code extension for PostgreSQL helps developers and database administrators work with PostgreSQL directly from VS Code. It provides capabilities for querying, schema exploration, diagnostics, and Azure PostgreSQL management, allowing users to stay within their editor while building and troubleshooting. This release focuses on improving developer productivity and diagnostics. It introduces new visualization capabilities, Copilot-powered experiences, enhanced schema navigation, and deeper Azure PostgreSQL management directly from VS Code.

New Features & Enhancements
- Query Plan Visualization: Graphical execution plans can now be viewed directly in the editor, making it easier to diagnose slow queries without leaving VS Code.
- AGE Graph Rendering: Support is now available for automatically rendering graph visualizations from Cypher queries, improving the experience of working with graph data in PostgreSQL.
- Object Explorer Search: A new graphical search experience in Object Explorer allows users to quickly find tables, views, functions, and other objects across large schemas, addressing one of the highest-rated user feedback requests.
- Azure PostgreSQL Backup Management: Users can now manage Azure Database for PostgreSQL backups directly from the Server Dashboard, including listing backups and configuring retention policies.
- Server Logs Dashboard: A new Server Dashboard view surfaces Azure Database for PostgreSQL server logs and retention settings for faster diagnostics. Logs can be opened directly in VS Code and analyzed using the built-in GitHub Copilot integration.

This release also includes several reliability improvements and bug fixes, including resolving connection pool exhaustion issues, fixing Docker container creation failures when no password is provided, and improving stability around connection profiles and schema-related operations.

Priority Connectivity for azure_pg_admin Users

Members of the azure_pg_admin role can now use connections from the pg_use_reserved_connections pool. This ensures that an admin always has at least one available connection, even if all standard client connections from the server connection pool are in use. By making sure admin users can log in when the client connection pool is full, this change prevents lockout situations and lets admins handle emergencies without competing for available open connection slots.

Guide on 'gin_pending_list_limit' indexes

Struggling with slow GIN index inserts in PostgreSQL? This post dives into the often-overlooked gin_pending_list_limit parameter and how it directly impacts insert performance. Learn how GIN's pending list works, why the right limit matters, and get practical guidance on tuning it to strike the right balance between write performance and index maintenance overhead. For a deeper dive into gin_pending_list_limit and tuning guidance, see the full blog here.
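To ground that guidance, here is a short SQL sketch of the knobs involved. The index name `my_gin_idx` and the 8 MB value are hypothetical; the right setting depends on your workload, so benchmark before committing to a limit.

```sql
-- Inspect the current pending-list limit (server-wide default, in kB).
SHOW gin_pending_list_limit;

-- Raise the limit for one write-heavy GIN index so inserts batch more work
-- before merging into the main index structure (trade-off: bigger, slower
-- merges, and a larger unmerged area that index scans must also search).
ALTER INDEX my_gin_idx SET (gin_pending_list_limit = 8192);  -- 8 MB

-- Or disable the pending list entirely for an index whose reads must never
-- pay the pending-list penalty (every insert then updates the index directly).
ALTER INDEX my_gin_idx SET (fastupdate = off);
```

A larger pending list favors insert throughput; a smaller one (or `fastupdate = off`) favors read latency and predictable index size.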
Learning Bytes

Create Azure Database for PostgreSQL elastic clusters with Terraform

Elastic clusters in Azure Database for PostgreSQL let you scale PostgreSQL horizontally using a managed, multi-node architecture. With Elastic Clusters now generally available, you can provision and manage elastic clusters using infrastructure-as-code, making it easier to automate deployments, standardize environments, and integrate PostgreSQL into CI/CD workflows.

Elastic clusters are a good fit when you need:
- Horizontal scale for large or fast-growing PostgreSQL workloads
- Multi-tenant applications or sharded data models
- Repeatable and automated deployments across environments

The following example shows a basic Terraform configuration to create an Azure Database for PostgreSQL flexible server configured as an elastic cluster.

```terraform
resource "azurerm_postgresql_flexible_server" "elastic_cluster" {
  name                   = "pg-elastic-cluster"
  resource_group_name    = <rg-name>
  location               = <region>
  administrator_login    = var.admin_username
  administrator_password = var.admin_password
  version                = "17"
  sku_name               = "GP_Standard_D4ds_v5"
  storage_mb             = 131072

  cluster {
    size = 3
  }
}
```

Conclusion

That's a wrap for the February 2026 Azure Database for PostgreSQL recap. We're continuing to focus on making PostgreSQL on Azure easier to build, operate, and scale, whether that's through better automation with Terraform, improved observability, or a smoother day-to-day developer and admin experience. Your feedback is important to us. Have suggestions, ideas, or questions? We'd love to hear from you: https://aka.ms/pgfeedback.

Introducing Azure Managed Grafana 12
In this release, Azure Managed Grafana makes it easier to tighten access with current-user Entra authentication, speed up Azure Monitor logs exploration, and level up Prometheus and database monitoring experiences.

What's new in Azure Managed Grafana 12
- Use current-user Entra authentication for supported Azure data sources to query with the signed-in user's permissions.
- Analyze Azure Monitor logs faster with a new query builder and improved visualization and Explore experiences.
- Explore Prometheus metrics with improved drill-down, prefix and suffix filters, and group-by label support, plus OpenTelemetry and native histogram support.
- Use updated, pre-built database monitoring dashboards for Azure PostgreSQL, Azure SQL, and SQL Managed Instance (SQL MI).

Advanced authentication: query with the current user's Entra credentials

Current-user Entra authentication is now available in Azure data sources. That means Grafana admins can configure supported data sources to re-use the logged-in user's credentials when issuing queries. In practice, the signed-in user's permissions define what data stores they can access, helping teams apply least-privilege access to each user while keeping the option to use Managed Identities and Service Principals in other data sources where that fits best. Supported data sources include:
- Azure Monitor
- Azure Data Explorer
- Azure Monitor Managed Service for Prometheus

Faster log analysis: click-to-build queries and smoother Explore

If you live in Azure Monitor logs, this update is for you. Improvements to log visualization in the Logs visualization panel and Grafana Explore make it easier to filter and extract meaningful insights from Azure Monitor logs. There's also a new Azure Monitor logs query builder, so you can create and refine queries with a few clicks instead of writing Kusto Query Language (KQL) by hand. Performance is significantly faster too.
Grafana Explore can now query and render up to 30K log records at a time, so you get much faster load times, faster searches, and more responsive navigation through large log volumes.

Prometheus query enhancements: drill down without the query gymnastics

Users new to Prometheus get a smoother path to explore metrics and analyze time series. Metrics drill-down now includes sidebar filters for prefix/suffix so you can quickly narrow metrics by naming conventions, and group-by label support to build more context-rich groupings. This is true queryless exploration of Azure Managed Prometheus metrics when you're troubleshooting or just identifying what's been collected. This release also adds OpenTelemetry and native histogram support, including an OTel mode to automate label-join complexities when querying OTLP metrics.

New database monitoring dashboards

Azure Managed Grafana now includes new versions of pre-built dashboards for monitoring Azure Database for PostgreSQL and Azure SQL Databases (Preview). For teams building on Azure-native databases, these updated dashboards can help you get to a useful baseline faster, so you spend less time wiring panels and more time acting on what the data is telling you.

Getting started

To try Grafana 12, you can create a new Azure Managed Grafana instance with Grafana 12 selected, or upgrade an existing instance from the Azure portal. From there, consider enabling current-user Entra authentication for supported Azure data sources, test the new Azure Monitor logs query builder in Explore for day-to-day investigations, and take the updated database dashboards for a spin if you run Azure PostgreSQL, Azure SQL, or SQL MI. Check out the doc for more information: Upgrade Azure Managed Grafana to Grafana 12 - Azure Managed Grafana.

Announcing new public preview capabilities in Azure Monitor pipeline
Azure Monitor pipeline, similar to an ETL (Extract, Transform, Load) process, enhances traditional data collection methods. It streamlines data collection from various sources through a unified ingestion pipeline and utilizes a standardized configuration approach that is more efficient and scalable.

As Azure Monitor pipeline is used in more complex and security-sensitive environments — including on-premises infrastructure, edge locations, and large Kubernetes clusters — certain patterns and challenges show up consistently. Based on what we've been seeing across these deployments, we're sharing a few new capabilities now available in public preview. These updates focus on three areas that tend to matter most at scale: secure ingestion, control over where pipeline instances run, and processing data before it lands in Azure Monitor. Here's what's new, and why it matters:
- Secure ingestion with TLS and mutual TLS (mTLS)
- Pod placement controls for Azure Monitor pipeline
- Transformations and automated schema standardization

Secure ingestion with TLS and mutual TLS (mTLS)

Why is this needed? As telemetry ingestion moves beyond Azure and closer to the edge, security expectations increase. In many environments, plain TCP ingestion is no longer sufficient. Teams often need:
- Encrypted ingestion paths by default
- Strong guarantees around who is allowed to send data
- A way to integrate with existing PKI and certificate management systems

In regulated or security-sensitive setups, secure authentication at the ingestion boundary is a baseline requirement — not an optional add-on.

What does this feature do? Azure Monitor pipeline now supports TLS and mutual TLS (mTLS) for TCP-based ingestion endpoints in public preview.
With this support, you can:
- Encrypt data in transit using TLS
- Enable mutual authentication with mTLS, so both the client and the pipeline endpoint validate each other
- Use your own certificates
- Enforce security requirements at ingestion time, before data is accepted

This makes it easier to securely ingest data from network devices, appliances, and on-prem workloads without relying on external proxies or custom gateways. Learn more.

Pod placement controls for Azure Monitor pipeline

Why is it needed? As Azure Monitor pipeline scales in Kubernetes environments, default scheduling behavior often isn't sufficient. In many deployments, teams need more control to:
- Isolate telemetry workloads in multi-tenant clusters
- Run pipelines on high-capacity nodes for resource-intensive processing
- Prevent port exhaustion by limiting instances per node
- Enforce data residency or security zone requirements
- Distribute instances across availability zones for better resiliency and resource use

Without explicit placement controls, pipeline instances can end up running in sub-optimal locations, leading to performance and operational issues.

What does this feature do? With the new executionPlacement configuration (public preview), Azure Monitor pipeline gives you direct control over how pipeline instances are scheduled. Using this feature, you can:
- Target specific nodes using labels (for example, by team, zone, or node capability)
- Control how instances are distributed across nodes
- Enforce strict isolation by allowing only one instance per node
- Apply placement rules per pipeline group, without impacting other workloads

These rules are validated and enforced at deployment time. If the cluster can't satisfy the placement requirements, the pipeline won't deploy — making failures clear and predictable. This gives you better control over performance, isolation, and cluster utilization as you scale. Learn more.
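To illustrate the client side of the TLS/mTLS ingestion described above, here is a minimal Python sketch using only the standard library. The certificate file names are placeholders (and therefore left commented out), standing in for certificates issued by your own PKI or by the pipeline's automated provisioning.

```python
import ssl

# Client-side TLS context for sending telemetry to a TLS-secured ingestion
# endpoint. PROTOCOL_TLS_CLIENT enables hostname checking and server
# certificate verification by default.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Trust anchor used to validate the pipeline endpoint's server certificate.
# context.load_verify_locations(cafile="pipeline-ca.pem")  # placeholder path

# For mutual TLS, the client also presents its own certificate, which the
# pipeline endpoint validates before accepting any events.
# context.load_cert_chain(certfile="client.pem", keyfile="client-key.pem")  # placeholders

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

Wrapping a TCP socket with this context (`context.wrap_socket(sock, server_hostname=...)`) then yields an encrypted channel that an mTLS-enforcing endpoint will accept only if the presented client certificate checks out.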
Transformations and Automated Schema Standardization

Why is this needed?

Telemetry data is often high-volume, noisy, and inconsistent across sources. In many deployments, ingesting everything as-is and cleaning it up later isn't practical or cost-effective. There's a growing need to:

- Filter or reduce data before ingestion
- Normalize formats across different sources
- Route data directly into standard tables without additional processing

What does this feature do?

Azure Monitor pipeline data transformations, already in public preview, let you process data before it's ingested. With transformations, you can:

- Filter, aggregate, or reshape incoming data
- Convert raw syslog or CEF messages into standardized schemas
- Choose sample KQL templates to perform transformations instead of manually writing KQL queries
- Route data directly into built-in Azure tables
- Reduce ingestion volume while keeping the data that matters

Check out the recent blog about the transformations preview, or learn more here.

Getting started

All of these capabilities are available today in public preview as part of Azure Monitor pipeline. If you're already using the pipeline, you can start experimenting with secure ingestion, pod placement, and transformations right away. As always, feedback is welcome as we continue to refine these features on the path to general availability.

Public Preview: Azure Monitor pipeline transformations
Overview

The Azure Monitor pipeline extends the data collection capabilities of Azure Monitor to edge and multi-cloud environments. It enables at-scale data collection (over 100k events per second) and routing of telemetry data before it's sent to the cloud. In cases of intermittent connectivity, the pipeline can cache data locally and sync with the cloud when connectivity is restored. Learn more here: Configure Azure Monitor pipeline - Azure Monitor | Microsoft Learn

Why transformations matter

- Lower costs: Filter and aggregate before ingestion to reduce ingestion volume and, in turn, lower ingestion costs.
- Better analytics: Standardized schemas mean faster queries and cleaner dashboards.
- Future-proof: Built-in schema validation prevents surprises during deployment.

Azure Monitor pipeline solves the challenges of high ingestion costs and complex analytics by enabling transformations before ingestion, so your data is clean, structured, and optimized before it even hits your Log Analytics workspace. Check out a quick demo here.

Key features in public preview

1. Schema change detection

One of the most exciting additions is schema validation for syslog and CEF:

- Integrated into the "Check KQL Syntax" button in the Strato UI.
- Detects if your transformation introduces schema changes that break compatibility with standard tables.
- Provides actionable guidance:
  - Option 1: Remove schema-changing transformations like aggregations.
  - Option 2: Send data to a custom table that supports custom schemas.

This ensures your pipeline remains robust and compliant with analytics requirements. For example, in the picture below, extending to new columns that don't match the schema of the Syslog table throws an error during validation and asks the user to send to a custom table or remove the transformations.
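A transformation along the following lines would trigger that validation error, because it extends the standard Syslog schema with a column the table doesn't have. The column name and parsing logic are made up for illustration:

```kusto
source
| where SeverityLevel in ("err", "crit", "alert", "emerg")
// Adding a column that doesn't exist in the standard Syslog table
// changes the schema, so validation requires a custom table instead.
| extend FirewallAction = extract(@"action=(\w+)", 1, SyslogMessage)
```

Removing the `extend` line (or routing the output to a custom table) resolves the validation error.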
While in the case of the example below, filtering does not modify the schema of the data, so no validation error is thrown, and the user can send it to a standard table directly.

2. Pre-built KQL templates

- Apply ready-to-use templates for common transformations.
- Save time and minimize errors when writing queries.

3. Automatic schema standardization for syslog and CEF

Automatically schematize syslog and CEF data to fit standard tables, without the user having to add transformations that convert the raw data to syslog/CEF.

4. Advanced filtering

Drop unwanted events based on attributes like:

- Syslog: Facility, ProcessName, SeverityLevel
- CEF: DeviceVendor, DestinationPort

Reduce noise and optimize ingestion costs.

5. Aggregation for high-volume logs

- Group events by key fields (e.g., DestinationIP, DeviceVendor) into 1-minute intervals.
- Summarize high-frequency logs for actionable insights.

6. Drop unnecessary fields

Remove redundant columns to streamline data and reduce storage overhead.

Supported KQL functions

1. Aggregation: summarize (by), sum, max, min, avg, count, bin
2. Filtering: where, contains, has, in, and, or, equality (==, !=), comparison (>, >=, <, <=)
3. Schematization: extend, project, project-away, project-rename, project-keep, iif, case, coalesce, parse_json
4. Variables for expressions or functions: let
5. Other functions:
   - String: strlen, replace_string, substring, strcat, strcat_delim, extract
   - Conversion: tostring, toint, tobool, tofloat, tolong, toreal, todouble, todatetime, totimespan

Get started today

Head to the Azure portal and explore the new Azure Monitor pipeline transformations UI. Apply templates, validate your KQL, and experience the power of Azure Monitor pipeline transformations. Find more information in the public docs here: Configure Azure Monitor pipeline transformations - Azure Monitor | Microsoft Learn
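As a closing illustration of the filtering and aggregation features described above, a transformation could drop expected traffic and roll the remaining events into 1-minute counts. The field names follow the CEF examples in this post; the vendor value and the exact shape of any pre-built template are assumptions:

```kusto
source
| where DeviceVendor == "Contoso" and DestinationPort != 443  // drop expected HTTPS noise
| summarize EventCount = count()
    by DeviceVendor, DestinationIP, bin(TimeGenerated, 1m)    // 1-minute intervals
```

Every operator used here (where, summarize, count, bin, and the comparison operators) is on the supported-functions list above, and because `summarize` changes the schema, output like this would be routed to a custom table.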