Azure Observability Blog

Announcing new public preview capabilities in Azure Monitor pipeline

XemaPathak
Microsoft
Feb 25, 2026

Secure ingestion, pod placement, and data transformations

Azure Monitor pipeline, similar to an ETL (Extract, Transform, Load) process, enhances traditional data collection methods. It streamlines data collection from various sources through a unified ingestion pipeline and uses a standardized configuration approach that is more efficient and scalable.

As Azure Monitor pipeline is used in more complex and security‑sensitive environments — including on‑premises infrastructure, edge locations, and large Kubernetes clusters — certain patterns and challenges show up consistently.

Based on what we’ve been seeing across these deployments, we’re sharing a few new capabilities now available in public preview. These updates focus on three areas that tend to matter most at scale: secure ingestion, control over where pipeline instances run, and processing data before it lands in Azure Monitor.

Here’s what’s new — and why it matters.

 

Secure ingestion with TLS and mutual TLS (mTLS)

Why is this needed?

As telemetry ingestion moves beyond Azure and closer to the edge, security expectations increase. In many environments, plain TCP ingestion is no longer sufficient.

Teams often need:

  • Encrypted ingestion paths by default
  • Strong guarantees around who is allowed to send data
  • A way to integrate with existing PKI and certificate management systems

In regulated or security‑sensitive setups, secure authentication at the ingestion boundary is a baseline requirement — not an optional add‑on.

What does this feature do?

Azure Monitor pipeline now supports TLS and mutual TLS (mTLS) for TCP‑based ingestion endpoints in public preview.

With this support, you can:

  • Encrypt data in transit using TLS
  • Enable mutual authentication with mTLS, so both the client and the pipeline endpoint validate each other
  • Use your own certificates 
  • Enforce security requirements at ingestion time, before data is accepted

This makes it easier to securely ingest data from network devices, appliances, and on‑prem workloads without relying on external proxies or custom gateways. Learn more.
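From the sending side, enabling mTLS mostly means presenting a client certificate during the TLS handshake. The sketch below, using only Python's standard `ssl` and `socket` modules, shows one way a custom sender could do that; the host, port, and certificate paths are placeholders you would replace with your own pipeline endpoint and PKI-issued certificates, and the exact endpoint details come from your pipeline configuration, not from this post.

```python
import socket
import ssl

def build_mtls_context(ca_file=None, client_cert=None, client_key=None):
    """Create a TLS context that validates the server and, when a client
    certificate is supplied, lets the server authenticate us in return."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    context.verify_mode = ssl.CERT_REQUIRED  # always verify the server's certificate
    if client_cert and client_key:
        # Presenting a client certificate is what makes the session *mutual* TLS.
        context.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return context

def send_syslog_over_tls(message, host, port, context):
    """Open a TLS-wrapped TCP connection and send one syslog line."""
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(message.encode("utf-8") + b"\n")

# Example wiring (placeholder endpoint and certificate paths):
# ctx = build_mtls_context("ca.pem", "client-cert.pem", "client-key.pem")
# send_syslog_over_tls("<14>myhost app: hello", "pipeline.example.com", 6514, ctx)
```

Without the client certificate arguments, the same context still gives you plain TLS with server verification, so the two modes share one code path.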


Pod placement controls for Azure Monitor pipeline

Why is this needed?

As Azure Monitor pipeline scales in Kubernetes environments, default scheduling behavior often isn’t sufficient.

In many deployments, teams need more control to:

  • Isolate telemetry workloads in multi‑tenant clusters
  • Run pipelines on high‑capacity nodes for resource‑intensive processing
  • Prevent port exhaustion by limiting instances per node
  • Enforce data residency or security zone requirements
  • Distribute instances across availability zones for better resiliency and resource use

Without explicit placement controls, pipeline instances can end up running in sub‑optimal locations, leading to performance and operational issues.

What does this feature do?

With the new executionPlacement configuration (public preview), Azure Monitor pipeline gives you direct control over how pipeline instances are scheduled.

Using this feature, you can:

  • Target specific nodes using labels (for example, by team, zone, or node capability)
  • Control how instances are distributed across nodes
  • Enforce strict isolation by allowing only one instance per node
  • Apply placement rules per pipeline group, without impacting other workloads

These rules are validated and enforced at deployment time. If the cluster can’t satisfy the placement requirements, the pipeline won’t deploy — making failures clear and predictable.

This gives you better control over performance, isolation, and cluster utilization as you scale. Learn more.
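To make the idea concrete, here is a sketch of what a placement rule could look like. Only the `executionPlacement` name comes from this announcement; the field names and label keys below are illustrative, modeled on standard Kubernetes label selectors, so check the pipeline documentation for the exact schema.

```yaml
# Hypothetical sketch -- field names and labels are illustrative, not the
# documented schema. It expresses: run only on labeled high-capacity nodes
# in one zone, with at most one instance per node.
executionPlacement:
  nodeSelector:
    telemetry-tier: high-capacity          # target nodes labeled for telemetry work
    topology.kubernetes.io/zone: eastus-1  # keep instances inside one zone
  maxInstancesPerNode: 1                   # strict isolation: one instance per node
```

Because rules like this are validated at deployment time, a cluster with no matching nodes would fail the deployment up front rather than scheduling instances somewhere unintended.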

Transformations and Automated Schema Standardization 

Why is this needed?

Telemetry data is often high‑volume, noisy, and inconsistent across sources. In many deployments, ingesting everything as‑is and cleaning it up later isn’t practical or cost‑effective.

There’s a growing need to:

  • Filter or reduce data before ingestion
  • Normalize formats across different sources
  • Route data directly into standard tables without additional processing

What does this feature do?

Azure Monitor pipeline data transformations, already in public preview, let you process data before it’s ingested.

With transformations, you can:

  • Filter, aggregate, or reshape incoming data
  • Convert raw syslog or CEF messages into standardized schemas
  • Start from sample KQL templates instead of writing transformation queries from scratch
  • Route data directly into built‑in Azure tables
  • Reduce ingestion volume while keeping the data that matters
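Transformations in Azure Monitor are written in KQL, with the incoming stream exposed as a virtual table named `source`. The sketch below shows the shape of a simple filter-and-project transformation; the column names follow the built-in Syslog table, but treat the query as an illustrative example rather than a drop-in template for your sources.

```kusto
// Illustrative transformation sketch: drop low-value records and keep a
// trimmed projection before ingestion. "source" is the incoming stream;
// column names follow the built-in Syslog table and may differ per source.
source
| where SeverityLevel != "debug"    // filter debug-level noise at the edge
| project TimeGenerated, Computer, SeverityLevel, SyslogMessage
```

Because the query runs before ingestion, everything it filters out never lands in the workspace, which is how transformations reduce both noise and ingestion volume.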

Check out the recent blog about the transformations preview, or learn more here.

Getting started

All of these capabilities are available today in public preview as part of Azure Monitor pipeline.

If you’re already using the pipeline, you can start experimenting with secure ingestion, pod placement, and transformations right away. As always, feedback is welcome as we continue to refine these features on the path to general availability.

Updated Feb 25, 2026
Version 1.0