Migrating to the next generation of Virtual Nodes on Azure Container Instances (ACI)
Azure Container Instances (ACI) is a fully managed serverless container platform that lets you run containers on demand without provisioning infrastructure. Virtual Nodes on ACI allows you to run Kubernetes pods managed by an AKS cluster in a serverless way on ACI instead of on traditional VM-backed node pools. From a developer's perspective, Virtual Nodes look just like regular Kubernetes nodes, but under the hood the pods are executed on ACI's serverless infrastructure, enabling fast scale-out without waiting for new VMs to be provisioned. This makes Virtual Nodes ideal for bursty, unpredictable, or short-lived workloads where speed and cost efficiency matter more than long-running capacity planning.

The newer Virtual Nodes v2 implementation modernises this capability by removing many of the limitations of the original AKS managed add-on and delivering a more Kubernetes-native, flexible, and scalable experience when bursting workloads from AKS to ACI. In this article I will demonstrate how to migrate an existing AKS cluster from the Virtual Nodes managed add-on (legacy) to the new generation of Virtual Nodes on ACI, which is deployed and managed via Helm.

More information about Virtual Nodes on Azure Container Instances can be found here, and the GitHub repo is available here. Advanced documentation for Virtual Nodes on ACI is also available here, and includes topics such as node customisation, release notes and a troubleshooting guide. Please note that all code samples within this guide are examples only, and are provided without warranty/support.
Background

Virtual Nodes on ACI is rebuilt from the ground up, and includes several fixes and enhancements.

Added support/features:
- VNet peering, outbound traffic to the internet with network security groups
- Init containers
- Host aliases
- Arguments for exec in ACI
- Persistent Volumes and Persistent Volume Claims
- Container hooks
- Confidential containers (see supported regions list here)
- ACI standby pools

Planned future enhancements:
- Support for ACR image pull via Service Principal (SPN)
- Kubernetes network policies
- Support for IPv6
- Windows containers
- Port forwarding

Note: The new generation of the add-on is managed via Helm rather than as an AKS managed add-on.

Requirements & limitations
- Each Virtual Nodes on ACI deployment requires 3 vCPUs and 12 GiB memory on one of the AKS cluster's VMs
- Each Virtual Nodes on ACI deployment supports up to 200 pods
- DaemonSets are not supported
- Virtual Nodes on ACI requires AKS clusters with Azure CNI networking (Kubenet is not supported)
- Virtual Nodes on ACI is incompatible with API server authorized IP ranges for AKS (because of the subnet delegation to ACI)

Deploying the Virtual Nodes managed add-on (legacy)

For the sake of completeness, I will first guide you through the traditional steps of deploying the Virtual Nodes managed add-on for AKS. For this walkthrough, I'm using Bash via Windows Subsystem for Linux (WSL), along with the Azure CLI.
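Given the requirements above, a quick pre-flight check can catch an unsupported network configuration early. The sketch below is illustrative: the hard-coded `plugin` value stands in for the real output of `az aks show --query networkProfile.networkPlugin` against your cluster.

```shell
# Pre-flight check: Virtual Nodes on ACI requires the Azure CNI network plugin
# (Kubenet is not supported). In practice, fetch the value with:
#   plugin=$(az aks show -g $rg -n $clusterName --query networkProfile.networkPlugin -o tsv)
plugin="azure"   # hard-coded here for illustration

if [ "$plugin" = "azure" ]; then
  echo "network plugin OK: $plugin"
else
  echo "unsupported network plugin: $plugin (Azure CNI required)" >&2
  exit 1
fi
```

The same pattern extends to the other prerequisites, for example checking that a node pool VM has the 3 vCPUs / 12 GiB headroom a Virtual Nodes deployment needs.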
Prerequisites
- A recent version of the Azure CLI
- An Azure subscription with sufficient ACI quota for your selected region

Deployment steps

These steps are adapted from the official documentation here.

Set up environment variables:

    location=northeurope
    rg=rg-virtualnode-demo
    vnetName=vnet-virtualnode-demo
    clusterName=aks-virtualnode-demo
    aksSubnetName=subnet-aks
    vnSubnetName=subnet-vn

Create a resource group for the cluster and VNet:

    az group create --name $rg --location $location

Create the Virtual Network (VNet) and the AKS/ACI subnets:

    az network vnet create \
      --resource-group $rg --name $vnetName \
      --address-prefixes 10.0.0.0/8 \
      --subnet-name $aksSubnetName \
      --subnet-prefix 10.240.0.0/16

    az network vnet subnet create \
      --resource-group $rg \
      --vnet-name $vnetName \
      --name $vnSubnetName \
      --address-prefixes 10.241.0.0/16 \
      --delegations Microsoft.ContainerInstance/containerGroups

Retrieve the resource IDs for the AKS and ACI subnets:

    subnetId=$(az network vnet subnet show --resource-group $rg --vnet-name $vnetName --name $aksSubnetName --query id -o tsv)
    vnSubnetId=$(az network vnet subnet show --resource-group $rg --vnet-name $vnetName --name $vnSubnetName --query id -o tsv)

Create a small AKS cluster with 2 nodes:

    az aks create --resource-group $rg --name $clusterName \
      --node-count 2 --node-osdisk-size 30 --node-vm-size Standard_B4ms \
      --network-plugin azure --vnet-subnet-id $subnetId \
      --generate-ssh-keys

Enable the Virtual Nodes managed add-on (legacy):

    az aks enable-addons --resource-group $rg --name $clusterName --addons virtual-node --subnet-name $vnSubnetName

Retrieve the Managed Identity (MSI) used by Virtual Nodes and assign it the Network Contributor role for the ACI subnet:

    vnIdentityId=$(az aks show \
      --resource-group $rg \
      --name $clusterName \
      --query "addonProfiles.aciConnectorLinux.identity.resourceId" \
      -o tsv)
    vnIdentityObjectId=$(az identity show --ids $vnIdentityId --query principalId -o tsv)
    az role assignment create \
      --assignee-object-id "$vnIdentityObjectId" \
      --assignee-principal-type ServicePrincipal \
      --role "Network Contributor" \
      --scope "$vnSubnetId"

Download the cluster's kubeconfig file:

    az aks get-credentials --resource-group $rg --name $clusterName

Confirm the Virtual Nodes node shows within the cluster and is in a Ready state (virtual-node-aci-linux):

    $ kubectl get node
    NAME                                STATUS   ROLES   AGE     VERSION
    aks-nodepool1-35702456-vmss000000   Ready    <none>  46m     v1.33.6
    aks-nodepool1-35702456-vmss000001   Ready    <none>  46m     v1.33.6
    virtual-node-aci-linux              Ready    agent   3m28s   v1.25.0-vk-azure-aci-1.6.2

Migrating to the next generation of Virtual Nodes on Azure Container Instances via Helm chart

I will now explain how to migrate from the Virtual Nodes managed add-on (legacy) to the new generation of Virtual Nodes on ACI. For this walkthrough, I'm using Bash via Windows Subsystem for Linux (WSL), along with the Azure CLI.

Direct migration is not supported, and therefore the steps below show an example of removing the Virtual Nodes managed add-on and its resources, and then installing the Virtual Nodes on ACI Helm chart. In this walkthrough I will explain how to delete and re-create the Virtual Nodes subnet; however, if you need to preserve the VNet and/or use a custom subnet name, refer to the Helm customisation steps here.
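Before removing the legacy add-on, it can be useful to keep a scratch manifest that targets the legacy virtual node, so you have a known-good scheduling test to compare against after the migration. The sketch below is illustrative (pod name and image are my own choices); the nodeSelector and tolerations are the ones the legacy add-on expects.

```shell
# Minimal test pod targeting the LEGACY virtual node (virtual-node-aci-linux).
# Pod name and image are illustrative. Apply with:
#   kubectl apply -f legacy-test-pod.yaml
cat > legacy-test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: legacy-vn-test
spec:
  containers:
  - name: pause
    image: mcr.microsoft.com/oss/kubernetes/pause:3.6
  nodeSelector:
    kubernetes.io/role: agent
    kubernetes.io/os: linux
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  - key: azure.com/aci
    effect: NoSchedule
EOF
echo "manifest written: legacy-test-pod.yaml"
```

After the migration you can edit the same manifest to the v2 selectors (shown later in this article) and confirm the pod lands on the new virtualnode-0 node instead.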
Prerequisites
- A recent version of the Azure CLI
- An Azure subscription with sufficient ACI quota for your selected region
- Helm

Deployment steps

Initialise environment variables:

    location=northeurope
    rg=rg-virtualnode-demo
    vnetName=vnet-virtualnode-demo
    clusterName=aks-virtualnode-demo
    aksSubnetName=subnet-aks
    vnSubnetName=subnet-vn

Scale down any running Virtual Nodes workloads (example below):

    kubectl delete deploy <deploymentName> -n <namespace>

Disable the Virtual Nodes managed add-on (legacy):

    az aks disable-addons --resource-group $rg --name $clusterName --addons virtual-node

Export a backup of the original subnet configuration:

    az network vnet subnet show --resource-group $rg --vnet-name $vnetName --name $vnSubnetName > subnetConfigOriginal.json

Delete the original subnet (subnets cannot be renamed and therefore must be re-created):

    az network vnet subnet delete -g $rg -n $vnSubnetName --vnet-name $vnetName

Create the new Virtual Nodes on ACI subnet (replicate the configuration of the original subnet, but with the specific name value of cg):

    vnSubnetId=$(az network vnet subnet create \
      --resource-group $rg \
      --vnet-name $vnetName \
      --name cg \
      --address-prefixes 10.241.0.0/16 \
      --delegations Microsoft.ContainerInstance/containerGroups \
      --query id -o tsv)

Assign the cluster's -kubelet identity Contributor access to the infrastructure resource group, and Network Contributor access to the ACI subnet:

    nodeRg=$(az aks show --resource-group $rg --name $clusterName --query nodeResourceGroup -o tsv)
    nodeRgId=$(az group show -n $nodeRg --query id -o tsv)
    agentPoolIdentityId=$(az aks show --resource-group $rg --name $clusterName --query "identityProfile.kubeletidentity.resourceId" -o tsv)
    agentPoolIdentityObjectId=$(az identity show --ids $agentPoolIdentityId --query principalId -o tsv)
    az role assignment create \
      --assignee-object-id "$agentPoolIdentityObjectId" \
      --assignee-principal-type ServicePrincipal \
      --role "Contributor" \
      --scope "$nodeRgId"
    az role assignment create \
      --assignee-object-id "$agentPoolIdentityObjectId" \
      --assignee-principal-type ServicePrincipal \
      --role "Network Contributor" \
      --scope "$vnSubnetId"

Download the cluster's kubeconfig file:

    az aks get-credentials -n $clusterName -g $rg

Clone the virtualnodesOnAzureContainerInstances GitHub repo:

    git clone https://github.com/microsoft/virtualnodesOnAzureContainerInstances.git

Install the Virtual Nodes on ACI Helm chart:

    helm install <yourReleaseName> <GitRepoRoot>/Helm/virtualnode

Confirm the Virtual Nodes node shows within the cluster and is in a Ready state (virtualnode-n):

    $ kubectl get node
    NAME                                STATUS   ROLES    AGE     VERSION
    aks-nodepool1-35702456-vmss000000   Ready    <none>   4h13m   v1.33.6
    aks-nodepool1-35702456-vmss000001   Ready    <none>   4h13m   v1.33.6
    virtualnode-0                       Ready    <none>   162m    v1.33.7

Delete the previous Virtual Nodes node from the cluster:

    kubectl delete node virtual-node-aci-linux

Test and confirm pod scheduling on the Virtual Node:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
      name: demo-pod
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - 'counter=1; while true; do echo "Hello, World! Counter: $counter"; counter=$((counter+1)); sleep 1; done'
        image: mcr.microsoft.com/azure-cli
        name: hello-world-counter
        resources:
          limits:
            cpu: 2250m
            memory: 2256Mi
          requests:
            cpu: 100m
            memory: 128Mi
      nodeSelector:
        virtualization: virtualnode2
      tolerations:
      - effect: NoSchedule
        key: virtual-kubelet.io/provider
        operator: Exists

If the pod successfully starts on the Virtual Node, you should see output similar to the below:

    $ kubectl get pod -o wide demo-pod
    NAME       READY   STATUS    RESTARTS   AGE   IP           NODE                   NOMINATED NODE   READINESS GATES
    demo-pod   1/1     Running   0          95s   10.241.0.4   vnode2-virtualnode-0   <none>           <none>

Modify your deployments to run on Virtual Nodes on ACI

For the Virtual Nodes managed add-on (legacy), the following nodeSelector and tolerations are used to run pods on Virtual Nodes:

    nodeSelector:
      kubernetes.io/role: agent
      kubernetes.io/os: linux
      type: virtual-kubelet
    tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists
    - key: azure.com/aci
      effect: NoSchedule

For Virtual Nodes on ACI, the nodeSelector/tolerations are slightly different:

    nodeSelector:
      virtualization: virtualnode2
    tolerations:
    - effect: NoSchedule
      key: virtual-kubelet.io/provider
      operator: Exists

Troubleshooting

Check that the virtual-node-admission-controller and virtualnode-n pods are running within the vn2 namespace:

    $ kubectl get pod -n vn2
    NAME                                                 READY   STATUS    RESTARTS        AGE
    virtual-node-admission-controller-54cb7568f5-b7hnr   1/1     Running   1 (5h21m ago)   5h21m
    virtualnode-0                                        6/6     Running   6 (4h48m ago)   4h51m

If these pods are in a Pending state, your node pool(s) may not have enough resources available to schedule them (use kubectl describe pod to validate).
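Returning to the selector changes described above: if you have many manifests to update, the mechanical part of the rewrite can be scripted. The sketch below is an assumption that your manifests use exactly the legacy keys shown earlier; it demonstrates the nodeSelector rewrite on a sample snippet (the legacy azure.com/aci toleration would similarly be dropped, since v2 only needs the virtual-kubelet.io/provider one).

```shell
# Sample nodeSelector fragment using the legacy keys (illustrative file name).
cat > deploy-snippet.yaml <<'EOF'
      nodeSelector:
        kubernetes.io/role: agent
        kubernetes.io/os: linux
        type: virtual-kubelet
EOF

# Swap the legacy selector for the Virtual Nodes on ACI (v2) one:
# replace the role key with virtualization: virtualnode2, drop the rest.
sed -i \
  -e 's/kubernetes.io\/role: agent/virtualization: virtualnode2/' \
  -e '/kubernetes.io\/os: linux/d' \
  -e '/type: virtual-kubelet/d' \
  deploy-snippet.yaml

cat deploy-snippet.yaml
```

After the rewrite the snippet contains only the `virtualization: virtualnode2` selector; review tolerations by hand before re-applying real deployments.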
If the virtualnode-n pod is crashing, check the logs of the proxycri container to see whether there are any Managed Identity permission issues (the cluster's -agentpool MSI needs Contributor access on the infrastructure resource group):

    kubectl logs -n vn2 virtualnode-0 -c proxycri

Further troubleshooting guidance is available within the official documentation.

Support

If you have issues deploying or using Virtual Nodes on ACI, raise a GitHub issue here.

Calling all Microsoft Q&A contributors: Join Product Champions Program
🎉 Sign-ups are open for the Microsoft Q&A Product Champions Program (2026)!

✅ Sign up: https://aka.ms/AAzhkru
📘 Learn more + Welcome Guide: https://aka.ms/ProductChampionsWelcome

If you love answering questions and helping others on Microsoft Q&A, we’d love to have you join.

ExpressRoute Gateway Microsoft-initiated migration
Objective

The backend migration process is an automated upgrade performed by Microsoft to ensure your ExpressRoute gateways use the Standard IP SKU. This migration enhances gateway reliability and availability while maintaining service continuity. You receive notifications about scheduled maintenance windows and have options to control the migration timeline. For guidance on upgrading Basic SKU public IP addresses for other networking services, see Upgrading Basic to Standard SKU.

Important: As of September 30, 2025, Basic SKU public IPs are retired. For more information, see the official announcement.

You can initiate the ExpressRoute gateway migration yourself at a time that best suits your business needs, before the Microsoft team performs the migration on your behalf. This gives you control over the migration timing. Please use the ExpressRoute Gateway Migration Tool to migrate your gateway public IP to the Standard SKU. This tool provides a guided workflow in the Azure portal and PowerShell, enabling a smooth migration with minimal service disruption.

Backend migration overview

The backend migration is scheduled during your preferred maintenance window. During this time, the Microsoft team performs the migration with minimal disruption. You don't need to take any action. The process includes the following steps:

1. Deploy new gateway: Azure provisions a second virtual network gateway in the same GatewaySubnet alongside your existing gateway. Microsoft automatically assigns a new Standard SKU public IP address to this gateway.
2. Transfer configuration: The process copies all existing configurations (connections, settings, routes) from the old gateway. Both gateways run in parallel during the transition to minimize downtime. You may experience brief connectivity interruptions.
3. Clean up resources: After the migration completes successfully and passes validation, Azure removes the old gateway and its associated connections.
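Ahead of your maintenance window, it can help to inventory which public IPs in the subscription are still on the Basic SKU. One way is `az network public-ip list` with a JMESPath query; in the sketch below the CLI output is simulated with sample data (resource names are illustrative) so the filtering step itself is runnable.

```shell
# Simulated output of:
#   az network public-ip list -o tsv --query "[].[name, sku.name]"
# (tab-separated: name, SKU). Names are illustrative.
printf 'ergw-a-pip\tBasic\nappgw-pip\tStandard\n' > pips.tsv

# Basic-SKU entries are the ones the backend migration will replace.
awk -F'\t' '$2 == "Basic" {print $1}' pips.tsv
```

Running the real `az` command in place of the simulated file gives you the list of public IPs to plan around before the migration window opens.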
The new gateway includes a tag CreatedBy: GatewayMigrationByService to indicate it was created through the automated backend migration.

Important: To ensure a smooth backend migration, avoid making non-critical changes to your gateway resources or connected circuits during the migration process. If modifications are absolutely required, you can choose (after the Migrate stage completes) to either commit or abort the migration and then make your changes.

Backend process details

This section provides an overview of the Azure portal experience during backend migration for an existing ExpressRoute gateway. It explains what to expect at each stage and what you see in the Azure portal as the migration progresses. To reduce risk and ensure service continuity, the process performs validation checks before and after every phase. The backend migration follows four key stages:

1. Validate: Checks that your gateway and connected resources meet all migration requirements for the Basic to Standard public IP migration.
2. Prepare: Deploys the new gateway with the Standard IP SKU alongside your existing gateway.
3. Migrate: Cuts over traffic from the old gateway to the new gateway with a Standard public IP.
4. Commit or abort: Finalizes the public IP SKU migration by removing the old gateway, or reverts to the old gateway if needed.

These stages mirror the Gateway migration tool process, ensuring consistency across both migration approaches. The Azure resource group RGA serves as a logical container that displays all associated resources as the process updates, creates, or removes them. Before the migration begins, RGA contains the original gateway and its connections. The example throughout uses an ExpressRoute gateway named ERGW-A with two connections (Conn-A and LAconn) in the resource group RGA.

Portal walkthrough

Before the backend migration starts, a banner appears in the Overview blade of the ExpressRoute gateway.
It notifies you that the gateway uses the deprecated Basic IP SKU and will undergo backend migration between March 7, 2026, and April 30, 2026.

Validate stage

Once the migration starts, the banner on your gateway's Overview page updates to indicate that migration is in progress. In this initial stage, all resources are checked to ensure they are in a Passed state. If any prerequisites aren't met, validation fails and the Azure team doesn't proceed with the migration, to avoid traffic disruptions. No resources are created or modified in this stage. After the validation phase completes successfully, a notification appears indicating that validation passed and the migration can proceed to the Prepare stage.

Prepare stage

In this stage, the backend process provisions a new virtual network gateway in the same region and SKU type as the existing gateway. Azure automatically assigns a new public IP address and re-establishes all connections. This preparation step typically takes up to 45 minutes. To indicate that the new gateway is created by migration, the backend mechanism appends _migrate to the original gateway name. During this phase, the existing gateway is locked to prevent configuration changes, but you retain the option to abort the migration, which deletes the newly created gateway and its connections. After the Prepare stage starts, a notification appears showing that new resources are being deployed to the resource group.

Deployment status

In the resource group RGA, under Settings > Deployments, you can view the status of all resources newly deployed as part of the backend migration process. In the resource group RGA, under the Activity log blade, you can see events related to the Prepare stage. These events are initiated by GatewayRP, which indicates they are part of the backend process.

Deployment verification

After the Prepare stage completes, you can verify the deployment details in the resource group RGA under Settings > Deployments.
This section lists all components created as part of the backend migration workflow. The new gateway ERGW-A_migrate is deployed successfully, along with its corresponding connections Conn-A_migrate and LAconn_migrate.

Gateway tag

The newly created gateway ERGW-A_migrate includes the tag CreatedBy: GatewayMigrationByService, which indicates it was provisioned by the backend migration process.

Migrate stage

After the Prepare stage finishes, the backend process starts the Migrate stage. During this stage, the process switches traffic from the existing gateway ERGW-A to the new gateway ERGW-A_migrate. Initially the old gateway (ERGW-A) handles traffic; after the backend team initiates the traffic migration, the process switches traffic over to the new gateway (ERGW-A_migrate). This step can take up to 15 minutes and might cause brief connectivity interruptions.

Commit stage

After migration, the Azure team monitors connectivity for 15 days to ensure everything is functioning as expected. The banner automatically updates to indicate completion of the migration. During this validation period, you can't modify resources associated with either the old or the new gateway. To resume normal CRUD operations without waiting 15 days, you have two options:

- Commit: Finalize the migration and unlock resources.
- Abort: Revert to the old gateway, which deletes the new gateway and its connections.

To initiate Commit before the 15-day window ends, type yes and select Commit in the portal. When the commit is initiated from the backend, you will see "Committing migration. The operation may take some time to complete." The old gateway's connections are deleted first, and then the old gateway itself; these events show as initiated by GatewayRP in the activity logs.
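Throughout the process, resources created by the backend migration can be recognised by the CreatedBy: GatewayMigrationByService tag mentioned earlier. As a sketch, the filtering below runs against a simulated name/tag listing; in practice `az resource list --tag CreatedBy=GatewayMigrationByService` returns the matching resources directly.

```shell
# Simulated "name tag" listing; resource names match the example in this article.
# In practice:
#   az resource list --tag CreatedBy=GatewayMigrationByService --query "[].name" -o tsv
printf '%s\n' \
  'ERGW-A_migrate CreatedBy=GatewayMigrationByService' \
  'ERGW-A -' > resources.txt

# Resources provisioned by the backend migration carry the CreatedBy tag.
grep 'CreatedBy=GatewayMigrationByService' resources.txt | cut -d' ' -f1
```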
Finally, the resource group RGA contains only resources related to the migrated gateway ERGW-A_migrate. The ExpressRoute gateway migration from Basic to Standard public IP SKU is now complete.

Frequently asked questions

How long will the Microsoft team wait before committing to the new gateway?
The Microsoft team waits around 15 days after migration to allow you time to validate connectivity and ensure all requirements are met. You can commit at any time during this 15-day period.

What is the traffic impact during migration? Is there packet loss or routing disruption?
Traffic is rerouted seamlessly during migration. Under normal conditions, no packet loss or routing disruption is expected. Brief connectivity interruptions (typically less than 1 minute) might occur during the traffic cutover phase.

Can we make changes to the ExpressRoute gateway deployment during the migration?
Avoid making non-critical changes to the deployment (gateway resources, connected circuits, etc.). If modifications are absolutely required, you have the option (after the Migrate stage) to either commit or abort the migration.

Security Baseline for M365 Apps for enterprise v2512
Security baseline for Microsoft 365 Apps for enterprise (v2512, December 2025)

Microsoft is pleased to announce that the latest security baseline for Microsoft 365 Apps for enterprise, version 2512, is now available as part of the Microsoft Security Compliance Toolkit. This release builds on previous baselines and introduces updated, security-hardened recommendations aligned with modern threat landscapes and the latest Office administrative templates. As with prior releases, this baseline is intended to help enterprise administrators quickly deploy Microsoft-recommended security configurations, reduce configuration drift, and ensure consistent protection across user environments. Download the updated baseline today from the Microsoft Security Compliance Toolkit, test the recommended configurations, and implement as appropriate.

This release introduces and updates several security-focused policies designed to strengthen protections in Microsoft Excel, PowerPoint, and core Microsoft 365 Apps components. These changes reflect evolving attacker techniques, partner feedback, and Microsoft's secure-by-design engineering standards. The recommended settings in this security baseline correspond with the administrative templates released in version 5516. Below are the updated settings included in this baseline:

Excel: File Block Includes External Link Files
Policy path: User Configuration\Administrative Templates\Microsoft Excel 2016\Excel Options\Security\Trust Center\File Block Settings\File Block includes external link files
The baseline ensures that external links to workbooks blocked by File Block no longer refresh. Attempts to create or update links to blocked files return an error. This prevents data ingestion from untrusted or potentially malicious sources.
Block Insecure Protocols Across Microsoft 365 Apps
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block Insecure Protocols
The baseline blocks all non-HTTPS protocols when opening documents, eliminating downgrade paths and unsafe connections. This aligns with Microsoft's broader effort to enforce TLS-secured communication across productivity and cloud services.

Block OLE Graph Functionality
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block OLE Graph
This setting prevents MSGraph.Application and MSGraph.Chart (classic OLE Graph components) from executing. Microsoft 365 Apps will instead render a static image, mitigating a historically risky automation interface.

Block OrgChart Add-in
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Block OrgChart
The legacy OrgChart add-in is disabled, preventing execution and replacing output with an image. This reduces exposure to outdated automation frameworks while maintaining visual fidelity.

Restrict FPRPC Fallback in Microsoft 365 Apps
Policy path: User Configuration\Administrative Templates\Microsoft Office 2016\Security Settings\Restrict Apps from FPRPC Fallback
The baseline disables the ability for Microsoft 365 Apps to fall back to FrontPage Server Extensions RPC (FPRPC), an aging protocol not designed for modern security requirements. Avoiding fallback ensures consistent use of modern, authenticated file-access methods.

PowerPoint: OLE Active Content Controls Updated
Policy path: User Configuration\Administrative Templates\Microsoft PowerPoint 2016\PowerPoint Options\Security\OLE Active Content
This baseline enforces disabling interactive OLE actions; no OLE content will be activated. The recommended baseline selection ensures secure-by-default OLE activation, reducing risk from embedded legacy objects.
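These settings can reach a machine through several management channels, and Office resolves conflicts in a fixed order of precedence: Office cloud policy over ADMX/Group Policy over the user's own Trust Center choice. The snippet below is a purely illustrative toy model of that resolution order, not an actual Office mechanism:

```shell
# Toy model of Office policy precedence:
# cloud policy > ADMX/Group Policy > user's Trust Center setting.
# An empty string means "not configured" at that layer.
effective_setting() {
  for v in "$1" "$2" "$3"; do
    if [ -n "$v" ]; then
      echo "$v"
      return
    fi
  done
  echo "app default"
}

effective_setting ""        "Blocked" "Allowed"   # Group Policy wins: Blocked
effective_setting "Blocked" "Allowed" "Allowed"   # cloud policy wins: Blocked
```

The practical takeaway: a cloud policy silently overrides an ADMX/GPO value for the same setting, which is worth remembering when a baseline setting appears not to take effect.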
Deployment options for the baseline

IT admins can apply baseline settings in different ways. Depending on the method(s) chosen, different registry keys will be written, and they are observed in order of precedence: Office cloud policies override ADMX/Group Policies, which override end-user settings in the Trust Center.

Cloud policies may be deployed with the Office cloud policy service for policies in HKCU. Cloud policies apply to a user on any device accessing files in Office apps with their AAD account. In the Office cloud policy service, you can create a filter on the Area column to display the current security baselines, and within each policy's context pane the recommended baseline setting is set by default. Learn more about the Office cloud policy service.

ADMX policies may be deployed with Microsoft Intune for both HKCU and HKLM policies. These settings are written to the same place as Group Policy, but managed from the cloud. There are two methods to create and deploy policy configurations: administrative templates or the settings catalog.

Group Policy may be deployed with on-premises AD DS to deploy Group Policy Objects (GPOs) to users and computers. The downloadable baseline package includes importable GPOs, a script to apply the GPOs to local policy, a script to import the GPOs into Active Directory Group Policy, an updated custom administrative template (SecGuide.ADMX/L) file, all the recommended settings in spreadsheet form, and a Policy Analyzer rules file.

GPOs included in the baseline

Most organizations can implement the baseline's recommended settings without any problems. However, there are a few settings that will cause operational issues for some organizations. We've broken out related groups of such settings into their own GPOs to make it easier for organizations to add or remove these restrictions as a set. The local-policy script (Baseline-LocalInstall.ps1) offers command-line options to control whether these GPOs are installed.
The "MSFT Microsoft 365 Apps v2512" GPO set includes "Computer" and "User" GPOs that represent the "core" settings that should be trouble-free, plus each of these potentially challenging GPOs:

- "DDE Block - User" is a User Configuration GPO that blocks using DDE to search for existing DDE server processes or to start new ones.
- "Legacy File Block - User" is a User Configuration GPO that prevents Office applications from opening or saving legacy file formats.
- "Legacy JScript Block - Computer" disables legacy JScript execution for websites in the Internet Zone and Restricted Sites Zone.
- "Require Macro Signing - User" is a User Configuration GPO that disables unsigned macros in each of the Office applications.

If you have questions or issues, please let us know via the Security Baseline Community or this post. Related: Learn about Microsoft Baseline Security Mode.

Announcing new public preview capabilities in Azure Monitor pipeline
Azure Monitor pipeline, similar to an ETL (Extract, Transform, Load) process, enhances traditional data collection methods. It streamlines data collection from various sources through a unified ingestion pipeline and utilizes a standardized configuration approach that is more efficient and scalable. As Azure Monitor pipeline is used in more complex and security-sensitive environments, including on-premises infrastructure, edge locations, and large Kubernetes clusters, certain patterns and challenges show up consistently. Based on what we've been seeing across these deployments, we're sharing a few new capabilities now available in public preview. These updates focus on three areas that tend to matter most at scale: secure ingestion, control over where pipeline instances run, and processing data before it lands in Azure Monitor. Here's what's new, and why it matters:

- Secure ingestion with TLS and mutual TLS (mTLS)
- Pod placement controls for Azure Monitor pipeline
- Transformations and automated schema standardization

Secure ingestion with TLS and mutual TLS (mTLS)

Why is this needed? As telemetry ingestion moves beyond Azure and closer to the edge, security expectations increase. In many environments, plain TCP ingestion is no longer sufficient. Teams often need:

- Encrypted ingestion paths by default
- Strong guarantees around who is allowed to send data
- A way to integrate with existing PKI and certificate management systems

In regulated or security-sensitive setups, secure authentication at the ingestion boundary is a baseline requirement, not an optional add-on.

What does this feature do? Azure Monitor pipeline now supports TLS and mutual TLS (mTLS) for TCP-based ingestion endpoints in public preview.
With this support, you can:

- Encrypt data in transit using TLS
- Enable mutual authentication with mTLS, so both the client and the pipeline endpoint validate each other
- Use your own certificates
- Enforce security requirements at ingestion time, before data is accepted

This makes it easier to securely ingest data from network devices, appliances, and on-premises workloads without relying on external proxies or custom gateways. Learn more.

Pod placement controls for Azure Monitor pipeline

Why is it needed? As Azure Monitor pipeline scales in Kubernetes environments, default scheduling behavior often isn't sufficient. In many deployments, teams need more control to:

- Isolate telemetry workloads in multi-tenant clusters
- Run pipelines on high-capacity nodes for resource-intensive processing
- Prevent port exhaustion by limiting instances per node
- Enforce data residency or security zone requirements
- Distribute instances across availability zones for better resiliency and resource use

Without explicit placement controls, pipeline instances can end up running in sub-optimal locations, leading to performance and operational issues.

What does this feature do? With the new executionPlacement configuration (public preview), Azure Monitor pipeline gives you direct control over how pipeline instances are scheduled. Using this feature, you can:

- Target specific nodes using labels (for example, by team, zone, or node capability)
- Control how instances are distributed across nodes
- Enforce strict isolation by allowing only one instance per node
- Apply placement rules per pipeline group, without impacting other workloads

These rules are validated and enforced at deployment time. If the cluster can't satisfy the placement requirements, the pipeline won't deploy, making failures clear and predictable. This gives you better control over performance, isolation, and cluster utilization as you scale. Learn more.
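The actual executionPlacement schema is covered in the linked documentation; conceptually, these rules express the same kinds of constraints as standard Kubernetes scheduling primitives. As a generic illustration only (label names and values are made up, and this is not the executionPlacement format), label-based targeting plus zone spreading look like:

```shell
# Generic Kubernetes placement fragment -- illustrative labels, NOT the
# executionPlacement schema (see the linked docs for the real format).
cat > placement-example.yaml <<'EOF'
nodeSelector:
  workload-tier: telemetry
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: azure-monitor-pipeline
EOF
echo "wrote placement-example.yaml"
```

The nodeSelector confines instances to labelled nodes, while the spread constraint distributes them evenly across availability zones, the same goals the executionPlacement rules described above are designed to achieve.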
Transformations and Automated Schema Standardization

Why is this needed? Telemetry data is often high-volume, noisy, and inconsistent across sources. In many deployments, ingesting everything as-is and cleaning it up later isn't practical or cost-effective. There's a growing need to:

- Filter or reduce data before ingestion
- Normalize formats across different sources
- Route data directly into standard tables without additional processing

What does this feature do? Azure Monitor pipeline data transformations, already in public preview, let you process data before it's ingested. With transformations, you can:

- Filter, aggregate, or reshape incoming data
- Convert raw syslog or CEF messages into standardized schemas
- Choose sample KQL templates to perform transformations instead of manually writing KQL queries
- Route data directly into built-in Azure tables
- Reduce ingestion volume while keeping the data that matters

Check out the recent blog about the transformations preview, or learn more here.

Getting started

All of these capabilities are available today in public preview as part of Azure Monitor pipeline. If you're already using the pipeline, you can start experimenting with secure ingestion, pod placement, and transformations right away. As always, feedback is welcome as we continue to refine these features on the path to general availability.

Public Preview: Azure Monitor pipeline transformations
Overview

The Azure Monitor pipeline extends the data collection capabilities of Azure Monitor to edge and multi-cloud environments. It enables at-scale data collection (over 100k EPS) and routing of telemetry data before it's sent to the cloud. The pipeline can cache data locally and sync with the cloud when connectivity is restored, routing telemetry to Azure Monitor even in cases of intermittent connectivity. Learn more about this here - Configure Azure Monitor pipeline - Azure Monitor | Microsoft Learn

Why transformations matter

Lower Costs: Filter and aggregate before ingestion to reduce ingestion volume and in turn lower ingestion costs.
Better Analytics: Standardized schemas mean faster queries and cleaner dashboards.
Future-Proof: Built-in schema validation prevents surprises during deployment.

Azure Monitor pipeline solves the challenges of high ingestion costs and complex analytics by enabling transformations before ingestion, so your data is clean, structured, and optimized before it even hits your Log Analytics workspace. Check out a quick demo here.

Key features in public preview

1. Schema change detection
One of the most exciting additions is schema validation for Syslog and CEF:
Integrated into the “Check KQL Syntax” button in the Strato UI.
Detects if your transformation introduces schema changes that break compatibility with standard tables.
Provides actionable guidance:
Option 1: Remove schema-changing transformations like aggregations.
Option 2: Send data to a custom table that supports custom schemas.
This ensures your pipeline remains robust and compliant with analytics requirements. For example, in the picture below, extending to new columns that don't match the schema of the Syslog table throws an error during validation and asks the user to send to a custom table or remove the transformations.
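To make the two validation outcomes concrete, the sketch below shows both cases as transformation queries in KQL. In transformation queries, source refers to the incoming data stream; the column names come from the standard Syslog schema, while the added column name is invented for illustration.

```kusto
// Schema-preserving: filtering only removes rows, so the result still
// matches the standard Syslog table and passes validation.
source
| where SeverityLevel != "debug" and Facility != "local7"

// Schema-changing: 'extend' adds a column ("DeviceZone") that the standard
// Syslog table does not have, so validation requires either removing this
// step or routing the output to a custom table.
source
| extend DeviceZone = strcat(HostName, "-edge")
```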
While in the case of the example below, filtering does not modify the schema of the data, so no validation error is thrown and the user is able to send it to a standard table directly.

2. Pre-built KQL templates
Apply ready-to-use templates for common transformations.
Save time and minimize errors when writing queries.

3. Automatic schema standardization for syslog and CEF
Automatically schematize syslog and CEF data to fit standard tables, without requiring the user to add transformations that convert the raw data into syslog/CEF form.

4. Advanced filtering
Drop unwanted events based on attributes like:
Syslog: Facility, ProcessName, SeverityLevel.
CEF: DeviceVendor, DestinationPort.
Reduce noise and optimize ingestion costs.

5. Aggregation for high-volume logs
Group events by key fields (e.g., DestinationIP, DeviceVendor) into 1-minute intervals.
Summarize high-frequency logs for actionable insights.

6. Drop unnecessary fields
Remove redundant columns to streamline data and reduce storage overhead.

Supported KQL functions

1. Aggregation: summarize (by), sum, max, min, avg, count, bin
2. Filtering: where, contains, has, in, and, or, equality (==, !=), comparison (>, >=, <, <=)
3. Schematization: extend, project, project-away, project-rename, project-keep, iif, case, coalesce, parse_json
4. Variables for expressions or functions: let
5. Other functions:
String: strlen, replace_string, substring, strcat, strcat_delim, extract
Conversion: tostring, toint, tobool, tofloat, tolong, toreal, todouble, todatetime, totimespan

Get started today

Head to the Azure portal and explore the new Azure Monitor pipeline transformations UI. Apply templates, validate your KQL, and experience the power of Azure Monitor pipeline transformations. Find more information in the public docs here - Configure Azure Monitor pipeline transformations - Azure Monitor | Microsoft Learn

Security baseline for Windows Server 2025, version 2602
Microsoft is pleased to announce the February 2026 Revision (v2602) of the security baseline package for Windows Server 2025! You can download the baseline package from the Microsoft Security Compliance Toolkit, test the recommended configurations in your environment, and customize / implement them as appropriate.

Summary of Changes in This Release

This release includes several changes made since the Security baseline for Windows Server 2025, version 2506 to further assist in securing enterprise environments and to better align with the latest capabilities and standards. The changes are summarized below (security policy, followed by the change summary).

Configure the behavior of the sudo command: Configured as Enabled: Disabled on both MS and DC.
Configure Validation of ROCA-vulnerable WHfB keys during authentication: Configured as Enabled: Block on DC to block Windows Hello for Business (WHfB) keys that are vulnerable to the Return of Coppersmith's attack (ROCA).
Disable Internet Explorer 11 Launch Via COM Automation: Configured as Enabled to prevent legacy scripts and applications from programmatically launching Internet Explorer 11 using COM automation interfaces.
Do not apply the Mark of the Web tag to files copied from insecure sources: Configured as Disabled on both MS and DC.
Network security: Restrict NTLM: Audit Incoming NTLM Traffic: Configured as Enable auditing for all accounts on both MS and DC.
Network security: Restrict NTLM: Audit NTLM authentication in this domain: Configured as Enable all on DC.
Network security: Restrict NTLM: Outgoing NTLM traffic to remote servers: Configured as Audit all on both MS and DC.
NTLM Auditing Enhancements: Already enabled by default to improve visibility into NTLM usage within your environment.
Prevent downloading of enclosures: Removed from the baseline, as it is not applicable to Windows Server 2025 (it depends on Internet Explorer's RSS feed functionality).
Printer: Configure RPC connection settings: Enforce the default, RPC over TCP with Authentication Enabled, on both MS and DC.
Printer: Configure RPC listener settings: Configure as RPC over TCP | Kerberos on MS.
Printer: Impersonate a client after authentication: Add RESTRICTED SERVICES\PrintSpoolerService to allow the Print Spooler’s restricted service identity to impersonate clients securely.

Configure the behavior of the sudo command

Sudo for Windows can be used as a potential escalation of privilege vector when enabled in certain configurations. It may allow attackers or malicious insiders to run commands with elevated privileges, bypassing traditional UAC prompts. This is especially concerning in environments with Active Directory or domain controllers. We recommend configuring the policy Configure the behavior of the sudo command (System) as Enabled with the maximum allowed sudo mode set to Disabled, to prevent the sudo command from being used.

Configure Validation of ROCA-vulnerable WHfB keys during authentication

To mitigate Windows Hello for Business (WHfB) keys that are vulnerable to the Return of Coppersmith's attack (ROCA), we recommend enabling the setting Configure Validation of ROCA-vulnerable WHfB keys during authentication (System\Security Account Manager) in Block mode on domain controllers. To ensure there are no incompatible devices or orphaned/vulnerable keys in use that would break once blocked, please see Using WHfBTools PowerShell module for cleaning up orphaned Windows Hello for Business Keys - Microsoft Support. Note: A reboot is not required for changes to this setting to take effect.
Disable Internet Explorer 11 Launch Via COM Automation

Similar to the Windows 11 version 25H2 security baseline, we recommend disabling Internet Explorer 11 Launch Via COM Automation (Windows Components\Internet Explorer) to prevent legacy scripts and applications from programmatically launching Internet Explorer 11 using COM automation interfaces such as CreateObject("InternetExplorer.Application"). Allowing such behavior poses a significant risk by exposing systems to the legacy MSHTML and ActiveX components, which are vulnerable to exploitation.

Do not apply the Mark of the Web tag to files copied from insecure sources

We have included the setting Do not apply the Mark of the Web tag to files copied from insecure sources (Windows Components\File Explorer) configured as Disabled, which is consistent with the Windows 11 security baseline. When this setting is Disabled, Windows applies the Mark of the Web (MotW) tag to files copied from locations classified as Internet or other untrusted zones. This tag helps enforce additional protections such as SmartScreen checks and Office macro blocking, reducing the risk of malicious content execution.

NTLM Auditing

As part of our ongoing effort to help customers transition away from NTLM and adopt Kerberos for a more secure environment, we are introducing new recommendations to strengthen monitoring and prepare for future NTLM restrictions on Windows Server 2025.

Configure Network security: Restrict NTLM: Audit Incoming NTLM Traffic (Security Options) to Enable auditing for all accounts on both member servers and domain controllers. When enabled, the server logs events for all NTLM authentication requests that would be blocked once incoming NTLM traffic restrictions are enforced.

Configure Network security: Restrict NTLM: Audit NTLM authentication in this domain (Security Options) to Enable all on domain controllers.
This setting logs NTLM pass-through authentication requests from servers and accounts that would be denied when NTLM authentication restrictions are applied at the domain level.

Configure Network security: Restrict NTLM: Outgoing NTLM traffic to remote servers (Security Options) to Audit all on both member servers and domain controllers to log an event for each NTLM authentication request sent to a remote server, helping identify servers that still receive NTLM traffic.

In addition, there are two new NTLM auditing capabilities enabled by default that were recently introduced in Windows Server 2025 and Windows 11 version 25H2. These enhancements provide detailed audit logs to help security teams monitor and investigate authentication activity, identify insecure practices, and prepare for future NTLM restrictions. Since these auditing improvements are enabled by default, no additional configuration is required, and thus the baseline does not explicitly enforce them. For more details, see Overview of NTLM auditing enhancements in Windows 11 and Windows Server 2025.

Prevent Downloading of Enclosures

The policy Prevent downloading of enclosures (Windows Components\RSS Feeds) has been removed from the Windows Server 2025 security baseline. This setting is not applicable to Windows Server 2025 because it depends on Internet Explorer functionality for RSS feeds.

Printer security enhancements

There are two new policies in Windows Server 2025 designed to significantly improve the security posture of printing:
Require IPPS for IPP printers (Printers)
Set TLS/SSL security policy for IPP printers (Printers)

Enabling these policies may cause operational challenges in environments that still rely on unencrypted IPP or use self-signed or locally issued certificates. For this reason, these policies are not yet enforced in the Windows Server 2025 security baseline. However, we do recommend that customers transition away from unencrypted IPP and self-signed certificates, and restrict them for a more secure environment.
In addition, there are some changes to printer security:

Added RESTRICTED SERVICES\PrintSpoolerService to the Impersonate a client after authentication (User Rights Assignments) policy for both member servers and domain controllers, consistent with the security baseline for Windows 11 version 25H2.

Enforced the default setting for Configure RPC connection settings (Printers) to always use RPC over TCP with Authentication Enabled on both member servers and domain controllers. This prevents misconfiguration that could introduce security risks.

Raised the security bar of the policy Configure RPC listener settings (Printers) from Negotiate (the default) to Kerberos on member servers. This change encourages customers to move away from NTLM and adopt Kerberos for a more secure environment.

Secure Boot certificate update

To help organizations deploy, manage, and monitor the Secure Boot certificate update, Windows includes several policy settings under Administrative Templates\Windows Components\Secure Boot. These settings are deployment controls and aids.

Enable Secure Boot Certificate Deployment allows an organization to explicitly initiate certificate deployment on a device. When enabled, Windows begins the Secure Boot certificate update process the next time the Secure Boot task runs. This setting does not override firmware compatibility checks or force updates onto unsupported devices.

Automatic Certificate Deployment via Updates controls whether Secure Boot certificate updates are applied automatically through monthly Windows security and non‑security updates. By default, devices that Microsoft has identified as capable of safely applying the updates will receive and apply them automatically as part of cumulative servicing. If this setting is disabled, automatic deployment is blocked and certificate updates must be initiated through other supported deployment methods.
Certificate Deployment via Controlled Feature Rollout allows organizations to opt devices into a Microsoft‑managed Controlled Feature Rollout for Secure Boot certificate updates. When enabled, Microsoft assists with coordinating deployment across enrolled devices to reduce risk during rollout. Devices participating in a Controlled Feature Rollout must have diagnostic data enabled; devices that are not enrolled will not participate.

Secure Boot certificate updates depend on device firmware support. Some devices have known firmware limitations that can prevent updates from being applied safely. Organizations should test representative hardware, monitor Secure Boot event logs, and consult the deployment guidance at https://aka.ms/GetSecureBoot for detailed recommendations and troubleshooting information.

SMB Server hardening feature

SMB Server has been susceptible to relay attacks (e.g., CVE-2025-55234), and Microsoft has released multiple features to protect against them, including:
SMB Server signing, which can be enabled with the setting Microsoft network server: Digitally sign communications (always) (Security Option)
SMB Server extended protection for authentication (EPA), which can be enabled with the setting Microsoft network server: Server SPN target name validation level (Security Option)

To further support customers in adopting these SMB Server hardening features, in the September 2025 Security Updates Microsoft released support for audit events, across all supported in-market platforms, to audit SMB client compatibility with SMB Server signing and SMB Server EPA. These audit capabilities can be controlled via two policies located under Network\Lanman Server:
Audit client does not support signing
Audit SMB client SPN support

This allows you to identify any potential device or software incompatibility issues before deploying the hardening measures that are already supported by SMB Server.
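Alongside the Group Policy settings above, a quick way to inspect the signing posture of an individual server is with the built-in SmbShare PowerShell cmdlets. This is a sketch for assessment only; the baseline settings themselves should be deployed through the policies described above.

```powershell
# Check whether SMB signing is currently required on this server
Get-SmbServerConfiguration |
    Select-Object RequireSecuritySignature, EnableSecuritySignature

# Require SMB signing (the per-server equivalent of enabling
# "Microsoft network server: Digitally sign communications (always)")
Set-SmbServerConfiguration -RequireSecuritySignature $true -Force
```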
Our recommendation:

For domain controllers, SMB signing is already enabled by default, so no action is needed for hardening purposes.
For member servers, first enable the two new audit features to assess the environment, and then decide whether SMB Server signing or EPA should be used to mitigate the attack vector.

Please let us know your thoughts by commenting on this post or through the Security Baseline Community.

Announcing new hybrid deployment options for Azure Virtual Desktop
Today, we’re excited to announce the limited preview of Azure Virtual Desktop for hybrid environments, a new platform for bringing the power of cloud-native desktop virtualization to on-premises infrastructure.