<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Networking Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/bg-p/AzureNetworkingBlog</link>
    <description>Azure Networking Blog articles</description>
    <pubDate>Sun, 19 Apr 2026 00:10:21 GMT</pubDate>
    <dc:creator>AzureNetworkingBlog</dc:creator>
    <dc:date>2026-04-19T00:10:21Z</dc:date>
    <item>
      <title>Introducing the Container Network Insights Agent for AKS: Now in Public Preview</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/introducing-the-container-network-insights-agent-for-aks-now-in/ba-p/4512197</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We are thrilled to announce the public preview of &lt;STRONG&gt;Container Network Insights Agent&lt;/STRONG&gt;, agentic AI network troubleshooting for your workloads running in Azure Kubernetes Service (AKS).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;div data-video-id="https://www.youtube.com/watch?v=cxwq8rEchFI/1776372906450" data-video-remote-vid="https://www.youtube.com/watch?v=cxwq8rEchFI/1776372906450" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fcxwq8rEchFI%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dcxwq8rEchFI&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fcxwq8rEchFI%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;H4&gt;The Challenge&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;AKS networking is layered by design: Azure CNI, eBPF, Cilium, CoreDNS, NetworkPolicy, CiliumNetworkPolicy, and Hubble. Each layer contributes capabilities, and some of these can fail silently in ways the surrounding layers cannot observe.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When something breaks, the evidence usually exists. Operators already have tools such as Azure Monitor for metrics, Container Insights for cluster health, Prometheus and Grafana for dashboards, Cilium and Hubble for pod network observation, and kubectl for direct inspection. However, correlating those different signals and identifying the root cause takes time.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Imagine this scenario: An application performance alert fires. The on-call engineer checks dashboards, reviews events, inspects&amp;nbsp;pod health. Each tool shows its own slice. But the root cause usually lives in the relationship between signals, not in any single tool.&amp;nbsp;So&amp;nbsp;the real work begins to manually cross-reference Hubble flows,&amp;nbsp;NetworkPolicy&amp;nbsp;specs, DNS state, node-level stats, and verdicts. Each check is a separate query, a separate context switch, a separate mental model of how the layers interact.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This process is manual, slow, demands deep domain knowledge, and does not scale. Mean time to resolution (MTTR) stays high not because engineers lack skill, but because the investigation surface is wide and the interactions between the layers are complex.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;The&amp;nbsp;solution: Container Network Insights Agent&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Container Network Insights Agent is agentic AI that simplifies and speeds up AKS network troubleshooting.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Rather than replacing your existing observability tools, the container network insights agent&amp;nbsp;correlates&amp;nbsp;signals on demand to help you quickly&amp;nbsp;identify&amp;nbsp;and resolve network issues. You describe a problem in natural language, and the agent runs a structured investigation across layers. It delivers a diagnosis with the evidence, the root cause, and the exact commands to fix it.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The container network insights agent gets its visibility through two data sources:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;AKS MCP server&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; The container network insights agent integrates with the AKS MCP (Model Context Protocol) server, a standardized and secure interface to kubectl, Cilium, and Hubble. Every diagnostic command runs through the same tools operators already use, via a well-defined protocol that enforces security boundaries. No ad-hoc scripts, no custom API integrations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Linux Networking plugin&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; For diagnostics that require visibility below the Kubernetes API layer, the container network insights agent collects kernel-level telemetry directly from cluster nodes. This includes NIC ring buffer stats, kernel packet counters, SoftIRQ distribution, and socket buffer utilization. This is how it pinpoints packet drops and network saturation that surface-level metrics cannot explain.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
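For reference, the node-level counters listed above can also be read by hand on any Linux node. A minimal sketch; these are standard Linux interfaces, not agent-specific APIs:

```shell
# Read the standard Linux counters behind the signals described above.
cat /proc/net/softnet_stat   # per-CPU SoftIRQ packet backlog and drop counters (hex)
cat /proc/net/sockstat       # socket counts and socket buffer memory usage
# NIC ring buffer and driver-level drop statistics (requires ethtool on the
# node; the interface name eth0 is only an example):
#   ethtool -S eth0
```

The value of the agent is not that these counters are exotic, but that it reads and correlates them across nodes without the operator shelling into each one.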
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When you describe a symptom, the container network insights agent:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Classifies the issue and plans an investigation tailored to the symptom pattern&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Gathers evidence through the AKS MCP server and its Linux networking plugin across DNS, service routing, network policies, Cilium, and node-level statistics&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Reasons across layers to&amp;nbsp;identify&amp;nbsp;how a failure in one&amp;nbsp;component&amp;nbsp;manifests in another&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Delivers a structured report with pass/fail evidence, root cause analysis, and specific remediation guidance&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The container network insights agent is scoped to AKS networking: DNS failures, packet drops, connectivity issues, policy conflicts, and Cilium dataplane health. It does not modify workloads or change configurations. All remediation guidance is advisory: the agent tells you what to run, and you decide whether to apply it.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt; &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;What makes the container network insights agent different&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Deep telemetry, not just surface metrics&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; Most observability tools operate at the Kubernetes API level. The container network insights agent goes deeper, collecting kernel-level network statistics, BPF program drop counters, and interface-level diagnostics that pinpoint exactly where packets are being lost and why. This is the difference between knowing something is wrong and knowing precisely what is causing it.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Cross-layer reasoning&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; Networking incidents rarely have single-layer explanations. The container network insights agent correlates evidence from DNS, service routing, network policy, Cilium, and node-level statistics together. It surfaces causal relationships that span layers. For example: node-level RX drops caused by a Cilium policy denial triggered by a label mismatch after a routine Helm deployment, even though the pods themselves appear healthy.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Structured and auditable&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Every conclusion traces to a specific check, its output, and its pass/fail status. If all checks pass, the container network insights agent reports no issue. It does not invent problems. Investigations are deterministic and reproducible. Results can be reviewed, shared, and rerun.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Guidance, not just findings&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; The&amp;nbsp;container network insights agent explains what the evidence means,&amp;nbsp;identifies&amp;nbsp;the root cause, and provides specific remediation commands. The analysis is done; the operator reviews and decides.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;Where the container network insights agent fits&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The container network insights agent is not another monitoring tool. It does not collect continuous metrics or replace dashboards. Your existing observability stack, including Azure Monitor, Prometheus, Grafana, Container Insights, and your log pipelines, keeps doing what it does. The agent complements those tools by adding an intelligence layer that turns fragmented signals into actionable diagnosis. Your alerting detects the problem; this agent helps you understand it.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;Safe by Design&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The container network insights agent is built for production clusters.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Read-only access&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; Minimal RBAC scoped to pods, services, endpoints, nodes, namespaces, network policies, and Cilium resources. The container network insights agent deploys a temporary debug DaemonSet only for packet-drop diagnostics that require host-level stats.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Advisory remediation only&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; The&amp;nbsp;container network insights agent tells you what to run. It never executes changes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Evidence-backed conclusions&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; Every root&amp;nbsp;cause&amp;nbsp;traces to a specific failed check. No speculation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Scoped and enforced&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt; &lt;/STRONG&gt;The&amp;nbsp;agent handles AKS networking questions only. It does not respond to off-topic requests. Prompt injection defenses are built in.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Credentials stay in the cluster&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt; The&amp;nbsp;container network insights agent authenticates via managed identity with workload identity federation. No secrets, no static credentials. Only a&amp;nbsp;session&amp;nbsp;ID cookie reaches the browser.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="auto"&gt;Get Started&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Container network insights agent is available in Public Preview in&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Central US, East US, East US 2, UK South, and West US 2&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The agent&amp;nbsp;deploys as&amp;nbsp;an AKS cluster extension and uses your own Azure OpenAI resource, giving you control over model configuration and data residency. Full capabilities require Cilium and Advanced Container Networking Services. DNS and packet drop diagnostics work on all supported AKS clusters.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
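A deployment sketch, assuming the standard AKS cluster extension workflow via az k8s-extension create. The resource names are placeholders, and the exact extension-type string comes from the quickstart, not from this post:

```shell
# Sketch: install an AKS cluster extension (the general shape of the workflow).
# All names in CAPS are placeholders; consult the quickstart for the agent's
# actual extension type and any required configuration settings.
az k8s-extension create \
  --resource-group MY_RESOURCE_GROUP \
  --cluster-name MY_AKS_CLUSTER \
  --cluster-type managedClusters \
  --name container-network-insights \
  --extension-type EXTENSION_TYPE_FROM_QUICKSTART
```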
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;To try it:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Review the Container Network Insights Agent overview on Microsoft Learn &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/container-network-insights-agent-overview" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;https://learn.microsoft.com/en-us/azure/aks/container-network-insights-agent-overview&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Follow the quickstart to deploy the container network insights agent and run your first diagnostic&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;- Share feedback via the Azure feedback channel or the thumbs-up and thumbs-down feedback controls on each response&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Your feedback shapes the roadmap. If the agent gets something wrong or misses a scenario you&amp;nbsp;encounter, we want to hear about it.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 16 Apr 2026 21:05:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/introducing-the-container-network-insights-agent-for-aks-now-in/ba-p/4512197</guid>
      <dc:creator>chandanAggarwal</dc:creator>
      <dc:date>2026-04-16T21:05:45Z</dc:date>
    </item>
    <item>
      <title>Enabling fallback to internet for Azure Private DNS Zones in hybrid architectures</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/enabling-fallback-to-internet-for-azure-private-dns-zones-in/ba-p/4511131</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Introduction&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure Private Endpoint enables secure connectivity to Azure PaaS services such as:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Azure SQL Managed Instance&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Azure Container Registry&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Azure Key Vault&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Azure Storage Account&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;through private IP addresses within a virtual network.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When Private Endpoint is enabled for a service, Azure DNS automatically changes the name resolution path using CNAME redirection.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Example:&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;myserver.database.windows.net &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;↓&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;myserver.privatelink.database.windows.net&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;↓&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Private IP&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure Private DNS Zones are then used to resolve this Private Endpoint FQDN within the&amp;nbsp;VNet.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
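The redirection can be observed with an ordinary DNS lookup. A sketch using the example name above; the answers shown are illustrative, and the actual private IP depends on your zone configuration:

```shell
# From a VM inside a VNet linked to the privatelink zone, the CNAME chain
# ends at the Private Endpoint's private IP. Example name from the text above.
dig myserver.database.windows.net
# Illustrative answer section:
#   myserver.database.windows.net.             CNAME  myserver.privatelink.database.windows.net.
#   myserver.privatelink.database.windows.net. A      10.0.1.5
# From outside the VNet, the same name resolves through public DNS to the
# service's public endpoint instead.
```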
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;However, this introduces a &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;critical DNS limitation&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; in:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Hybrid cloud architectures (AWS → Azure SQL MI)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Multi-region deployments (DR region access)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Cross-tenant / cross-subscription access&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Multi-VNet isolated networks&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If the Private DNS zone does not&amp;nbsp;contain&amp;nbsp;a corresponding record, Azure DNS returns:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;NXDOMAIN (Non-Existent Domain)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When the DNS client receives this negative response (NXDOMAIN), it gets no usable answer and the query fails; there is no retry against another resolution path.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This results in:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;❌&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Application connectivity failure&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;❌&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Database connection timeout&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;❌&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;AKS pod DNS resolution errors&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;❌&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;DR failover application outage&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Problem statement&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In traditional Private Endpoint DNS resolution:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;DNS&amp;nbsp;query is sent from the application.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Azure DNS checks&amp;nbsp;linked&amp;nbsp;Private DNS Zone.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="3" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;If no matching record exists: NXDOMAIN returned&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;DNS queries for Azure Private Link and network isolation scenarios across different tenants and resource groups follow unique name resolution paths, which can affect the ability to reach Private Link-enabled resources outside a tenant's control.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;does not retry resolution using public DNS by default&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Therefore:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="4" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Public Endpoint resolution never occurs&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="4" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;DNS query fails permanently&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="4" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Application cannot connect&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Microsoft native solution&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Fallback to internet (NxDomainRedirect)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure introduced a DNS resolution policy:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;resolutionPolicy&amp;nbsp;=&amp;nbsp;NxDomainRedirect&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This property enables public recursion via Azure’s recursive resolver fleet when an authoritative NXDOMAIN response is received for a Private Link zone.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When enabled:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Azure DNS retries the query&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Public endpoint resolution occurs&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Application connectivity continues&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;No custom DNS forwarder&amp;nbsp;required&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Fallback policy is configured at:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Private DNS Zone →&amp;nbsp;virtualnetwork&amp;nbsp;link&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Resolution policy is enabled at the virtual network link level with the&amp;nbsp;NxDomainRedirect&amp;nbsp;setting.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the Azure portal this appears as:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         Enable fallback to internet&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;How it works&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;SPAN data-contrast="auto"&gt;Without fallback:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Application → Azure DNS&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Private DNS Zone&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Record&amp;nbsp;missing&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → NXDOMAIN&amp;nbsp;returned&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Connection failure&lt;/SPAN&gt; &lt;SPAN data-ccp-props="{&amp;quot;469777462&amp;quot;:[9360],&amp;quot;469777927&amp;quot;:[0],&amp;quot;469777928&amp;quot;:[4]}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;SPAN data-contrast="auto"&gt;With fallback&amp;nbsp;enabled:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Application → Azure DNS&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Private DNS Zone&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Record&amp;nbsp;missing&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → NXDOMAIN&amp;nbsp;returned&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Azure recursive resolver&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Public DNS resolution&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Public endpoint IP returned&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;         → Connection successful&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Azure recursive resolver retries the query using the public endpoint QNAME each time NXDOMAIN is received from the private zone scope&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Real world&amp;nbsp;use case&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;AWS Application Connecting to Azure SQL Managed Instance&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You are running:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;SQL MI in Azure&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Private Endpoint&amp;nbsp;enabled&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Private DNS Zone: privatelink.database.windows.net&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;AWS application tries to connect:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;my-mi.database.windows.net&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If DR region DNS record is not available:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Without fallback:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;DNS query → NXDOMAIN → App failure&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With fallback enabled:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;DNS query → Retry public DNS → Connection success&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Step-by-step&amp;nbsp;configuration&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Method 1 – Azure&amp;nbsp;portal&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Go to: Private DNS Zones&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Select your Private Link DNS Zone:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Example:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;privatelink.database.windows.net&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Select: Virtual network links&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Open your linked VNet&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Enable:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Enable fallback to internet&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="8" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Click: Save&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Method 2 – Azure CLI&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You can configure fallback policy using:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;az&amp;nbsp;network private-dns&amp;nbsp;link&amp;nbsp;vnet&amp;nbsp;update \&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;      --resource-group RG-Network \&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;      --zone-name privatelink.database.windows.net \&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;      --name VNET-Link \&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;      --resolution-policy&amp;nbsp;NxDomainRedirect&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Validation steps&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Run from Azure VM:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;nslookup&amp;nbsp;my-mi.database.windows.net&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Expected:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;✔&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Private IP (if available)&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;✔&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Public IP (if fallback triggered)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Security considerations&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Fallback to internet:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Does NOT expose data&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Only&amp;nbsp;impacts&amp;nbsp;DNS resolution&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-contrast="auto"&gt;✅&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Network traffic still governed by:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;NSG&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Azure Firewall&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;UDR&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="9" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Service Endpoint Policies&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;DNS resolution fallback only&amp;nbsp;triggers on&amp;nbsp;NXDOMAIN and does not change&amp;nbsp;networklevel&amp;nbsp;firewall&amp;nbsp;controls.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When should you enable this?&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Recommended in:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Hybrid AWS → Azure connectivity&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Multiregion&amp;nbsp;DR deployments&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;AKS accessing Private Endpoint services&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;CrossTenant&amp;nbsp;connectivity&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="10" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Private Link + VPN / ExpressRoute scenarios&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Conclusion&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Fallback to Internet using&amp;nbsp;NxDomainRedirect&amp;nbsp;provides:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="11" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Seamless hybrid connectivity&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="11" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Reduced DNS complexity&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="11" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;No custom forwarders&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="11" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Improved application resilience&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;and simplifies DNS resolution for modern Private&amp;nbsp;Endpointenabled&amp;nbsp;architectures.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 15:57:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/enabling-fallback-to-internet-for-azure-private-dns-zones-in/ba-p/4511131</guid>
      <dc:creator>kirankumar_manchiwar04</dc:creator>
      <dc:date>2026-04-15T15:57:15Z</dc:date>
    </item>
    <item>
      <title>A demonstration of Virtual Network TAP</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/a-demonstration-of-virtual-network-tap/ba-p/4479136</link>
      <description>&lt;P&gt;Azure &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview" target="_blank" rel="noopener"&gt;Virtual Network Terminal Access Point (VTAP)&lt;/A&gt;, at the time of writing in April 2026 in public preview in select regions, copies network traffic from source Virtual Machines to a collector or traffic analytics tool, running as a Network Virtual Appliance (NVA). VTAP creates a full copy of all traffic sent and received by Virtual Machine Network Interface Card(s) (NICs) designated as VTAP source(s). This includes packet payload content - in contrast to VNET Flow Logs, which only collect traffic meta data. Traffic collectors and analytics tools are 3rd party partner products, available from the Azure Marketplace, amongst which are the major Network Detection and Response solutions.&lt;/P&gt;
&lt;P&gt;VTAP is an agentless, cloud-native traffic tap at the Azure network infrastructure level. It is entirely out-of-band; it has no impact on the source VM's network performance and the source VM is unaware of the tap. Tapped traffic is VXLAN-encapsulated and delivered to the collector NVA, in the same VNET as the source VMs, or in a peered VNET.&lt;/P&gt;
&lt;P&gt;This post demonstrates the basic functionality of VTAP: copying traffic into and out of a source VM, to a destination VM.&lt;/P&gt;
&lt;P&gt;The demo consists of three Windows VMs in one VNET, each running a basic web server that responds with the VM's name. Another VNET contains the target - a Windows VM on which Wireshark is installed, to inspect traffic forwarded by VTAP. This demo does not use 3rd party&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview#virtual-network-tap-partner-solutions" target="_blank" rel="noopener"&gt;VTAP partner solutions&lt;/A&gt; from the Marketplace.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The lab for this demonstration is available on Github: &lt;A href="https://github.com/mddazure/virtual-network-tap-lab" target="_blank" rel="noopener"&gt;Virtual Network TAP&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;The VTAP resource is configured with the target VM's NIC as the destination.&lt;/P&gt;
&lt;P&gt;All traffic captured from sources is VXLAN-encapsulated and sent to the destination on UDP port 4789 (this cannot be changed).&lt;/P&gt;
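&lt;P&gt;To make the encapsulation concrete, here is a minimal Python sketch that parses the 8-byte VXLAN header (RFC 7348) found at the start of each UDP/4789 payload and returns the VNI together with the inner Ethernet frame. The sample bytes are fabricated for illustration.&lt;/P&gt;

```python
def parse_vxlan(payload: bytes):
    """Parse the 8-byte VXLAN header of a UDP payload (RFC 7348).

    Returns (vni, inner_ethernet_frame)."""
    # Byte 0 is the flags field; 0x08 means "VNI present". Bytes 4-6
    # carry the 24-bit VNI; bytes 1-3 and 7 are reserved.
    if len(payload) >= 8 and payload[0] == 0x08:
        vni = int.from_bytes(payload[4:7], "big")
        return vni, payload[8:]
    raise ValueError("not a VXLAN packet")

# Fabricated sample: flags byte, 3 reserved bytes, VNI 100, 1 reserved
# byte, then two bytes standing in for the inner Ethernet frame.
header = bytes([0x08, 0, 0, 0]) + (100).to_bytes(3, "big") + bytes([0])
vni, inner = parse_vxlan(header + b"\xaa\xbb")
```

&lt;P&gt;Wireshark performs exactly this decapsulation automatically when it recognizes UDP port 4789, which is why the inner flows appear decoded in the capture panel.&lt;/P&gt;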
&lt;P&gt;We use a single source to make the traffic flows easier to inspect in Wireshark; we will see that communication from the other VMs to our source VM is captured and copied to the destination. In a real world scenario, multiple or all of the VMs in an environment could be set up as TAP sources.&lt;/P&gt;
&lt;P&gt;The source VM, vm1, generates traffic through a script that continuously polls vm2 and vm3 on http://10.0.2.5 and http://10.0.2.6, and &lt;A class="lia-external-url" href="http://ipconfig.io" target="_blank" rel="noopener"&gt;https://ipconfig.io&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;On the destination VM, we use Wireshark to observe captured traffic.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The filter on UDP port 4789 causes Wireshark to only capture the VXLAN encapsulated traffic forwarded by VTAP.&lt;/P&gt;
&lt;P&gt;Wireshark automatically decodes VXLAN and displays the actual traffic to and from vm1, which is set up as the (only) VTAP source. Wireshark's capture panel shows the decapsulated TCP and HTTP exchanges, including the TCP handshake, between vm1 and the other VMs, and&amp;nbsp;&lt;A href="https://ipconfig.io/" target="_blank" rel="noopener"&gt;https://ipconfig.io&lt;/A&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Expanding the lines in the detail panel below the capture panel shows the details of the VXLAN encapsulation. The outer IP packets, encapsulating the VXLAN frames in UDP, originate from the source VM's IP address, 10.0.2.4, and have the target VM's address, 10.1.1.4, as the destination.&lt;/P&gt;
&lt;P&gt;The VXLAN frames contain all the details of the original Ethernet frames sent from and received by the source VM, and the IP packets within those. The Wireshark trace shows the full exchange between vm1 and the destinations it speaks with.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;This brief demonstration uses Wireshark to simply visualize the operation of VTAP.&amp;nbsp;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;The&lt;/SPAN&gt;&lt;A class="lia-external-url" style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview#virtual-network-tap-partner-solutions" target="_blank" rel="noopener"&gt; partner solutions&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; available from the Azure Marketplace operate on the captured traffic to implement their specific functionality.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 10:30:01 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/a-demonstration-of-virtual-network-tap/ba-p/4479136</guid>
      <dc:creator>Marc de Droog</dc:creator>
      <dc:date>2026-04-15T10:30:01Z</dc:date>
    </item>
    <item>
      <title>Connecting an ExpressRoute circuit to Megaport Virtual Edge</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/connecting-an-expressroute-circuit-to-megaport-virtual-edge/ba-p/4510770</link>
      <description>&lt;P&gt;Megaport is an ExpressRoute partner in many &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/expressroute/expressroute-locations?tabs=america%2Cj-m%2Cus-government-cloud%2Ca-C#global-commercial-azure" target="_blank" rel="noopener"&gt;locations&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;The &lt;A class="lia-external-url" href="https://docs.megaport.com/mcr/" target="_blank" rel="noopener"&gt;Megaport Cloud Router (MCR)&lt;/A&gt; allows ExpressRoute customers to connect leased lines to their on-premises locations, and to connect to other cloud providers. MCR is easy to set up and operate; it even automatically configures the ExpressRoute Private Peering on both the Megaport and Azure sides. However, it does not have a command line interface and does not permit advanced configuration.&lt;/P&gt;
&lt;P&gt;For advanced scenarios,&amp;nbsp;&lt;A class="lia-external-url" href="https://docs.megaport.com/mve/" target="_blank" rel="noopener"&gt;Megaport Virtual Edge (MVE)&lt;/A&gt; provides a platform to run fully configurable Network Virtual Appliances (NVAs) from a variety of vendors.&lt;/P&gt;
&lt;P&gt;This post describes how to connect ExpressRoute to MVE running a Cisco 8000v NVA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Create the ExpressRoute Circuit&lt;/H1&gt;
&lt;P&gt;In the Azure portal, create an ExpressRoute circuit with Standard Resiliency in a Peering location where Megaport is available.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;When the circuit deployment is completed, copy the Service key.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Create MVE and ExpressRoute connections&lt;/H1&gt;
&lt;P&gt;Log in to the &lt;A class="lia-external-url" href="https://portal.megaport.com/" target="_blank" rel="noopener"&gt;Megaport management portal&lt;/A&gt;, go to Services and click Create MVE.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select Cisco C8000 as the Vendor / Product.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;On the next screen:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select the Location where the MVE is to be deployed - use the ExpressRoute peering location.&lt;/LI&gt;
&lt;LI&gt;Select the MVE size.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;On the following screen:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select Autonomous under Appliance Mode.&lt;/LI&gt;
&lt;LI&gt;Paste a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/viva/glint/setup/sftp-ssh-key-gen" target="_blank"&gt;2048-bit RSA SSH public key&lt;/A&gt; in the box.&lt;/LI&gt;
&lt;LI&gt;Under Virtual Interfaces (vNICs), add vNICs as needed. One ExpressRoute circuit requires 2 vNICs, one for each path.&lt;BR /&gt;vNIC0 will be used to connect a Megaport Internet VXC for SSH access to the device.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;On the following screen, give the MVE a name under Finalize Details in the left bar, verify the Summary, and click Add MVE.&lt;/P&gt;
&lt;P&gt;Clicking Create Megaport Internet in the pop-up that now appears lets you directly provision an internet VXC:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Select the location with the lowest latency to the MVE - this will be at the top of the list.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;On the next screen:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Leave the name as proposed or change as needed.&lt;/LI&gt;
&lt;LI&gt;Set Rate Limit to 20 Mbps (lowest possible, this is for SSH access only).&lt;/LI&gt;
&lt;LI&gt;Leave A-vNIC set to vNIC-0.&lt;/LI&gt;
&lt;LI&gt;Leave Preferred A-End VLAN at Untagged.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;On the next screen verify the configuration and click Add VXC.&lt;/P&gt;
&lt;P&gt;On the main Services page, the MVE and Internet VXC now show with the note "Order pending".&lt;/P&gt;
&lt;P&gt;Click +Connection in the MVE box to connect a VXC to the ExpressRoute Circuit.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Under Choose Destination Type select Cloud.&lt;/LI&gt;
&lt;LI&gt;Then select Microsoft Azure as the Provider.&lt;/LI&gt;
&lt;LI&gt;Paste in the circuit's Service Key and select Port for the Primary path.&lt;/LI&gt;
&lt;LI&gt;Click Next.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;On the next screen:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Give the connection a name.&lt;/LI&gt;
&lt;LI&gt;Leave the Rate Limit as proposed; it is set to the bandwidth of the circuit.&lt;/LI&gt;
&lt;LI&gt;At A-end vNIC, select vNIC-1 (do not leave this at vNIC-0!).&lt;/LI&gt;
&lt;LI&gt;At Preferred A-End VLAN, turn off Untag and enter a VLAN number. This will be used to set the sub-interface in the MVE configuration later.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Scroll down to Azure peering VLAN.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Leave Configure Azure Peering VLAN turned on.&lt;/LI&gt;
&lt;LI&gt;Enter the same VLAN ID that will be used in the configuration of the Private Peering on the Azure end.&lt;/LI&gt;
&lt;LI&gt;Click Next.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Verify the configuration summary and click Add VXC.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Repeat the process to add the Secondary path, terminating on vNIC-2. Enter a different VLAN ID for Preferred A-End VLAN. Enter the same VLAN ID that will be used in the Private Peering under Azure peering VLAN.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;When the second ExpressRoute VXC is configured, click Review Order in the right hand bar of the Services screen.&lt;/P&gt;
&lt;P&gt;When the validation completes, click Order Now.&lt;/P&gt;
&lt;P&gt;This will provision the MVE and the VXCs. It will take a few minutes for all services to come up.&lt;/P&gt;
&lt;P&gt;In the Azure portal, the Provider Status of the ExpressRoute circuit will change to Provisioned.&lt;/P&gt;
&lt;H1&gt;Configure Private Peering&lt;/H1&gt;
&lt;P&gt;Go back to the ExpressRoute circuit in the Azure portal. The Provider Status will now be Provisioned, and the Private Peering can be enabled. Click on Peerings under Settings and then click Azure private.&lt;/P&gt;
&lt;P&gt;Enter the Peer ASN and Primary and Secondary subnets. Under VLAN ID enter the&amp;nbsp;&lt;STRONG&gt;same number as configured under Azure Peering VLAN in the Primary and Secondary VXC configurations&lt;/STRONG&gt;&amp;nbsp;in the Megaport portal.&lt;/P&gt;
&lt;img /&gt;
&lt;H1&gt;Configure Cisco IOS&lt;/H1&gt;
&lt;P&gt;Establish an SSH session to the MVE. Use the public ip address from the internet VXC, and the private key that belongs with the public key used when deploying the MVE.&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;ssh -i &amp;lt;private-key-file&amp;gt; mveadmin@&amp;lt;public ip&amp;gt;&lt;/LI-CODE&gt;
&lt;P&gt;Configure interfaces:&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;interface GigabitEthernet2
 no ip address
 no shutdown
 negotiation auto
!
interface GigabitEthernet2.100
 encapsulation dot1Q 100
 ip address 192.168.0.1 255.255.255.252
!
interface GigabitEthernet3
 no ip address
 no shutdown
 negotiation auto
!
interface GigabitEthernet3.101
 encapsulation dot1Q 101
 ip address 192.168.0.5 255.255.255.252&lt;/LI-CODE&gt;
&lt;P&gt;Use the Preferred A-End VLAN values set in the primary and secondary VXCs to configure the encapsulation on the subinterfaces. Use the lower address of each /30 subnet configured on the Private Peering.&lt;/P&gt;
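&lt;P&gt;As a quick sanity check, the peering addresses can be derived from the /30 subnets with Python's ipaddress module; the lower usable host goes on the MVE subinterface and the higher one is the Azure side (subnets as used in this walkthrough):&lt;/P&gt;

```python
import ipaddress

# The primary and secondary Private Peering subnets from this walkthrough.
for subnet in ("192.168.0.0/30", "192.168.0.4/30"):
    # A /30 has exactly two usable host addresses.
    mve_side, azure_side = ipaddress.ip_network(subnet).hosts()
    print(f"{subnet}: MVE {mve_side}, Azure {azure_side}")
```

&lt;P&gt;This yields 192.168.0.1/.5 for the MVE subinterfaces and 192.168.0.2/.6 for the Azure side, matching the IOS configuration above.&lt;/P&gt;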
&lt;P&gt;The higher IP addresses of the Private Peering should now respond to ping:&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;ping 192.168.0.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.0.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms&lt;/LI-CODE&gt;
&lt;P&gt;If ping does not work there likely is an ARP resolution issue. Run `show arp` and `debug arp` and check the&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/troubleshoot/azure/expressroute/expressroute-troubleshooting-arp-resource-manager" target="_blank" rel="noopener"&gt;ARP table&lt;/A&gt; of the Private Peering.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Configure BGP:&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;router bgp 64000
 bgp log-neighbor-changes
 neighbor 192.168.0.2 remote-as 12076
 neighbor 192.168.0.2 soft-reconfiguration inbound
 neighbor 192.168.0.6 remote-as 12076
 neighbor 192.168.0.6 soft-reconfiguration inbound&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;Verify both neighbors show BGP state = Established:&lt;/P&gt;
&lt;LI-CODE lang="shell-session"&gt;sh ip bgp neighbor 192.168.0.2
BGP neighbor is 192.168.0.2,  remote AS 12076, external link
  BGP version 4, remote router ID 192.168.0.2
  BGP state = Established, up for 1d21h
  ...&lt;/LI-CODE&gt;
&lt;P&gt;This completes the basic configuration of ExpressRoute to MVE.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 13:18:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/connecting-an-expressroute-circuit-to-megaport-virtual-edge/ba-p/4510770</guid>
      <dc:creator>Marc de Droog</dc:creator>
      <dc:date>2026-04-13T13:18:18Z</dc:date>
    </item>
    <item>
      <title>Announcing public preview: Cilium mTLS encryption for Azure Kubernetes Service</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/announcing-public-preview-cilium-mtls-encryption-for-azure/ba-p/4504423</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We are thrilled to announce the public preview of&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Cilium&amp;nbsp;mTLS&amp;nbsp;encryption in Azure Kubernetes Service (AKS)&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;, delivered as part of&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&lt;STRONG&gt;Advanced Container Networking Services&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;and powered by the&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure CNI&amp;nbsp;dataplane&amp;nbsp;built on Cilium&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This capability is the result of a close engineering collaboration between Microsoft and&amp;nbsp;Isovalent&amp;nbsp;(now part of Cisco). It brings transparent,&amp;nbsp;workload&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;level mutual TLS (mTLS) to AKS without sidecars, without application changes, and without introducing a separate service mesh stack.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This public preview&amp;nbsp;represents&amp;nbsp;a major step forward in delivering secure,&amp;nbsp;high&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;performance, and operationally simple networking for AKS customers. In this post,&amp;nbsp;we’ll&amp;nbsp;walk through how Cilium&amp;nbsp;mTLS&amp;nbsp;works, when to use it, and how to get started.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Why Cilium &lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;mTLS&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;e&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;ncryption&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;m&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;atters&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Traditionally, teams looking to encrypt traffic in transit in Kubernetes have had two primary options:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Node-level encryption&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;(for example,&amp;nbsp;WireGuard&amp;nbsp;or&amp;nbsp;virtual network encryption), which secures traffic in transit but lacks workload identity and authentication.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Service meshes&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;, which provide strong identity and&amp;nbsp;mTLS&amp;nbsp;guarantees but introduce operational complexity.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;T&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;his&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;trade&lt;/SPAN&gt;&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;off&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;has become increasingly problematic&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;, as m&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;any teams want&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;workload&lt;/SPAN&gt;&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;level&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;encryption and authentication&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;, but without the cost, overhead, and architectural impact of deploying and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;operating&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;full-service&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;mesh.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:2,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;Cilium&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;mTLS&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;closes this gap directly in the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;dataplane&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;. It delivers transparent, inline&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;mTLS&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;encryption and authentication for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;pod&lt;/SPAN&gt;&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;to&lt;/SPAN&gt;&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;pod&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;TCP traffic, enforced below the application layer.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;And&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;implemented natively in the Azure CNI&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;dataplane&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;built on Cilium,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;so&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;customers gain&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;workload&lt;/SPAN&gt;&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;level&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Normal (Web)"&gt;&amp;nbsp;security without introducing a separate service mesh, resulting in a simpler architecture with lower operational overhead.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To see how this works under the hood, the next section breaks down the Cilium mTLS architecture and follows a pod&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;to&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;pod&amp;nbsp;TCP flow from interception to authentication and encryption.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;Architecture and&amp;nbsp;design: How Cilium&amp;nbsp;mTLS&amp;nbsp;works&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Cilium&amp;nbsp;mTLS&amp;nbsp;achieves&amp;nbsp;workload&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;level&amp;nbsp;authentication and encryption by combining&amp;nbsp;three key&amp;nbsp;components, each responsible for a specific part of the&amp;nbsp;authentication and encryption&amp;nbsp;lifecycle.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;Cilium&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;a&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;gent&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;Transparent&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;t&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;raffic&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;i&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;nterception and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;w&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;iring&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The Cilium agent, which already exists on any cluster running Azure CNI powered by Cilium, is responsible for making mTLS invisible to applications. When a namespace is labelled with “io.cilium/mtls-enabled=true”, the Cilium agent enrolls all pods in that namespace. It enters each pod's network namespace and installs iptables rules that redirect outbound traffic to ztunnel on port 15001.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;It is also&amp;nbsp;responsible&amp;nbsp;for&amp;nbsp;passing&amp;nbsp;workload metadata (such as pod&amp;nbsp;IP&amp;nbsp;and namespace context) to&amp;nbsp;ztunnel.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;Z&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;tunnel&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;Node&lt;/SPAN&gt;&lt;/SPAN&gt;‑&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;l&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;evel&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;mTLS&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;e&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="Subtitle"&gt;nforcement&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Ztunnel is an open source,&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;lightweight,&amp;nbsp;node&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;level&amp;nbsp;Layer 4 proxy&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;originally created by&amp;nbsp;Istio.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Ztunnel runs as a DaemonSet. On the source node, it looks up the destination workload via XDS (streamed from the Cilium agent) and establishes mutually authenticated TLS 1.3 sessions between source and destination nodes. Connections are held inline until authentication is complete, ensuring that traffic is never sent in plaintext.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The destination&amp;nbsp;ztunnel&amp;nbsp;decrypts the traffic and delivers it into the target pod, bypassing the interception rules via an in-pod mark. The application sees a normal plaintext connection — it is completely unaware encryption happened.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN data-contrast="none"&gt;SPIRE: Workload&amp;nbsp;identity and&amp;nbsp;trust&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;SPIRE (SPIFFE Runtime Environment) provides the&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;identity foundation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for Cilium&amp;nbsp;mTLS.&amp;nbsp;SPIRE acts as the cluster Certificate Authority, issuing&amp;nbsp;short&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;lived&amp;nbsp;X.509 certificates (SVIDs)&amp;nbsp;that are automatically rotated and&amp;nbsp;validated.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This is a&amp;nbsp;key design principle of Cilium&amp;nbsp;mTLS&amp;nbsp;-&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;trust is based on workload identity, not network topology&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Each workload receives a cryptographic identity derived from:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Kubernetes namespace&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="5" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Kubernetes&amp;nbsp;ServiceAccount&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
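&lt;P&gt;The resulting identity follows the standard SPIFFE convention for Kubernetes workloads. A small sketch (the trust domain shown is an illustrative placeholder; the real value comes from the cluster's SPIRE configuration):&lt;/P&gt;

```python
def spiffe_id(namespace: str, service_account: str,
              trust_domain: str = "cluster.local") -> str:
    """Build the conventional SPIFFE ID for a Kubernetes workload.

    "cluster.local" is an assumed trust domain, not necessarily what
    SPIRE is configured with on a given AKS cluster.
    """
    return f"spiffe://{trust_domain}/ns/{namespace}/sa/{service_account}"


# Two pods of the same Deployment share this identity; a pod in another
# namespace or under another ServiceAccount gets a different one.
print(spiffe_id("payments", "checkout"))
```

&lt;P&gt;Note that the identity contains no IP address, which is why it remains stable across pod restarts and rescheduling.&lt;/P&gt;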
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;These identities are issued and rotated automatically by SPIRE and&amp;nbsp;validated&amp;nbsp;on both sides of every connection. As a result:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Identity&amp;nbsp;remains&amp;nbsp;stable across pod restarts and rescheduling&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Authentication is decoupled from IP addresses&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="6" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Trust decisions align naturally with Kubernetes RBAC and namespace boundaries&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This enables a&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;zero&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;trust&amp;nbsp;networking model&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;that fits cleanly into existing AKS security practices.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;End&lt;/SPAN&gt;‑&lt;SPAN data-contrast="none"&gt;to&lt;/SPAN&gt;‑&lt;SPAN data-contrast="none"&gt;End&amp;nbsp;workflow&amp;nbsp;example&lt;/SPAN&gt;&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To see how these components work together, consider a simple pod&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;to&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;pod&amp;nbsp;connection:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="23" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;A pod&amp;nbsp;initiates&amp;nbsp;a TCP connection to another pod.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="23" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Traffic is&amp;nbsp;intercepted inside&amp;nbsp;the&amp;nbsp;pod network&amp;nbsp;namespace&amp;nbsp;and redirected&amp;nbsp;to the local&amp;nbsp;ztunnel&amp;nbsp;instance.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="23" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;ztunnel&amp;nbsp;retrieves the workload identity using certificates issued by SPIRE.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="23" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;ztunnel&amp;nbsp;establishes a mutually authenticated TLS session with the destination node’s&amp;nbsp;ztunnel.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="23" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Traffic is encrypted and sent between&amp;nbsp;pods.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="" data-listid="23" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="6" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;The destination&amp;nbsp;ztunnel&amp;nbsp;decrypts the traffic and delivers it to the target pod.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Every packet from an enrolled pod is encrypted.&amp;nbsp;There is no plaintext window,&amp;nbsp;and&amp;nbsp;no dropped first packets. The connection is held inline by&amp;nbsp;ztunnel&amp;nbsp;until the&amp;nbsp;mTLS&amp;nbsp;tunnel is&amp;nbsp;established, then traffic flows bidirectionally through an HBONE (HTTP/2 CONNECT) tunnel.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
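&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To make step 4 concrete, the "mutually authenticated" requirement can be sketched with Python's standard ssl module. This is an illustrative sketch only: ztunnel is not implemented in Python, and the certificate and CA file parameters are hypothetical placeholders for the SPIRE-issued material.&lt;/SPAN&gt;&lt;/P&gt;

```python
import ssl

# Illustrative sketch only: ztunnel is not Python, and the file
# parameters are hypothetical stand-ins for SPIRE-issued certificates.
def make_mtls_context(server_side=False, ca_file=None, cert_file=None, key_file=None):
    # A server-side context authenticates clients; a client-side
    # context authenticates servers.
    purpose = ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH
    ctx = ssl.create_default_context(purpose, cafile=ca_file)
    if cert_file is not None:
        # Each side presents its own workload certificate to the peer.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    # "Mutual" means BOTH sides must present and verify a certificate.
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In the real data plane, this handshake happens between the source and destination ztunnel instances, and the resulting session carries traffic inside the HBONE (HTTP/2 CONNECT) tunnel described above.&lt;/SPAN&gt;&lt;/P&gt;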
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;Workload enrollment and scope&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Cilium&amp;nbsp;mTLS&amp;nbsp;in AKS is&amp;nbsp;opt&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;in&amp;nbsp;and scoped at the namespace level.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Platform teams enable&amp;nbsp;mTLS&amp;nbsp;by applying a single label to a namespace. From that point on:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;All pods in that namespace&amp;nbsp;participate&amp;nbsp;in&amp;nbsp;mTLS&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Authentication and encryption are mandatory between enrolled workloads&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="7" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Non-enrolled namespaces continue to&amp;nbsp;operate&amp;nbsp;unchanged&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Encryption is applied only when both pods are enrolled. Traffic between enrolled and non&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;enrolled workloads continues in plaintext without causing connectivity issues or hard failures.&amp;nbsp;This model enables&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;gradual rollout&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;, staged migrations, and low-risk adoption across environments.&lt;/SPAN&gt;&amp;nbsp;&lt;/P&gt;
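&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The enrollment rules above reduce to one decision: encrypt only when both sides are enrolled, otherwise fall back to plaintext. A minimal Python sketch of that decision follows; the label key and value shown are hypothetical placeholders, and the how-to guide documents the actual namespace label.&lt;/SPAN&gt;&lt;/P&gt;

```python
# Hypothetical label key for illustration only; see the AKS how-to
# guide for the actual namespace label used to enroll workloads.
ENROLL_LABEL = "example.io/cilium-mtls"

def is_enrolled(namespace_labels):
    """A namespace is enrolled when it carries the opt-in label."""
    return namespace_labels.get(ENROLL_LABEL) == "enabled"

def traffic_mode(src_labels, dst_labels):
    # mTLS applies only when BOTH namespaces are enrolled; mixed pairs
    # fall back to plaintext instead of failing the connection.
    if is_enrolled(src_labels) and is_enrolled(dst_labels):
        return "mtls"
    return "plaintext"
```

&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Because the fallback is plaintext rather than a hard failure, namespaces can be enrolled one at a time during a staged migration.&lt;/SPAN&gt;&lt;/P&gt;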
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Getting&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;s&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;tarted in AKS&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Cilium&amp;nbsp;mTLS&amp;nbsp;encryption is available in&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;public preview&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;for AKS clusters that use:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="11" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Azure CNI powered by Cilium&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="11" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Advanced Container Networking Services&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You can enable&amp;nbsp;mTLS:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="12" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;When creating a new cluster, or&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="12" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;On an existing cluster by updating the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Advanced Container Networking Services&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;configuration&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Once enabled, enrolling workloads is as simple as&amp;nbsp;labeling&amp;nbsp;a namespace.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;👉&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Learn more&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="13" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/container-network-security-cilium-mutual-tls-concepts" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Concepts:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&amp;nbsp;How Cilium mTLS works, architecture, and trust boundaries&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="13" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;STRONG&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/container-network-security-cilium-mutual-tls-how-to" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;How-to guide:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;&amp;nbsp;Step-by-step instructions to enable and verify mTLS in AKS&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Looking &lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;a&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;head&lt;/SPAN&gt;&lt;/SPAN&gt;&amp;nbsp;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This public preview&amp;nbsp;represents&amp;nbsp;an important step&amp;nbsp;forward in simplifying network security for&amp;nbsp;AKS and&amp;nbsp;reflects a deep collaboration between Microsoft and&amp;nbsp;Isovalent&amp;nbsp;to bring open,&amp;nbsp;standards&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;based&amp;nbsp;innovation into&amp;nbsp;production&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;ready&amp;nbsp;cloud platforms.&amp;nbsp;We’re&amp;nbsp;continuing to work closely with the community to improve the feature and move it toward&amp;nbsp;general&amp;nbsp;availability.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;If&amp;nbsp;you’re&amp;nbsp;looking for&amp;nbsp;workload&lt;/SPAN&gt;‑&lt;SPAN data-contrast="auto"&gt;level&amp;nbsp;encryption without the overhead of a traditional service mesh, we invite you to try Cilium&amp;nbsp;mTLS&amp;nbsp;in AKS and share your experience.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 23 Mar 2026 01:50:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/announcing-public-preview-cilium-mtls-encryption-for-azure/ba-p/4504423</guid>
      <dc:creator>chandanAggarwal</dc:creator>
      <dc:date>2026-03-23T01:50:18Z</dc:date>
    </item>
    <item>
      <title>Azure Front Door: Resiliency Series – Part 2: Faster recovery (RTO)</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-front-door-resiliency-series-part-2-faster-recovery-rto/ba-p/4503091</link>
      <description>&lt;P&gt;In &lt;A href="https://aka.ms/AzureFrontDoor/Resiliency-Part1" target="_blank" rel="noopener"&gt;Part 1&lt;/A&gt; of this blog series, we outlined our four‑pillar strategy for resiliency in Azure Front Door: configuration resiliency, data plane resiliency, tenant isolation, and accelerated Recovery Time Objective (RTO). Together, these pillars help Azure Front Door remain continuously available and resilient at global scale.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://aka.ms/AzureFrontDoor/Resiliency-Part1" target="_blank" rel="noopener"&gt;Part 1&lt;/A&gt; focused on the first two pillars: configuration and data plane resiliency. Our goal is to make configuration propagation safer, so incompatible changes never escape pre‑production environments. We discussed how incompatible configurations are blocked early, and how data plane resiliency ensures the system continues serving traffic from a last‑known‑good (LKG) configuration even if a bad change manages to propagate. We also introduced ‘Food Taster’, a dedicated sacrificial process running in each edge server’s data plane, that pretests every configuration change in isolation, before it ever reaches the live data plane.&lt;/P&gt;
&lt;P&gt;In this post, we turn to the recovery pillar. We describe how we have made key enhancements to the Azure Front Door recovery path so the system can return to full operation in a predictable and bounded timeframe. For a global service like Azure Front Door, serving hundreds of thousands of tenants across 210+ edge sites worldwide, we set an explicit target: to be able to recover any edge site – or all edge sites – within approximately 10 minutes, even in worst‑case scenarios. In typical data plane crash scenarios, we expect recovery in under a second.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Repair status &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The first blog post in this series mentioned the two Azure Front Door incidents from October 2025 – learn more by watching our Azure Incident Retrospective session recordings for the &lt;A href="https://aka.ms/AIR/QNBQ-5W8" target="_blank" rel="noopener"&gt;October 9&lt;SUP&gt;th&lt;/SUP&gt; incident&lt;/A&gt; and/or the &lt;A href="https://aka.ms/AIR/YKYN-BWZ" target="_blank" rel="noopener"&gt;October 29&lt;SUP&gt;th&lt;/SUP&gt; incident&lt;/A&gt;. Before diving into our platform investments for improving our Recovery Time Objectives (RTO), we wanted to provide a quick update on the &lt;STRONG&gt;overall repair items&lt;/STRONG&gt; from these incidents. We are pleased to report that the work on configuration propagation and data plane resiliency is now complete and fully deployed across the platform (in the table below, “Completed” means broadly deployed in production). With this, we have reduced configuration propagation latency from &lt;STRONG&gt;~45 minutes to ~20 minutes&lt;/STRONG&gt;. We anticipate reducing this even further – to ~15 minutes by the end of April 2026, while ensuring that platform stability remains our top priority.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Learning category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Goal&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Repairs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Status&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Safe customer configuration deployment&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Incompatible configuration never propagates beyond ‘EUAP or canary regions’&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Control plane and data plane defect fixes&lt;/P&gt;
&lt;P&gt;Forced synchronous configuration processing&lt;/P&gt;
&lt;P&gt;Additional stages with extended bake time&lt;/P&gt;
&lt;P&gt;Early detection of crash state&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;Data plane resiliency&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Configuration processing cannot impact data plane availability&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Manage data-plane lifecycle to prevent outages caused by configuration-processing defects.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Isolated work-process in every data plane server to process and load the configuration.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;100% Azure Front Door resiliency posture for Microsoft internal services&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Microsoft operates an isolated, independent Active/Active fleet with automatic failover for critical Azure services&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Phase 1:&amp;nbsp;&lt;/STRONG&gt;Onboarded the batch of critical services impacted by the Oct 29&lt;SUP&gt;th&lt;/SUP&gt;&amp;nbsp;outage that had been running on a day-old configuration&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Phase 2:&lt;/STRONG&gt;&amp;nbsp;Automation &amp;amp; hardening of operations, auto-failover and self-management of Azure Front Door onboarding for additional services&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;March &lt;/STRONG&gt;&lt;STRONG&gt;2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;Recovery improvements&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Data plane crash recovery in under 10 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Data plane boot-up time optimized via local cache (~1 hour)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Accelerate recovery time &amp;lt; 10 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;April &lt;/STRONG&gt;&lt;STRONG&gt;2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Tenant isolation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No configuration or traffic regression can impact other tenants&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Micro cellular Azure Front Door with ingress layered shards&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;June &lt;/STRONG&gt;&lt;STRONG&gt;2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.0154%" /&gt;&lt;col style="width: 25.0154%" /&gt;&lt;col style="width: 39.4674%" /&gt;&lt;col style="width: 10.4709%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Why recovery at edge scale is deceptively hard&lt;/H2&gt;
&lt;P&gt;To understand why recovery took as long as it did, it helps to first understand how the Azure Front Door data plane processes configuration.&lt;/P&gt;
&lt;P&gt;Azure Front Door operates in 210+ edge sites with multiple servers per site. The data plane of each edge server hosts multiple processes. A &lt;STRONG&gt;master process&lt;/STRONG&gt; orchestrates the lifecycle of multiple &lt;STRONG&gt;worker processes&lt;/STRONG&gt; that serve customer traffic. A separate &lt;STRONG&gt;configuration translator&lt;/STRONG&gt; process runs alongside the data plane processes and is responsible for converting customer configuration bundles from the control plane into optimized binary &lt;STRONG&gt;FlatBuffer&lt;/STRONG&gt; files. This translation step, covering hundreds of thousands of tenants, represents hours of cumulative computation. A configuration cache is kept locally on each edge server to enable fast recovery of the data plane when needed.&lt;/P&gt;
&lt;P&gt;Once the configuration translator process produces these FlatBuffer files, each worker loads them independently and &lt;STRONG&gt;memory-maps&lt;/STRONG&gt; them for zero-copy access. Configuration updates flow through a &lt;STRONG&gt;two-phase commit&lt;/STRONG&gt;: new FlatBuffers are first loaded into a staging area and validated, then atomically swapped into production maps. In-flight requests continue using the old FlatBuffers until the last request referencing them completes.&lt;/P&gt;
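&lt;P&gt;The two-phase commit described above can be sketched in miniature: a candidate configuration is validated in a staging step, then published atomically, while requests already in flight keep the snapshot they started with. This is a conceptual Python sketch, not the production FlatBuffer implementation:&lt;/P&gt;

```python
import threading

class ConfigStore:
    """Conceptual two-phase config update: validate in staging, then
    atomically publish. Requests already in flight keep the snapshot
    they started with."""

    def __init__(self, initial):
        self._lock = threading.Lock()
        self._live = initial

    def update(self, candidate, validate):
        staged = candidate              # phase 1: stage the candidate
        if not validate(staged):        # reject before it can serve traffic
            raise ValueError("configuration rejected in staging")
        with self._lock:                # phase 2: atomic swap into production
            self._live = staged

    def snapshot(self):
        # Readers take a reference to the current config; a later swap
        # does not affect a request already holding the old snapshot.
        return self._live
```

&lt;P&gt;A rejected candidate never reaches the live map, and a successful swap is invisible to requests that began under the previous configuration.&lt;/P&gt;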
&lt;P&gt;Data plane recovery is designed to be resilient to different failure modes. A failure or crash at the worker-process level has a typical recovery time of less than one second; because each server runs multiple worker processes serving customer traffic, this type of crash has no impact on the data plane. In the case of a master-process crash, the system automatically tries to recover using the local cache. When the local cache can be reused, the system recovers relatively quickly – in approximately 60 minutes – since most of the configurations in the cache were already loaded into the data plane before the crash. However, if the cache becomes unavailable or must be invalidated because of corruption, the recovery time increases significantly.&lt;/P&gt;
&lt;P&gt;During the October 29&lt;SUP&gt;th&lt;/SUP&gt; incident, a data plane crash triggered a complete recovery sequence that took approximately 4.5 hours. This was not because restarting a process is slow; it was because a defect in the recovery process invalidated the local cache, which meant that “restart” meant &lt;EM&gt;rebuilding everything from scratch.&lt;/EM&gt; The configuration translator process then had to re-fetch and re-translate every one of the hundreds of thousands of customer configurations before workers could memory-map them and begin serving traffic.&lt;/P&gt;
&lt;P&gt;This experience has crystallized three fundamental learnings related to our recovery path:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Expensive rework:&lt;/STRONG&gt; A subset of crashes discarded all previously translated FlatBuffer artifacts, forcing the configuration translator process to repeat hours of conversion work that had already been validated and stored.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;High restart costs:&lt;/STRONG&gt; Every worker on every node had to wait for the configuration translator process to complete the full translation, before it could memory-map any configuration and begin serving requests.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Unbounded recovery time:&lt;/STRONG&gt; Recovery time grew linearly with total tenant footprint rather than with active traffic, creating a ‘scale penalty’ as more tenants onboarded to the system.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Separately and together, the insight was clear: &lt;STRONG&gt;recovery must stop being proportional to the total configuration size.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;Persisting ‘validated configurations’ across restarts&lt;/H2&gt;
&lt;P&gt;One of the key recovery improvements was strengthening how validated customer configurations are cached and reused across failures, rather than rebuilding configuration states from scratch during recovery. Azure Front Door already cached customer configurations on host‑mounted storage prior to the October incident. The platform enhancements post outage focused on making the local configuration cache resilient to crashes, partial failures, and bad tenant inputs.&lt;/P&gt;
&lt;P&gt;Our goal was to ensure that recovery behavior is dominated by &lt;EM&gt;serving traffic safely&lt;/EM&gt;, not by reconstructing configuration state. This led us to two explicit design goals:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Design goals&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;No category of crash should invalidate the configuration cache&lt;/STRONG&gt;: Configuration cache invalidation must never be the default response to failures. Whether the failure is a worker crash, master crash, data plane restart, or coordinated recovery action, previously validated customer configurations should remain usable—unless there is a proven reason to discard it.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Bad tenant configuration must not poison the entire cache: &lt;/STRONG&gt;A single faulty or incompatible tenant configuration should result in &lt;EM&gt;targeted eviction&lt;/EM&gt; of that tenant’s configuration only—not wholesale cache invalidation across all tenants.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Platform enhancements&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Previously, customer configurations persisted to host‑mounted storage, but &lt;STRONG&gt;certain failure paths treated the cache as unsafe and invalidated it entirely&lt;/STRONG&gt;. In those cases, recovery implicitly meant reloading and reprocessing configuration for hundreds of thousands of tenants before traffic could resume, even though the vast majority of cached data was still valid.&lt;/P&gt;
&lt;P&gt;We changed the recovery model to &lt;STRONG&gt;avoid&lt;/STRONG&gt; &lt;STRONG&gt;invalidating customer configurations&lt;/STRONG&gt;, with strict scoping around when and how cached entries are discarded:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Cached configurations are no longer invalidated based on crash &lt;EM&gt;type&lt;/EM&gt;. Failures are assumed to be orthogonal to configuration correctness unless explicitly proven otherwise.&lt;/LI&gt;
&lt;LI&gt;Cache eviction is &lt;STRONG&gt;granular and tenant‑scoped&lt;/STRONG&gt;. If a cached configuration fails validation or load checks, &lt;EM&gt;only that tenant’s configuration&lt;/EM&gt; is discarded and reloaded. All other tenant configurations remain available.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This ensures that recovery does not regress into a fleet‑wide rebuild due to localized or unrelated faults.&lt;/P&gt;
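&lt;P&gt;In miniature, the tenant-scoped eviction rule looks like the following conceptual Python sketch, where the caller-supplied validate function stands in for the real load-and-verification phase:&lt;/P&gt;

```python
def promote_cache(cached_configs, validate):
    """Promote each cached tenant config independently: a tenant that
    fails validation is evicted alone and queued for re-translation,
    while every other tenant's cached entry survives."""
    live, evicted = {}, []
    for tenant, config in cached_configs.items():
        if validate(config):
            live[tenant] = config
        else:
            evicted.append(tenant)  # targeted eviction, never cache-wide
    return live, evicted
```

&lt;P&gt;One bad entry therefore costs one tenant a reload, not a fleet-wide rebuild.&lt;/P&gt;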
&lt;P&gt;&lt;STRONG&gt;Safety and correctness&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Durability is paired with strong correctness controls, to prevent unsafe configurations from being served:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Per‑tenant validation on load:&lt;/STRONG&gt; Each cached tenant configuration is validated during the ‘load and verification’ phase, before being promoted for traffic serving. Therefore, failures are contained to that tenant.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Targeted re‑translation:&lt;/STRONG&gt; When validation fails, only the affected tenant’s configuration is reloaded or reprocessed. Therefore, the cache for other tenants is left untouched.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Operational escape hatch:&lt;/STRONG&gt; Operators retain the ability to explicitly instruct a clean rebuild of the configuration cache (with proper authorization), preserving control without compromising the default fast‑recovery path.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Resulting behavior&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With these changes, recovery behavior now aligns with real‑world traffic patterns - configuration defects impact tenants &lt;EM&gt;locally and predictably&lt;/EM&gt;, rather than globally. The system now prefers &lt;STRONG&gt;isolated tenant impact&lt;/STRONG&gt;, and &lt;STRONG&gt;continued service using last-known-good&lt;/STRONG&gt; over aggressive invalidation, both of which are critical for predictable recovery at the scale of Azure Front Door.&lt;/P&gt;
&lt;H2&gt;Making recovery scale with active traffic, not total tenants&lt;/H2&gt;
&lt;P&gt;Reusing configuration cache solves the problem of &lt;EM&gt;rebuilding&lt;/EM&gt; configuration in its entirety, but even with a warm cache, the original startup path had a second bottleneck: &lt;STRONG&gt;eagerly loading a large volume of tenant configurations into memory before serving any traffic.&lt;/STRONG&gt; At our scale, memory-mapping and parsing hundreds of thousands of FlatBuffers, constructing internal lookup maps, and adding Transport Layer Security (TLS) certificates and configuration blocks for each tenant collectively added almost an hour to startup time – even when the majority of those tenants had no active traffic at that moment.&lt;/P&gt;
&lt;P&gt;We addressed this by fundamentally changing &lt;EM&gt;when&lt;/EM&gt; configuration is loaded into workers. Rather than eagerly loading most of the tenants at startup across all edge locations, Azure Front Door now uses a Machine Learning (ML)-optimized &lt;STRONG&gt;lazy loading&lt;/STRONG&gt; model.&lt;/P&gt;
&lt;P&gt;In the new architecture, instead of loading a large number of tenant configurations, we load only a small subset of tenants that are known to be historically active in a given site; we call this the “warm tenants” list. The warm tenants list per edge site is created through a sophisticated traffic analysis pipeline that leverages ML. However, loading the warm tenants is not sufficient on its own, because when a request arrives and we don’t have the configuration in memory, we need to answer two questions. First, is this a request for a real Azure Front Door tenant? And if it is, where can its configuration be found?&lt;/P&gt;
&lt;P&gt;To answer these questions, each worker maintains a &lt;STRONG&gt;hostmap&lt;/STRONG&gt; that tracks the state of each tenant’s configuration. The hostmap is constructed during startup as we process each tenant configuration: if the tenant is in the warm list, we process and load its configuration fully; if not, we simply add an entry to the hostmap that maps all of its domain names to the configuration path location. When a request arrives for one of these tenants, the worker loads and validates that tenant’s configuration on demand and immediately begins serving traffic. This allows a node to start serving its busiest tenants within a few minutes of startup, while additional tenants are loaded incrementally only when traffic actually arrives, letting the system progressively absorb cold tenants as demand increases.&lt;/P&gt;
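&lt;P&gt;The warm-list-plus-hostmap behavior described above can be sketched roughly as follows. This is an illustrative model only, not Azure Front Door code; all names here (HostMap, load_config, and so on) are hypothetical.&lt;/P&gt;

```python
# Illustrative sketch of warm-list startup plus lazy, on-demand config loading.
# All names are hypothetical; this is not Azure Front Door code.

LOADED, ON_DISK = "loaded", "on_disk"

class HostMap:
    def __init__(self, tenants, warm_list, load_config):
        self._load = load_config          # loads and validates one tenant config
        self._entries = {}                # maps domain name to [state, payload]
        for tenant in tenants:
            if tenant["name"] in warm_list:
                # Warm tenant: pay the full load cost at startup.
                state, payload = LOADED, load_config(tenant["path"])
            else:
                # Cold tenant: record only where the config lives on disk.
                state, payload = ON_DISK, tenant["path"]
            for domain in tenant["domains"]:
                self._entries[domain] = [state, payload]

    def resolve(self, domain):
        entry = self._entries.get(domain)
        if entry is None:
            return None                   # not a real tenant: reject the request
        if entry[0] == ON_DISK:           # cold tenant: load on first request
            entry[0], entry[1] = LOADED, self._load(entry[1])
        return entry[1]                   # config is in memory, ready to serve
```

Only warm tenants pay the load cost at startup; every other tenant is just a hostmap entry until its first request, so startup time scales with the warm list rather than the total tenant count.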
&lt;P&gt;The effect on recovery is transformative. Instead of recovery time scaling with the total number of tenants configured on a server, it scales with the number of tenants &lt;EM&gt;actively receiving traffic&lt;/EM&gt;. In practice, even at our busiest edge sites, the active tenant set is a small fraction of the total.&lt;/P&gt;
&lt;P&gt;Just as importantly, this modified form of lazy loading provides a natural &lt;STRONG&gt;failure isolation boundary&lt;/STRONG&gt;. Most edge sites won’t ever load a faulty configuration of an inactive tenant. When a request for an inactive tenant &lt;EM&gt;with an incompatible configuration&lt;/EM&gt; arrives, impact is contained to a single worker.&lt;/P&gt;
&lt;P&gt;The configuration load architecture now prefers serving &lt;EM&gt;as many&lt;/EM&gt; customers &lt;EM&gt;as quickly&lt;/EM&gt; as possible, rather than waiting until &lt;EM&gt;everything&lt;/EM&gt; is ready before serving &lt;EM&gt;anyone&lt;/EM&gt;. The above changes are slated to complete in April 2026 and will bring our recovery time objective (RTO) from the current ~1 hour to under 10 minutes for complete recovery from a worst-case scenario.&lt;/P&gt;
&lt;H3&gt;Continuous validation through Game Days&lt;/H3&gt;
&lt;P&gt;A critical element of our recovery confidence comes from &lt;STRONG&gt;GameDay fault-injection testing&lt;/STRONG&gt;. We don’t simply design recovery mechanisms and assume they work—we break the system deliberately and observe how it responds. Since late 2025, we have conducted recurring GameDay drills that simulate the exact failure scenarios we are defending against:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Food Taster crash scenarios:&lt;/STRONG&gt; Injecting deliberately faulty tenant configurations, to verify that they are caught and isolated with zero impact on live traffic. In our January 2026 GameDay, the Food Taster process crashed as expected, the system halted the update within approximately 5 seconds, and no customer traffic was affected.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Master process crash scenarios:&lt;/STRONG&gt; Triggering master process crashes across test environments to verify that workers continue serving traffic, that the Local Config Shield engages within 10 seconds, and that the coordinated recovery tool restores full operation within the expected timeframe.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-region failure drills:&lt;/STRONG&gt; Simulating simultaneous failures across multiple regions to validate that global Config Shield mechanisms engage correctly, and that recovery procedures scale without requiring manual per-region intervention.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fallback test drills for critical Azure services running behind Azure Front Door:&lt;/STRONG&gt; In our February 2026 GameDay, we simulated the complete unavailability of Azure Front Door, and successfully validated failover for critical Azure services with no impact to traffic.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These drills have both surfaced corner cases and built operational confidence. They have transformed recovery from a theoretical plan into tested, repeatable muscle memory. As we noted in an internal communication to our team: &lt;EM&gt;“Game day testing is a deliberate shift from assuming resilience to actively proving it—turning reliability into an observed and repeatable outcome.”&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;Closing&lt;/H2&gt;
&lt;P&gt;Part 1 of this series emphasized preventing unsafe configurations from reaching the data plane, and data plane resiliency in case an incompatible configuration reaches production. This post has shown that prevention alone is not enough—when failures do occur, recovery must be &lt;STRONG&gt;fast, predictable, and bounded&lt;/STRONG&gt;. By ensuring that the FlatBuffer cache is never invalidated, by loading only active tenants, and by building safe coordinated recovery tooling, we have transformed failure handling from a fleet-wide crisis into a controlled operation.&lt;/P&gt;
&lt;P&gt;These recovery investments work in concert with the prevention mechanisms described in Part 1. Together, they ensure that the path from incident detection to full service restoration is measured in minutes, with customer traffic protected at every step.&lt;/P&gt;
&lt;P&gt;In the next post of this series, we will cover the third pillar of our resiliency strategy: &lt;STRONG&gt;tenant isolation&lt;/STRONG&gt;—how micro-cellular architecture and ingress-layered sharding can reduce the blast radius of any failure to a small subset of tenants, ensuring that one customer’s configuration or traffic anomaly never becomes everyone’s problem.&lt;/P&gt;
&lt;P&gt;We deeply value our customers’ trust in Azure Front Door. We are committed to transparently sharing our progress on these resiliency investments, and to exceed expectations for safety, reliability, and operational readiness.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2026 17:00:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-front-door-resiliency-series-part-2-faster-recovery-rto/ba-p/4503091</guid>
      <dc:creator>AbhishekTiwari</dc:creator>
      <dc:date>2026-03-19T17:00:54Z</dc:date>
    </item>
    <item>
      <title>ExpressRoute Gateway Microsoft initiated migration</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/expressroute-gateway-microsoft-initiated-migration/ba-p/4497689</link>
      <description>&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-color-8" border="3" style="width: 100%; border-width: 3px;"&gt;&lt;colgroup&gt;&lt;col style="width: 99.9074%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-border-color-8" style="border-width: 3px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Important: &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-teams="true"&gt;Microsoft initiated Gateway migrations are temporarily paused. You will be notified when migrations resume.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H1&gt;Objective&lt;/H1&gt;
&lt;P&gt;The backend migration process is an automated upgrade performed by Microsoft to ensure your ExpressRoute gateways use the Standard IP SKU. This migration enhances gateway reliability and availability while maintaining service continuity. You receive notifications about scheduled maintenance windows and have options to control the migration timeline. For guidance on upgrading Basic SKU public IP addresses for other networking services, see &lt;A href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-basic-upgrade-guidance#steps-to-complete-the-upgrade" target="_blank" rel="noopener"&gt;Upgrading Basic to Standard SKU&lt;/A&gt;.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;STRONG&gt;Important: &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;As of September 30, 2025, Basic SKU public IPs are retired. For more information, see the &lt;A href="https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;official announcement&lt;/STRONG&gt;&lt;/A&gt;.&lt;BR /&gt;You can initiate the ExpressRoute gateway migration yourself at a time that best suits your business needs, before the Microsoft team performs the migration on your behalf. This gives you control over the migration timing.&amp;nbsp;Please use the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/expressroute/gateway-migration" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;ExpressRoute Gateway&amp;nbsp;Migration Tool&lt;/STRONG&gt;&amp;nbsp;&lt;/A&gt;to migrate your gateway Public IP to Standard SKU. This tool provides a guided workflow in the Azure portal and PowerShell, enabling a smooth migration with minimal service disruption.&amp;nbsp;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Backend migration overview&lt;/H2&gt;
&lt;P&gt;The backend migration is scheduled during your preferred maintenance window. During this time, the Microsoft team performs the migration with minimal disruption. You don’t need to take any actions. The process includes the following steps:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Deploy new gateway:&lt;/STRONG&gt; Azure provisions a second virtual network gateway in the same &lt;EM&gt;GatewaySubnet &lt;/EM&gt;alongside your existing gateway. Microsoft automatically assigns a new Standard SKU public IP address to this gateway.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Transfer configuration:&lt;/STRONG&gt; The process copies all existing configurations (connections, settings, routes) from the old gateway. Both gateways run in parallel during the transition to minimize downtime. You may experience brief connectivity interruptions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Clean up resources:&lt;/STRONG&gt; After migration completes successfully and passes validation, Azure removes the old gateway and its associated connections. The new gateway includes a tag &lt;STRONG&gt;CreatedBy: GatewayMigrationByService&lt;/STRONG&gt; to indicate it was created through the automated backend migration.&lt;/LI&gt;
&lt;/OL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-20"&gt;Important:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;To ensure a smooth backend migration, avoid making non-critical changes to your gateway resources or connected circuits during the migration process. If modifications are absolutely required, you can choose (after the Migrate stage completes) to either commit or abort the migration and then make your changes.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Backend process details&lt;/H2&gt;
&lt;P&gt;This section provides an overview of the Azure portal experience during backend migration for an existing ExpressRoute gateway. It explains what to expect at each stage and what you see in the Azure portal as the migration progresses. To reduce risk and ensure service continuity, the process performs validation checks before and after every phase.&lt;/P&gt;
&lt;P&gt;The backend migration follows four key stages:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Validate&lt;/STRONG&gt;: Checks that your gateway and connected resources meet all migration requirements for the Basic to Standard public IP migration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Prepare:&lt;/STRONG&gt; Deploys the new gateway with Standard IP SKU alongside your existing gateway.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Migrate&lt;/STRONG&gt;: Cuts over traffic from the old gateway to the new gateway with a Standard public IP.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Commit or abort&lt;/STRONG&gt;: Finalizes the public IP SKU migration by removing the old gateway or reverts to the old gateway if needed.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;These stages mirror the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/expressroute/gateway-migration" target="_blank" rel="noopener"&gt;Gateway migration &lt;/A&gt;tool process, ensuring consistency across both migration approaches.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The Azure resource group &lt;STRONG&gt;&lt;EM&gt;RGA&lt;/EM&gt;&lt;/STRONG&gt; serves as a logical container that displays all associated resources as the process updates, creates, or removes them. Before the migration begins, &lt;EM&gt;RGA&lt;/EM&gt; contains the following resources:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H6&gt;&lt;EM&gt;This image uses an example ExpressRoute gateway named&amp;nbsp;&lt;STRONG&gt;ERGW-A&lt;/STRONG&gt; with two connections (&lt;STRONG&gt;Conn-A&lt;/STRONG&gt; and &lt;STRONG&gt;LAconn&lt;/STRONG&gt;) in the resource group &lt;STRONG&gt;RGA&lt;/STRONG&gt;.&lt;/EM&gt;&lt;/H6&gt;
&lt;H2&gt;Portal walkthrough&lt;/H2&gt;
&lt;P&gt;Before the backend migration starts, a banner appears in the &lt;STRONG&gt;Overview&lt;/STRONG&gt; blade of the ExpressRoute gateway. It notifies you that the gateway uses the deprecated Basic IP SKU and will undergo backend migration between March 7, 2026, and April 30, 2026:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Validate stage&lt;/H3&gt;
&lt;P&gt;Once you start the migration, the banner in your gateway’s &lt;STRONG&gt;Overview &lt;/STRONG&gt;page updates to indicate that migration is currently in progress.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this initial stage, all resources are checked to ensure they are in a Passed state. If any prerequisites aren't met, validation fails and the Azure team doesn't proceed with the migration to avoid traffic disruptions. No resources are created or modified in this stage.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After the validation phase completes successfully, a notification appears indicating that validation passed and the migration can proceed to the Prepare stage.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Prepare stage&lt;/H3&gt;
&lt;P&gt;In this stage, the backend process provisions a new virtual network gateway in the same region and with the same gateway SKU as the existing gateway. Azure automatically assigns a new public IP address and re-establishes all connections. This preparation step typically takes up to 45 minutes.&lt;/P&gt;
&lt;P&gt;To indicate that the new gateway is created by migration, the backend mechanism appends &lt;STRONG&gt;_migrate&lt;/STRONG&gt; to the original gateway name. During this phase, the existing gateway is locked to prevent configuration changes, but you retain the option to abort the migration, which deletes the newly created gateway and its connections.&lt;/P&gt;
&lt;P&gt;After the Prepare stage starts, a notification appears showing that new resources are being deployed to the resource group:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&lt;BR /&gt;Deployment status&lt;/H4&gt;
&lt;P&gt;In the resource group &lt;STRONG&gt;&lt;EM&gt;RGA&lt;/EM&gt;&lt;/STRONG&gt;, under &lt;STRONG&gt;Settings &lt;/STRONG&gt;&lt;STRONG&gt;→&lt;/STRONG&gt;&lt;STRONG&gt; Deployments&lt;/STRONG&gt;, you can view the status of all newly deployed resources as part of the backend migration process.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the resource group &lt;STRONG&gt;&lt;EM&gt;RGA&lt;/EM&gt;&lt;/STRONG&gt; under the &lt;STRONG&gt;Activity Log&lt;/STRONG&gt; blade, you can see events related to the Prepare stage. These events are initiated by &lt;STRONG&gt;GatewayRP&lt;/STRONG&gt;, which indicates they are part of the backend process:&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;Deployment verification&lt;/H4&gt;
&lt;P&gt;After the Prepare stage completes, you can verify the deployment details in the resource group &lt;STRONG&gt;RGA&lt;/STRONG&gt; under &lt;STRONG&gt;Settings &amp;gt; Deployments&lt;/STRONG&gt;. This section lists all components created as part of the backend migration workflow.&lt;/P&gt;
&lt;P&gt;The new gateway &lt;STRONG&gt;ERGW-A_migrate&lt;/STRONG&gt; is deployed successfully along with its corresponding connections: &lt;STRONG&gt;Conn-A_migrate&lt;/STRONG&gt; and &lt;STRONG&gt;LAconn_migrate&lt;/STRONG&gt;.&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4&gt;Gateway tag&lt;/H4&gt;
&lt;P&gt;The newly created gateway &lt;STRONG&gt;ERGW-A_migrate&lt;/STRONG&gt; includes the tag &lt;STRONG&gt;CreatedBy: GatewayMigrationByService&lt;/STRONG&gt;, which indicates it was provisioned by the backend migration process.&lt;/P&gt;
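&lt;P&gt;If you manage many subscriptions, one way to locate gateways provisioned by the backend migration is to query that tag with Azure Resource Graph. The following is a sketch to run in Azure Resource Graph Explorer; adjust it to your environment as needed.&lt;/P&gt;

```sql
// Find virtual network gateways created by the automated backend migration
resources
| where type =~ "microsoft.network/virtualnetworkgateways"
| where tags["CreatedBy"] == "GatewayMigrationByService"
| project name, resourceGroup, location
```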
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Migrate stage&lt;/H3&gt;
&lt;P&gt;After the Prepare stage finishes, the backend process starts the Migrate stage. During this stage, the process switches traffic from the existing gateway &lt;STRONG&gt;ERGW-A&lt;/STRONG&gt; to the new gateway &lt;STRONG&gt;ERGW-A_migrate&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Gateway ERGW-A_migrate:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Old gateway &lt;EM&gt;(ERGW-A)&lt;/EM&gt; handles traffic:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After the backend team initiates the traffic migration, the process switches traffic from the old gateway to the new gateway. This step can take up to 15 minutes and might cause brief connectivity interruptions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;New gateway &lt;EM&gt;(ERGW-A_migrate)&lt;/EM&gt; handles traffic:&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;Commit stage&lt;/H3&gt;
&lt;P&gt;After migration, the Azure team monitors connectivity for &lt;STRONG&gt;15 days&lt;/STRONG&gt; to ensure everything is functioning as expected. The banner automatically updates to indicate completion of migration:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;During this validation period, you &lt;STRONG&gt;can’t&lt;/STRONG&gt; modify resources associated with either the old or the new gateway. To resume normal CRUD operations without waiting 15 days, you have two options:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Commit&lt;/STRONG&gt;: Finalize the migration and unlock resources.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Abort&lt;/STRONG&gt;: Revert to the old gateway, which deletes the new gateway and its connections.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To initiate &lt;STRONG&gt;Commit&lt;/STRONG&gt; before the 15-day window ends, type &lt;STRONG&gt;yes&lt;/STRONG&gt; and select&lt;STRONG&gt; Commit&lt;/STRONG&gt; in the portal.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When the commit is initiated from the backend, you will see &lt;EM&gt;“Committing migration. The operation may take some time to complete.”&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The old gateway and its connections are deleted. The event shows as initiated by &lt;STRONG&gt;GatewayRP &lt;/STRONG&gt;in the activity logs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;img /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 100.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After the old connections are deleted, the old gateway itself is deleted.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, the resource group &lt;EM&gt;RGA&lt;/EM&gt; contains only resources related to the migrated gateway&lt;BR /&gt;&lt;EM&gt;ERGW-A_migrate:&lt;/EM&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The ExpressRoute Gateway migration from Basic to Standard Public IP SKU is now complete.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;Frequently asked questions&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;How long will the Microsoft team wait before committing to the new gateway?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The Microsoft team waits around 15 days after migration to allow you time to validate connectivity and ensure all requirements are met. You can commit at any time during this 15-day period.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What is the traffic impact during migration? Is there packet loss or routing disruption?&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Traffic is rerouted seamlessly during migration. Under normal conditions, no packet loss or routing disruption is expected. Brief connectivity interruptions (typically less than 1 minute) might occur during the traffic cutover phase.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Can we make any changes to ExpressRoute Gateway deployment during the migration?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Avoid making non-critical changes to the deployment (gateway resources, connected circuits, etc.). If modifications are absolutely required, you have the option (after the Migrate stage) to either commit or abort the migration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
      <pubDate>Mon, 30 Mar 2026 18:20:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/expressroute-gateway-microsoft-initiated-migration/ba-p/4497689</guid>
      <dc:creator>MekaylaMoore</dc:creator>
      <dc:date>2026-03-30T18:20:25Z</dc:date>
    </item>
    <item>
      <title>Unlock outbound traffic insights with Azure StandardV2 NAT Gateway flow logs</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/unlock-outbound-traffic-insights-with-azure-standardv2-nat/ba-p/4493138</link>
      <description>&lt;H2&gt;Recommended Outbound Connectivity&lt;/H2&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-overview#standardv2-nat-gateway" target="_blank"&gt;StandardV2 NAT Gateway&lt;/A&gt; is the next evolution of outbound connectivity in Azure. As the recommended solution for providing secure, reliable outbound Internet access, NAT Gateway continues to be the default choice for modern Azure deployments. With the highly anticipated general availability of the new StandardV2 SKU, customers gain access to the following highly requested upgrades:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Zone-redundancy: Automatically maintains outbound connectivity during single‑zone failures in AZ-enabled regions.&lt;/LI&gt;
&lt;LI&gt;Enhanced performance: Up to 100 Gbps of throughput and 10 million packets per second - double the Standard SKU capacity.&lt;/LI&gt;
&lt;LI&gt;Dual-stack support: Attach up to 16 IPv6 and 16 IPv4 public IP addresses for future ready connectivity.&lt;/LI&gt;
&lt;LI&gt;Flow logs: Access historical logs of connections being established through your NAT gateway.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This blog focuses on how enabling StandardV2 NAT Gateway flow logs can benefit your team, along with some tips to get the most out of the data.&lt;/P&gt;
&lt;H2&gt;What are flow logs?&lt;/H2&gt;
&lt;P&gt;StandardV2 NAT Gateway flow logs are enabled through Diagnostic settings on your NAT gateway resource, where the log data can be sent to Log Analytics, a storage account, or an Event Hub destination. “NatGatewayFlowlogV1” is the released log category, and it provides IP-level information on traffic flowing through your StandardV2 NAT gateway.&lt;/P&gt;
&lt;img&gt;&lt;EM&gt;Enable NAT&lt;/EM&gt;&lt;EM&gt;Gateway Flow Logs through Diagnostics setting on your StandardV2 NAT gateway resource.&lt;/EM&gt;&lt;/img&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img&gt;&lt;EM&gt;Schema output as seen on Log Analytics for a NAT gateway traffic flow.&lt;/EM&gt;&lt;/img&gt;
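&lt;P&gt;If you send the logs to Log Analytics, you can inspect the exact columns the table exposes in your workspace with a quick schema query (a sketch, using the table name shown in the query later in this post):&lt;/P&gt;

```sql
// Lists the columns and types of the flow log table in your workspace
NatGatewayFlowlogsV1
| getschema
```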
&lt;H3&gt;Why should I use flow logs?&lt;/H3&gt;
&lt;P&gt;&lt;U&gt;Security and compliance visibility&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Prior to NAT gateway flow logs, customers had no visibility into the outbound connections their virtual machines made through a NAT gateway. This made it difficult to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validate that only approved destinations were being accessed&lt;/LI&gt;
&lt;LI&gt;Audit suspicious or unexpected outbound patterns&lt;/LI&gt;
&lt;LI&gt;Satisfy compliance requirements that mandate traffic recording&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Flow logs now provide visibility to the source IP -&amp;gt; NAT gateway outbound IP -&amp;gt; destination IP, along with details on sent/dropped packets and bytes.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;Usage analytics&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Flow logs allow you to answer usage questions such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Which VMs are generating the most outbound requests?&lt;/LI&gt;
&lt;LI&gt;Which destinations receive the most traffic?&lt;/LI&gt;
&lt;LI&gt;Is throughput growth caused by a specific workload pattern?&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This level of insight is especially useful when debugging unexpected throughput increases, billing spikes, and connection bottlenecks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;To note: Flow logs only capture established connections. This means the TCP 3&lt;/EM&gt;&lt;EM&gt;‑&lt;/EM&gt;&lt;EM&gt;way handshake (SYN → SYN/ACK → ACK) or the UDP ephemeral session setup must complete. &lt;/EM&gt;&lt;EM&gt;If a connection never establishes, for example due to NSG denial, routing mismatch, or SNAT exhaustion, it will not appear in flow logs.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H3&gt;Workflow of troubleshooting with flow logs&lt;/H3&gt;
&lt;P&gt;Let's walk through how you can leverage flow logs to troubleshoot a scenario where you are seeing intermittent connection drops.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scenario: You have VMs that use a StandardV2 NAT gateway to reach the Internet. However, your VMs intermittently fail to reach github.com.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Check NAT gateway health&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Start with the datapath availability metric, which reflects the NAT gateway's overall health.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;If metric &amp;gt; 90%, this confirms the NAT gateway is healthy and working as expected to send outbound traffic to the internet. Continue to Step 2.&lt;/LI&gt;
&lt;LI&gt;If metric is lower, visit &lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/troubleshoot-nat-connectivity#datapath-availability-drop-on-nat-gateway-with-connection-failures" target="_blank"&gt;Troubleshoot Azure NAT Gateway connectivity - Azure NAT Gateway | Microsoft Learn&lt;/A&gt; for troubleshooting tips.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Enable StandardV2 NAT Gateway Flow Logs&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To further investigate the root cause, enable StandardV2 NAT Gateway Flow Logs (&lt;EM&gt;NatGatewayFlowLogsV1&lt;/EM&gt; log category in Diagnostics Setting) for the NAT gateway resource providing outbound connectivity for the impacted VMs. It is recommended to enable Log Analytics as a destination as it allows you to easily query the data. For the detailed steps, visit &lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/monitor-nat-gateway-flow-logs" target="_blank"&gt;Monitor with StandardV2 NAT Gateway Flow Logs - Azure NAT Gateway | Microsoft Learn&lt;/A&gt;.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;Tip: You may enable flow logs even when not troubleshooting to ensure you’ll have historical data to reference when issues occur.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: Confirm whether the connection was established&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Use Log Analytics to query for flows with source IP == VM private IP and destination IP == IP address(es) of github.com. The following query will generate a table and chart of the total packets sent per minute from your source IP to the destination IP through your NAT gateway in the last 24 hours.&lt;LI-CODE lang="sql"&gt;NatGatewayFlowlogsV1
| where TimeGenerated &amp;gt; ago(1d)
| where SourceIP == '10.0.0.4'  //and DestinationIP == &amp;lt;"github.com IP"&amp;gt;
| summarize TotalPacketsSent = sum(PacketsSent) by TimeGenerated = bin(TimeGenerated, 1m), SourceIP, DestinationIP
| order by TimeGenerated asc&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;LI&gt;If there are no records of this connection, it is likely an issue with establishing the connection because flow logs will only capture records of established connections. Take a look at &lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-metrics#snat-connection-count" target="_blank"&gt;SNAT connection metrics&lt;/A&gt; to determine whether it may be a SNAT port exhaustion issue or NSGs/UDRs that may be blocking the traffic.&lt;/LI&gt;
&lt;LI&gt;If there are records of the connection, proceed with the next step.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4: Check if there are any packets dropped&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;In Log Analytics, query for the total "PacketsSentDropped" and "PacketsReceivedDropped" per source/outbound/destination IP connection.&lt;/P&gt;
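&lt;P&gt;As a sketch, a query along these lines surfaces flows with drops; the column names follow the schema referenced above, so adjust them to what your workspace actually shows:&lt;/P&gt;

```sql
NatGatewayFlowlogsV1
| where TimeGenerated between (ago(1d) .. now())
| summarize SentDropped = sum(PacketsSentDropped),
            ReceivedDropped = sum(PacketsReceivedDropped)
  by SourceIP, DestinationIP
| where SentDropped != 0 or ReceivedDropped != 0
| order by SentDropped desc
```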
&lt;OL&gt;
&lt;LI&gt;If "PacketsSentDropped" &amp;gt; 0, NAT gateway dropped traffic sent from your VM.&lt;/LI&gt;
&lt;LI&gt;If "PacketsReceivedDropped" &amp;gt; 0, NAT gateway dropped traffic received from the destination IP, github.com in this case.&lt;/LI&gt;
&lt;LI&gt;In both instances, it typically means either the client or the server is pushing more traffic through a single connection than is optimal, causing &lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-sku#sku-comparison" target="_blank"&gt;connection-level rate limiting&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;To mitigate:&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Avoid relying on one connection and instead use multiple connections.&lt;/LI&gt;
&lt;LI&gt;Distribute traffic across multiple outbound IP addresses by assigning more public IP addresses to the NAT gateway resource.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Conclusion&lt;/H3&gt;
&lt;P&gt;StandardV2 NAT Gateway Flow Logs unlock a powerful new dimension of outbound visibility and can help you:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Validate cybersecurity readiness&lt;/LI&gt;
&lt;LI&gt;Audit outbound flows&lt;/LI&gt;
&lt;LI&gt;Diagnose intermittent connectivity issues&lt;/LI&gt;
&lt;LI&gt;Understand traffic patterns and optimize architecture&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We are excited to see how you leverage this new capability with your StandardV2 NAT gateways!&lt;/P&gt;
&lt;H3&gt;Have more questions?&lt;/H3&gt;
&lt;P&gt;As always, for any feedback, please feel free to reach us by&amp;nbsp;&lt;A href="https://feedback.azure.com/d365community/forum/8ae9bf04-8326-ec11-b6e6-000d3a4f0789" target="_blank"&gt;submitting your feedback&lt;/A&gt;. We look forward to hearing your thoughts and hope this announcement helps you build more resilient applications in Azure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information on StandardV2 NAT Gateway Flow Logs and how to enable it, visit:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-gateway-flow-logs" target="_blank"&gt;Manage StandardV2 NAT Gateway Flow Logs - Azure NAT Gateway | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/monitor-nat-gateway-flow-logs" target="_blank"&gt;Monitor with StandardV2 NAT Gateway Flow Logs - Azure NAT Gateway | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;To see the most up-to-date pricing for flow logs, visit&amp;nbsp;&lt;A href="https://azure.microsoft.com/en-us/pricing/details/azure-nat-gateway/?msockid=028aa4446a5a601f37ecb0076b7761c7" target="_blank"&gt;Azure NAT Gateway - Pricing | Microsoft Azure&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;To learn more about StandardV2 NAT Gateway, visit&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-overview#standardv2-nat-gateway" target="_blank"&gt;What is Azure NAT Gateway? | Microsoft Learn&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Feb 2026 16:07:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/unlock-outbound-traffic-insights-with-azure-standardv2-nat/ba-p/4493138</guid>
      <dc:creator>cozhang</dc:creator>
      <dc:date>2026-02-06T16:07:33Z</dc:date>
    </item>
    <item>
      <title>Data Center Quantized Congestion Notification: Scaling congestion control for RoCE RDMA in Azure</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/data-center-quantized-congestion-notification-scaling-congestion/ba-p/4468417</link>
      <description>&lt;P&gt;As cloud storage demands continue to grow, the need for ultra-fast, reliable networking becomes ever more critical. Microsoft Azure’s journey to empower its storage infrastructure with RDMA (Remote Direct Memory Access) has been transformative, but it’s not without challenges—especially when it comes to congestion control at scale. Azure’s deployment of RDMA at regional scale relies on DCQCN (Data Center Quantized Congestion Notification), a protocol that’s become central to Azure’s ability to deliver high-throughput, low-latency storage services across vast, heterogeneous data center regions.&lt;/P&gt;
&lt;H2&gt;Why congestion control matters in RDMA networks&lt;/H2&gt;
&lt;P&gt;RDMA offloads the network stack to NIC hardware, reducing CPU overhead and enabling near line-rate performance. However, as Azure scaled RDMA across clusters and regions, it faced new challenges:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Heterogeneous hardware:&lt;/STRONG&gt; Different generations of RDMA NICs (Network Interface Cards) and switches, each with their own quirks.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Variable latency:&lt;/STRONG&gt; Long-haul links between datacenters introduce large round-trip time (RTT) variations.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Congestion risks:&lt;/STRONG&gt; High-speed, incast-like traffic patterns can easily overwhelm buffers, leading to packet loss and degraded performance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;To address these, Azure needed a congestion control protocol that could operate reliably across diverse hardware and network conditions. Traditional TCP congestion control mechanisms don’t apply here, so Azure leverages &lt;STRONG&gt;DCQCN combined with Priority Flow Control (PFC)&lt;/STRONG&gt; to maintain high throughput, low latency, and near-zero packet loss.&lt;/P&gt;
&lt;H3&gt;How DCQCN works&lt;/H3&gt;
&lt;P&gt;DCQCN coordinates congestion control using three main entities:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Reaction point (RP)&lt;/STRONG&gt;: The sender adjusts its rate based on feedback.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Congestion point (CP)&lt;/STRONG&gt;: Switches mark packets using ECN when queues exceed thresholds.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Notification point (NP)&lt;/STRONG&gt;: The receiver sends Congestion Notification Packets (CNPs) upon receiving ECN-marked packets.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This feedback loop allows RDMA flows to dynamically adapt their sending rates, preventing congestion collapse while maintaining fairness.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;When the switch detects congestion, it marks packets with ECN.&lt;/LI&gt;
&lt;LI&gt;The receiver NIC (NP) observes ECN marks and sends CNPs to the sender.&lt;/LI&gt;
&lt;LI&gt;The sender NIC (RP) reduces its sending rate upon receiving CNPs; otherwise, it increases the rate gradually.&lt;/LI&gt;
&lt;/UL&gt;
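&lt;P&gt;To make the feedback loop concrete, here is a toy model of the reaction-point behavior described above. The constants are illustrative only; real DCQCN also adapts its reduction factor over time and uses multi-phase rate recovery:&lt;/P&gt;

```python
def rp_update(rate, target, cnp_received, alpha=0.5, recovery_step=0.1):
    """One reaction-point (RP) step: cut the rate when a CNP arrives,
    otherwise recover gradually toward the rate held before the cut."""
    if cnp_received:
        target = rate                  # remember the rate before the cut
        rate = rate * (1 - alpha / 2)  # multiplicative decrease
    else:
        rate = rate + (target - rate) * recovery_step  # gradual recovery
    return rate, target

rate, target = 100.0, 100.0
rate, target = rp_update(rate, target, cnp_received=True)   # congestion seen
assert rate == 75.0
rate, target = rp_update(rate, target, cnp_received=False)  # no CNP: recover
assert rate == 77.5
```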
&lt;H3&gt;Interoperability challenges across different hardware generations&lt;/H3&gt;
&lt;P&gt;Cloud infrastructure evolves incrementally, typically at the level of individual clusters or racks, as newer server hardware generations are introduced. Within a single region, clusters often differ in their NIC configurations. Our deployment includes three generations of commodity RDMA NICs—Gen1, Gen2, and Gen3—each implementing DCQCN with distinct design variations. These discrepancies create complex and often problematic interactions when NICs from different generations interoperate.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Gen1 NICs:&lt;/STRONG&gt; Firmware-based DCQCN, NP-side CNP coalescing, burst-based rate limiting.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Gen2/Gen3 NICs:&lt;/STRONG&gt; Hardware-based DCQCN, RP-side CNP coalescing, per-packet rate limiting.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Problem:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Gen2/Gen3 NICs sending to Gen1 can trigger excessive cache misses, slowing down Gen1’s receiver pipeline.&lt;/LI&gt;
&lt;LI&gt;Gen1 sending to Gen2/Gen3 can cause excessive rate reductions due to frequent CNPs.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Azure’s solution:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Move CNP coalescing to NP side for Gen2/Gen3.&lt;/LI&gt;
&lt;LI&gt;Implement per-QP CNP rate limiting, matching Gen1’s timer.&lt;/LI&gt;
&lt;LI&gt;Enable per-burst rate limiting on Gen2/Gen3 to reduce cache pressure.&lt;/LI&gt;
&lt;/UL&gt;
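&lt;P&gt;The per-QP CNP rate limiting fix can be pictured with a minimal sketch (names and the timer value are illustrative, not the NIC firmware logic): the notification point sends at most one CNP per queue pair per interval, no matter how many ECN-marked packets arrive:&lt;/P&gt;

```python
def cnp_filter(events, interval):
    """events: (time, qp) pairs of ECN-marked arrivals at the NP; returns
    the subset for which a CNP is actually sent (at most one per queue
    pair per interval)."""
    last_sent = {}
    sent = []
    for t, qp in events:
        if qp not in last_sent or t - last_sent[qp] >= interval:
            last_sent[qp] = t
            sent.append((t, qp))
    return sent

marked = [(0, "qp1"), (10, "qp1"), (40, "qp1"), (55, "qp1"), (5, "qp2")]
# qp1's marks at t=10 and t=40 are coalesced; qp2 is limited independently.
assert cnp_filter(marked, interval=50) == [(0, "qp1"), (55, "qp1"), (5, "qp2")]
```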
&lt;H3&gt;DCQCN tuning: Achieving fairness and performance&lt;/H3&gt;
&lt;P&gt;DCQCN is inherently &lt;STRONG&gt;RTT-fair&lt;/STRONG&gt;—its rate adjustment is independent of round-trip time, making it suitable for Azure’s regional networks with RTTs ranging from microseconds to milliseconds.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Tuning Strategies:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Sparse ECN marking: &lt;/STRONG&gt;Use large ECN marking thresholds (K_max - K_min) and low marking probabilities (P_max) for flows with large RTTs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Joint buffer and DCQCN tuning: &lt;/STRONG&gt;Tune switch buffer thresholds and DCQCN parameters together to avoid premature congestion signals and optimize throughput.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Global parameter settings: &lt;/STRONG&gt;Azure’s NICs support only global DCQCN settings, so parameters must work well across all traffic types and RTTs.&lt;/LI&gt;
&lt;/UL&gt;
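&lt;P&gt;The sparse ECN marking knobs correspond to a RED-style marking curve at the congestion point. A minimal sketch (the numeric thresholds below are illustrative, not Azure's production settings):&lt;/P&gt;

```python
def ecn_mark_probability(queue_len, k_min, k_max, p_max):
    """RED-style curve: no marking below K_min, a linear ramp up to P_max
    between K_min and K_max, and mark-everything above K_max."""
    if queue_len <= k_min:
        return 0.0
    if queue_len >= k_max:
        return 1.0
    return p_max * ((queue_len - k_min) / (k_max - k_min))

# A wide K_min..K_max band with a low P_max keeps marking sparse,
# which suits flows with large RTTs.
assert ecn_mark_probability(100, k_min=100, k_max=400, p_max=0.01) == 0.0
assert ecn_mark_probability(400, k_min=100, k_max=400, p_max=0.01) == 1.0
assert abs(ecn_mark_probability(250, 100, 400, 0.01) - 0.005) < 1e-12
```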
&lt;H2&gt;Real-world results&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;High throughput &amp;amp; low latency:&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;RDMA traffic runs at line rate with near-zero packet loss.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;CPU savings:&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Freed CPU cores can be repurposed for customer VMs or application logic.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance metrics:&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;RDMA reduces CPU utilization by up to 34.5% compared to TCP for storage frontend traffic.&lt;/LI&gt;
&lt;LI&gt;Large I/O requests (1 MB) see up to 23.8% latency reduction for reads and 15.6% for writes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Scalability:&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;As of November 2025, ~85% of Azure’s traffic is RDMA, supported in all public regions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;DCQCN is a cornerstone of Azure’s RDMA-enabled storage infrastructure, enabling reliable, high-performance cloud storage at scale. By combining ECN-based signaling with dynamic rate adjustments, DCQCN ensures high throughput, low latency, and near-zero packet loss—even across heterogeneous hardware and long-haul links. Its interoperability fixes and careful tuning make it a critical enabler for RDMA adoption in modern data centers, paving the way for efficient, scalable, and resilient cloud storage.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Jan 2026 22:35:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/data-center-quantized-congestion-notification-scaling-congestion/ba-p/4468417</guid>
      <dc:creator>VamsiVadlamuri</dc:creator>
      <dc:date>2026-01-13T22:35:21Z</dc:date>
    </item>
    <item>
      <title>Azure Front Door: Implementing lessons learned following October outages</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-front-door-implementing-lessons-learned-following-october/ba-p/4479416</link>
      <description>&lt;H5&gt;Abhishek Tiwari, Vice President of Engineering, Azure Networking&lt;BR /&gt;Amit Srivastava, Principal PM Manager, Azure Networking&lt;BR /&gt;Varun Chawla, Partner Director of Engineering&lt;/H5&gt;
&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;Azure Front Door is Microsoft's advanced edge delivery platform, combining Content Delivery Network (CDN), global security, and traffic distribution into a single unified offering. By using Microsoft's extensive global edge network, Azure Front Door ensures efficient content delivery and advanced security through 210+ &lt;A href="https://learn.microsoft.com/en-us/azure/frontdoor/edge-locations-by-region" target="_blank" rel="noopener"&gt;global and local points of presence (PoPs)&lt;/A&gt; strategically positioned close to both end users and applications.&lt;/P&gt;
&lt;P&gt;As the central global entry point from the internet onto customer applications, we power mission-critical customer applications as well as many of Microsoft’s internal services. We have a highly distributed, resilient architecture, which protects against failures at the server, rack, site, and even regional level. This resiliency is achieved by our intelligent traffic management layer, which monitors failures and load balances traffic at the server, rack, or edge-site level within the primary ring, supplemented by a secondary fallback ring which accepts traffic in case of primary traffic overflow or broad regional failures. We also deploy a traffic shield as a terminal safety net to ensure that in the event of a managed or unmanaged edge site going offline, end-user traffic continues to flow to the next available edge site.&lt;/P&gt;
&lt;P&gt;Like any large-scale CDN, we deploy each customer configuration across a globally distributed edge fleet, densely shared with thousands of other tenants. While this architecture enables global scale, it carries the risk that certain incompatible configurations, if not contained, can propagate broadly and quickly, resulting in a large blast radius of impact. Here we describe how two recent service incidents impacting Azure Front Door have reinforced the need to accelerate ongoing investments in hardening our resiliency and tenant isolation strategy, to mitigate both the likelihood and the scale of impact from this class of risk.&lt;/P&gt;
&lt;H2&gt;October incidents: recap and key learnings&lt;/H2&gt;
&lt;P&gt;Azure Front Door experienced two service incidents, on October 9&lt;SUP&gt;th&lt;/SUP&gt; and October 29&lt;SUP&gt;th&lt;/SUP&gt;, both causing customer-impacting service degradation.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;On October 9&lt;SUP&gt;th&lt;/SUP&gt;:&lt;/STRONG&gt; A manual cleanup of stuck tenant metadata bypassed our configuration protection layer, allowing incompatible metadata to propagate beyond our canary edge sites. This metadata was created on October 7&lt;SUP&gt;th&lt;/SUP&gt; by a control-plane defect triggered by a customer configuration change. While the protection system initially blocked the propagation, the manual override operation bypassed our safeguards. This incompatible configuration reached the next stage and activated a latent data-plane defect in a subset of edge sites, causing availability impact primarily across Europe (~6%) and Africa (~16%). You can learn more about this issue in detail at &lt;A href="https://aka.ms/AIR/QNBQ-5W8" target="_blank" rel="noopener"&gt;https://aka.ms/AIR/QNBQ-5W8&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;On October 29&lt;SUP&gt;th&lt;/SUP&gt;:&lt;/STRONG&gt; A different sequence of configuration changes across two control-plane versions produced incompatible metadata. Because the failure mode in the data plane was asynchronous, all health-check validations embedded in our protection systems passed during the rollout. The incompatible customer configuration metadata propagated globally through a staged rollout and also updated the “last known good” (LKG) snapshot. Following this global rollout, the asynchronous process in the data plane exposed another defect, which caused crashes. This impacted connectivity and DNS resolution for all applications onboarded to our platform. Extended recovery time amplified the impact on customer applications and Microsoft services. You can learn more about this issue in detail at &lt;A href="https://aka.ms/AIR/YKYN-BWZ" target="_blank" rel="noopener"&gt;https://aka.ms/AIR/YKYN-BWZ&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;We took away a number of clear and actionable lessons from these incidents, which are applicable not just to our service, but to any multi-tenant, high-density, globally distributed system.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Configuration resiliency&lt;/STRONG&gt; – Valid configuration updates should propagate safely, consistently, and predictably across our global edge, while ensuring that incompatible or erroneous configuration never propagate beyond canary environments.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Data plane resiliency - &lt;/STRONG&gt;Additionally, configuration processing in the data plane must not cause availability impact to any customer.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Tenant isolation&lt;/STRONG&gt; – Traditional isolation techniques such as hardware partitioning and virtualization are impractical at edge sites. This requires innovative sharding techniques to ensure single tenant-level isolation – a must-have to reduce potential blast radius.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Accelerated and automated recovery time objective (RTO)&lt;/STRONG&gt; – The system should be able to automatically revert to the last known good configuration within an acceptable RTO. For a service like Azure Front Door, we deem ~10 minutes to be a practical RTO for our hundreds of thousands of customers at every edge site.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Post-outage, given the severity of impact from allowing an incompatible configuration to propagate globally, we made the difficult decision to temporarily block configuration changes in order to expedite the rollout of additional safeguards. Between October 29&lt;SUP&gt;th&lt;/SUP&gt; and November 5&lt;SUP&gt;th&lt;/SUP&gt;, we prioritized and deployed immediate hardening steps before re-enabling configuration changes. We are confident that the system is stable, and we are continuing to invest in additional safeguards to further strengthen the platform's resiliency.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Learning category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Goal&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Repairs&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Status&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Safe customer configuration deployment&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Incompatible configuration never propagates beyond Canary&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;UL&gt;
&lt;LI&gt;Control plane and data plane defect fixes&lt;/LI&gt;
&lt;LI&gt;Forced synchronous configuration processing&lt;/LI&gt;
&lt;LI&gt;Additional stages with extended bake time&lt;/LI&gt;
&lt;LI&gt;Early detection of crash state&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;Data plane resiliency&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Configuration processing cannot impact data plane availability&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Manage data-plane lifecycle to prevent outages caused by configuration-processing defects.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG class="lia-align-center"&gt;&amp;nbsp; &lt;SPAN class="lia-text-color-6"&gt;Completed &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Isolated work-process in every data plane server to process and load the configuration.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;January 2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;100% Azure Front Door resiliency posture for Microsoft internal services&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Microsoft operates an isolated, independent Active/Active fleet with automatic failover for critical Azure services&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Phase 1: &lt;/STRONG&gt;Onboarded the batch of critical services impacted by the Oct 29&lt;SUP&gt;th&lt;/SUP&gt; outage, running on a day-old configuration&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Phase 2:&lt;/STRONG&gt; Automation &amp;amp; hardening of operations, auto-failover and self-management of Azure Front Door onboarding for additional services&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;March 2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;&lt;STRONG&gt;Recovery improvements&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td rowspan="2"&gt;
&lt;P&gt;Data plane crash recovery in under 10 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Data plane boot-up time optimized via local cache (~1 hour)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;Completed&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Accelerate recovery time &amp;lt; 10 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;March 2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Tenant isolation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No configuration or traffic regression can impact other tenants&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Micro cellular Azure Front Door with ingress layered shards&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;June 2026&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This blog is the first in a multi-part series on Azure Front Door resiliency. In this blog, we will focus on configuration resiliency—how we are making the configuration pipeline safer and more robust. Subsequent blogs will cover tenant isolation and recovery improvements.&lt;/P&gt;
&lt;H2&gt;How our configuration propagation works&lt;/H2&gt;
&lt;P&gt;Azure Front Door configuration changes can be broadly classified into three distinct categories.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Service&amp;nbsp;code &amp;amp; data&lt;/STRONG&gt; – these include all aspects of Azure Front Door service like&amp;nbsp;management plane,&amp;nbsp;control plane, data plane,&amp;nbsp;configuration propagation system. Azure Front Door follows a safe deployment practice (SDP) process to&amp;nbsp;roll out&amp;nbsp;newer versions of management,&amp;nbsp;control&amp;nbsp;or data plane&amp;nbsp;over a period of&amp;nbsp;approximately 2-3 weeks.&amp;nbsp;This ensures that any regression in software does not have a global impact.&amp;nbsp;However, latent bugs that escape pre-validation and SDP rollout can remain undetected until a specific combination of customer traffic patterns or configuration changes trigger the issue.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Web Application Firewall (WAF) &amp;amp; L7 DDoS platform&amp;nbsp;data&lt;/STRONG&gt;&amp;nbsp;–&amp;nbsp;These datasets are used by Azure Front Door to deliver security and load-balancing capabilities. Examples include GeoIP data, malicious attack signatures, and IP reputation signatures. Updates to these datasets occur daily through multiple SDP stages with an extended bake time of over 12 hours to minimize the risk of global impact during rollout. This dataset is shared across all customers and the platform, and it is validated immediately since it does not depend on variations in customer traffic or configuration steps.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Customer configuration data&lt;/STRONG&gt; – Examples are any customer configuration change—whether a routing rule update, backend pool modification, WAF rule change, or security policy change. Given the nature of these changes, the expectation across the edge delivery / CDN industry is to propagate them globally in 5-10 minutes. Both outages stemmed from issues within this category.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All configuration changes, including customer configuration data, are&amp;nbsp;processed through a multi-stage pipeline designed to ensure correctness before global rollout&amp;nbsp;across Azure Front Door’s 200+ edge locations.&amp;nbsp;At a high level, Azure Front Door’s configuration&amp;nbsp;propagation system has&amp;nbsp;two&amp;nbsp;distinct&amp;nbsp;components&amp;nbsp;-&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Control plane&lt;/STRONG&gt;&amp;nbsp;– Accepts customer API/portal changes (create/update/delete for profiles, routes, WAF policies, origins, etc.) and translates them into internal configuration metadata&amp;nbsp;which the data plane can understand.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Data plane&lt;/STRONG&gt;&amp;nbsp;– Globally distributed edge servers that&amp;nbsp;terminate client traffic, apply routing/WAF logic, and proxy to origins using the configuration produced by the control plane.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Between these two halves sits a&amp;nbsp;&lt;STRONG&gt;multi-stage configuration rollout pipeline&lt;/STRONG&gt;&amp;nbsp;with a dedicated protection system&amp;nbsp;(known as ConfigShield):&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Changes flow through multiple stages (pre-canary, canary, expanding waves&amp;nbsp;to production) rather than going global at once.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Each stage is&amp;nbsp;&lt;STRONG&gt;health-gated&lt;/STRONG&gt;: the data plane must remain within strict error and latency thresholds before&amp;nbsp;proceeding.&amp;nbsp;Each stage’s health check also rechecks&amp;nbsp;previous&amp;nbsp;stage’s health for any&amp;nbsp;regressions.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;A successfully completed rollout updates a&amp;nbsp;&lt;STRONG&gt;last known good (LKG) &lt;/STRONG&gt;snapshot used for automated rollback.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Historically, rollout targeted global completion in&amp;nbsp;roughly 5–10 minutes, in line with industry standards.&lt;/LI&gt;
&lt;/UL&gt;
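&lt;P&gt;The staged, health-gated pipeline above can be sketched as follows. This is a simplified model with illustrative stage names and health gates, not the actual ConfigShield implementation:&lt;/P&gt;

```python
STAGES = ["pre-canary", "canary", "wave-1", "wave-2", "production"]

def rollout(config, lkg, healthy):
    """Advance a config stage by stage; any failed health gate stops
    propagation and reverts the fleet to the last known good snapshot."""
    for stage in STAGES:
        if not healthy(stage, config):
            return ("rolled-back", lkg)   # bad config contained early
    return ("completed", config)          # this config becomes the new LKG

# A config that fails health checks at canary never reaches later stages.
bad_at_canary = lambda stage, cfg: not (stage == "canary" and cfg == "cfg-v2")
assert rollout("cfg-v2", "cfg-v1", bad_at_canary) == ("rolled-back", "cfg-v1")
assert rollout("cfg-v1", "cfg-v0", lambda s, c: True) == ("completed", "cfg-v1")
```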
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Customer configuration processing in Azure Front Door data plane stack&lt;/H3&gt;
&lt;P&gt;Customer configuration changes in Azure Front Door traverse multiple layers—from the control plane through the deployment system—before being converted into &lt;STRONG&gt;FlatBuffers&lt;/STRONG&gt; at each Azure Front Door node. These FlatBuffers are then loaded by the Azure Front Door data plane stack, which runs as Kubernetes pods on every node.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;FlatBuffer Composition: Each FlatBuffer references several sub-resources such as WAF and Rules Engine schematic files, SSL certificate objects, and URL signing secrets.&lt;/LI&gt;
&lt;LI&gt;Data plane architecture:
&lt;UL&gt;
&lt;LI&gt;Master process: Accepts configuration changes (memory-mapped files with references) and manages the lifecycle of worker processes.&lt;/LI&gt;
&lt;LI&gt;Workers: L7 proxy processes that serve customer traffic using the applied configuration.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H4&gt;Processing flow for each configuration update:&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Load and apply in master: The transformed configuration is loaded and applied in the master process. Cleanup of unused references occurs synchronously except for certain categories → &lt;STRONG&gt;&lt;EM&gt;&lt;U&gt;October 9 outage occurred during this step due to a crash triggered by incompatible metadata.&lt;/U&gt;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Apply to workers: Configuration is applied to all worker processes without memory overhead (FlatBuffers are memory-mapped).&lt;/LI&gt;
&lt;LI&gt;Serve traffic: Workers start consuming new FlatBuffers for new requests; in-flight requests continue using old buffers. Old buffers are queued for cleanup post-completion.&lt;/LI&gt;
&lt;LI&gt;Feedback to deployment service: Positive feedback signals readiness for rollout.&lt;/LI&gt;
&lt;LI&gt;Cleanup: FlatBuffers are freed asynchronously by the master process after all workers load updates → &lt;STRONG&gt;&lt;EM&gt;&lt;U&gt;October 29 outage occurred during this step due to a latent bug in reference counting logic.&lt;/U&gt;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
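&lt;P&gt;The serve-traffic and cleanup steps above hinge on new requests switching to the new memory-mapped configuration while in-flight requests keep using the old buffer until they complete. A minimal sketch of that swap (all names illustrative):&lt;/P&gt;

```python
class ConfigHolder:
    """Toy model of the worker-side buffer swap: new requests see the new
    config immediately; a request pins whatever config it started with."""
    def __init__(self, config):
        self.current = config

    def swap(self, new_config):
        old = self.current
        self.current = new_config   # new requests pick up the new config
        return old                  # old buffer is queued for later cleanup

    def begin_request(self):
        return self.current         # request pins the config it started with

holder = ConfigHolder("flatbuffer-v1")
in_flight = holder.begin_request()       # request starts on v1
retired = holder.swap("flatbuffer-v2")   # update arrives mid-request
assert holder.begin_request() == "flatbuffer-v2"  # new requests use v2
assert in_flight == "flatbuffer-v1"      # in-flight request still on v1
assert retired == "flatbuffer-v1"        # v1 awaits cleanup after completion
```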
&lt;P&gt;The October incidents showed we needed to strengthen key aspects of configuration validation, propagation safeguards, and runtime behavior. During the Azure Front Door incident on October 9&lt;SUP&gt;th&lt;/SUP&gt;, the protection system worked as intended but was later bypassed by our engineering team during a manual cleanup operation. During the incident on October 29&lt;SUP&gt;th&lt;/SUP&gt;, the incompatible customer configuration metadata progressed through the protection system before the delayed asynchronous processing task resulted in the crash.&lt;/P&gt;
&lt;H2&gt;Configuration propagation safeguards&lt;/H2&gt;
&lt;P&gt;Based on learnings from the incidents, we are implementing a comprehensive set of configuration resiliency improvements. These changes aim to guarantee that any sequence of configuration changes cannot trigger instability in the data plane, and to ensure quicker recovery in the event of anomalies.&lt;/P&gt;
&lt;H3 class="lia-align-left"&gt;Strengthening configuration generation safety&lt;/H3&gt;
&lt;P&gt;This improvement pivots on a ‘shift-left’ strategy: we want to catch regressions early, before they propagate to production. It also includes fixing the latent defects that were the proximate cause of the outages.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Fixing outage specific defects&lt;/STRONG&gt; - We have fixed the control-plane defects that could generate incompatible tenant metadata under specific operation sequences. We have also remediated the associated data-plane defects.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Stronger cross-version validation - &lt;/STRONG&gt;We are expanding our test and validation suite to account for changes across multiple control plane build versions. This is expected to be fully completed by February 2026.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fuzz testing&lt;/STRONG&gt; - Automated fuzzing and testing of metadata generation contract between the control plane and the data plane. This allows us to generate an expanded set of invalid/unexpected configuration combinations which might not be achievable by traditional test cases alone. This is expected to be fully completed by February 2026.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Preventing incompatible configurations from being propagated&lt;/H3&gt;
&lt;P&gt;This segment of the resiliency strategy strives to ensure that a potentially dangerous configuration change never propagates beyond the canary stage.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Protection system is “always-on” &lt;/STRONG&gt;- Enhancements to operational procedures and tooling prevent bypass in all scenarios (including internal cleanup/maintenance), and any cleanup must flow through the same guarded stages and health checks as standard configuration changes. This is completed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Making rollout behavior more predictable and conservative - &lt;/STRONG&gt;Configuration processing in the data plane is now fully synchronous. Every data plane issue due to incompatible metadata can be detected within 10 seconds at every stage. This is completed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enhancement to deployment pipeline&lt;/STRONG&gt; - Additional stages during roll-out and extended bake time between stages serve as an additional safeguard during configuration propagation. This is completed.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Recovery tool&lt;/STRONG&gt; improvements now make it easier to revert to any previous version of LKG with a single click. This is completed.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;These changes significantly improve system safety. Post-outage, we have increased the configuration propagation time to approximately 45 minutes. We are working towards reducing configuration propagation time closer to pre-incident levels once the additional safeguards covered in the Data plane resiliency section below are completed by mid-January 2026.&lt;/P&gt;
&lt;H2&gt;Data plane resiliency&lt;/H2&gt;
&lt;P&gt;The data plane recovery was the toughest part of recovery efforts during the October incidents. We must ensure fast recovery as well as resilience to configuration processing related issues for the data plane. To address this, we implemented changes that decouple the data plane from incompatible configuration changes. With these enhancements, the data plane continues operating on the last known good configuration—even if the configuration pipeline safeguards fail to protect as intended.&lt;/P&gt;
&lt;H3&gt;Decoupling data plane from configuration changes&lt;/H3&gt;
&lt;P&gt;Each server’s data plane consists of a master process, which accepts configuration changes and manages the lifecycle of multiple worker processes that serve customer traffic. One of the critical reasons for the prolonged outage in October was that, due to latent defects in the data plane, the master process crashed when presented with a bad configuration. The master is a critical command-and-control process, and when it crashes it takes down the entire data plane on that node. Recovery of the master process involves reloading hundreds of thousands of configurations from scratch and took approximately 4.5 hours.&lt;/P&gt;
&lt;P&gt;We have since made changes to the system to ensure that even if the master process crashes for any reason - including being presented with incompatible configuration data - the workers remain healthy and able to serve traffic. During such an event, the workers cannot accept new configuration changes but continue to serve customer traffic using the last known good configuration. This work is completed.&lt;/P&gt;
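&lt;P&gt;A minimal sketch of the last-known-good fallback idea: a worker attempts to apply a new configuration and, on any failure, keeps serving from the configuration it already holds. All names here are illustrative, not the actual data plane code.&lt;/P&gt;

```python
import copy

class Worker:
    """Toy traffic-serving worker that keeps its last known good (LKG)
    config whenever a new config fails to load."""

    def __init__(self, lkg_config):
        self.config = copy.deepcopy(lkg_config)

    def apply(self, new_config, parse):
        try:
            self.config = parse(new_config)   # may raise on a bad config
            return True
        except Exception:
            return False                      # keep serving on the LKG

    def serve(self, request):
        # Traffic is always served from whatever config is loaded.
        return self.config["routes"].get(request, "404")

def parse(raw):
    if "routes" not in raw:
        raise ValueError("incompatible configuration")
    return raw

w = Worker({"routes": {"/": "frontend"}})
ok = w.apply({"bad": True}, parse)   # incompatible change is rejected...
answer = w.serve("/")                # ...and traffic still flows on LKG
```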
&lt;H2&gt;Introducing Food Taster: strengthening config propagation resiliency&lt;/H2&gt;
&lt;P&gt;In our efforts to further strengthen Azure Front Door’s configuration propagation system, we are introducing an additional configuration safeguard known internally as &lt;STRONG&gt;Food Taster&lt;/STRONG&gt; which protects the master and worker processes from any configuration change related incidents, thereby ensuring data plane resiliency.&lt;/P&gt;
&lt;P&gt;The principle is simple: every data-plane server will run a redundant, isolated process – the Food Taster – whose only job is to ingest and process new configuration metadata first and then pass validated configuration changes to the active data plane. This redundant worker does not accept any customer traffic.&lt;/P&gt;
&lt;P&gt;All configuration processing in this Food Taster is fully synchronous. That means we do all parsing, validation, and any expensive or risky work up front, and we do not move on until the Food Taster has either proven the configuration is safe or rejected it. Only when the Food Taster successfully loads the configuration and returns “Config OK” does the master process proceed to load the same config and then instruct the worker processes to do the same. If anything goes wrong in the Food Taster, the failure is contained to that isolated worker; the master and traffic-serving workers never see that invalid configuration.&lt;/P&gt;
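&lt;P&gt;The pattern - fully validating a config in an isolated process before any real consumer loads it - can be sketched with a child process whose crash is contained. This is an illustrative sketch (JSON parsing stands in for the expensive config load), not the production implementation.&lt;/P&gt;

```python
import json
import multiprocessing as mp

def taste(raw):
    """Runs in an isolated process: do all parsing and validation up
    front. A crash or exception here is contained to this process."""
    json.loads(raw)  # stand-in for the risky configuration load

def food_taster_ok(raw, timeout=10):
    """Return True only if the isolated 'food taster' process fully
    processed the candidate config and exited cleanly ('Config OK')."""
    p = mp.Process(target=taste, args=(raw,))
    p.start()
    p.join(timeout)               # synchronous: wait for a verdict
    if p.is_alive():              # a hung taster counts as a failure
        p.terminate()
        p.join()
        return False
    return p.exitcode == 0

if __name__ == "__main__":
    candidates = ['{"routes": ', '{"routes": {"/": "frontend"}}']
    active_config = None
    for raw in candidates:
        if food_taster_ok(raw):              # bad config never reaches...
            active_config = json.loads(raw)  # ...the active data plane
```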
&lt;P&gt;We expect this safeguard to reach production globally in the January 2026 timeframe. Introducing this component will also allow us to return closer to pre-incident levels of configuration propagation time while ensuring data plane safety.&lt;/P&gt;
&lt;H2&gt;Closing&lt;/H2&gt;
&lt;P&gt;This is the first in a series of planned blogs on Azure Front Door resiliency enhancements. We are continuously improving platform safety and reliability and will transparently share updates through this series. Upcoming posts will cover advancements in tenant isolation and improvements to recovery time objectives (RTO).&lt;/P&gt;
&lt;P&gt;We deeply value our customers’ trust in Azure Front Door. The October incidents reinforced how critical configuration resiliency is, and we are committed to exceeding industry expectations for safety, reliability, and transparency. By hardening our configuration pipeline, strengthening safety gates, and reinforcing isolation boundaries, we’re making Azure Front Door even more resilient so your applications can be too.&lt;/P&gt;</description>
      <pubDate>Fri, 19 Dec 2025 16:43:31 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-front-door-implementing-lessons-learned-following-october/ba-p/4479416</guid>
      <dc:creator>AbhishekTiwari</dc:creator>
      <dc:date>2025-12-19T16:43:31Z</dc:date>
    </item>
    <item>
      <title>Azure Networking 2025: Powering cloud innovation and AI at global scale</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-networking-2025-powering-cloud-innovation-and-ai-at-global/ba-p/4479390</link>
      <description>&lt;P&gt;In 2025, Azure’s networking platform proved itself as the invisible engine driving the cloud’s most transformative innovations. Consider the construction of Microsoft’s new Fairwater AI datacenter in Wisconsin – a 315-acre campus housing hundreds of thousands of GPUs. To operate as one giant AI supercomputer, Fairwater required a single flat, ultra-fast network interconnecting every GPU. Azure’s networking team delivered: the facility’s network fabric links GPUs at 800 Gbps speeds in a non-blocking architecture, enabling 10× the performance of the world’s fastest supercomputer. This feat showcases how fundamental networking is to cloud innovation. Whether it’s uniting massive AI clusters or connecting millions of everyday users, Azure’s globally distributed network is the foundation upon which new breakthroughs are built.&lt;/P&gt;
&lt;P&gt;In 2025, the surge of AI workloads, data-driven applications, and hybrid cloud adoption put unprecedented demands on this foundation. We responded with bold network investments and innovations. Each new networking feature delivered in 2025, from smarter routing to faster gateways, was not just a technical upgrade but an innovation enabling customers to achieve more. Below, we recap the year’s major releases across Azure Networking services and highlight how AI both drives and benefits from these advancements.&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;Unprecedented connectivity for a hybrid and AI era&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Hybrid connectivity at scale&lt;/STRONG&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt; Azure’s network enhancements in 2025 focused on making global and hybrid connectivity faster, simpler, and ready for the next wave of AI-driven traffic. For enterprises extending on-premises infrastructure to Azure, Azure &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ExpressRoute&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;private connectivity saw a major leap in capacity: Microsoft announced support for&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;400 Gbps&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ExpressRoute Direct ports&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;(available in 2026) to meet the needs of AI supercomputing and massive data volumes&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. These high-speed ports – which can be aggregated into multi-terabit links – ensure that even the largest enterprises or HPC clusters can transfer data to Azure with dedicated, low-latency links. 
In parallel, Azure&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;VPN Gateway&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;performance reached new highs, with&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;a generally available&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;upgrade that delivers up to&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;20 Gbps aggregate throughput&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;per gateway and&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;5 Gbps per individual&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;tunnel&lt;/STRONG&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&lt;STRONG&gt;.&lt;/STRONG&gt; This is a&amp;nbsp;&lt;/SPAN&gt;3× increase&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;over&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;previous&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;limits, enabling branch offices and remote sites to connect to Azure even more seamlessly without bandwidth bottlenecks. Together, the ExpressRoute and VPN improvements give customers a spectrum of high-performance options for hybrid networking – from offices and datacenters to the cloud – supporting scenarios like large-scale data migrations, resilient multi-site architectures, and hybrid AI processing.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Simplified global networking&lt;/STRONG&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;Azure&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Virtual WAN (vWAN)&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;continued to mature as the one-stop solution for managing global connectivity.&amp;nbsp;Virtual WAN introduced&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;forced tunneling&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;for Secure Virtual Hubs (now in preview), which allows organizations to route all Internet-bound traffic from branch offices or virtual networks back to a central hub for inspection&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. This capability simplifies the implementation of a “backhaul to hub” security model – for example, forcing branches to use a central&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;firewall&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;or security appliance – without complex user-defined routing.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;Empowering multicloud and NVA integration&lt;/STRONG&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;Azure recognizes that enterprise networks are diverse.&amp;nbsp;Azure&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Route Server&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt; improvements enhanced interoperability with customer equipment and third-party network virtual appliances (NVAs). Notably, Azure&amp;nbsp;Route Server now supports up to &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;500 virtual network connections&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;(spokes) per route server,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;a significant scale boost that enables larger hub-and-spoke topologies and simplified Border Gateway Protocol (BGP) route exchange even in &lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;very large&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;environments. This helps customers using SD-WAN appliances or custom firewalls in Azure to seamlessly learn routes from hundreds of&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;VNet&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;spokes –&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;maintaining&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt; central routing control without manual configuration. 
Additionally, Azure&amp;nbsp;Route Server introduced a preview of &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;hub routing preference&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;, giving admins the ability to influence BGP route&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;selection&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;(for example, preferring ExpressRoute over a VPN path, or vice versa)&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. This fine-grained control means hybrid networks can be tuned for&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;optimal&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;performance and cost.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
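&lt;P&gt;The idea behind such a routing preference - ranking candidate paths to the same prefix by an administrator-chosen path-type order before falling back to normal tie-breaks - can be sketched as follows. The path types and the AS-path tie-break are illustrative, not Azure’s actual BGP selection rules.&lt;/P&gt;

```python
def best_path(candidates, preference):
    """Pick the best path to a prefix.

    `candidates` is a list of (path_type, as_path_length) tuples;
    `preference` lists path types in preferred order. Preference wins
    first; AS-path length breaks ties within a type.
    """
    rank = {ptype: i for i, ptype in enumerate(preference)}
    return min(candidates, key=lambda c: (rank[c[0]], c[1]))

paths = [("vpn", 2), ("expressroute", 4), ("vpn", 1)]
# Prefer ExpressRoute even though its AS path is longer:
chosen = best_path(paths, preference=["expressroute", "vpn"])
```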
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;Resilience and reliability by design&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;Azure’s growth has been underpinned by making the network&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;“resilient by default.”&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;We shipped tools to help&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;validate&amp;nbsp;and improve network resiliency&lt;SPAN data-ccp-charstyle="Normal"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ExpressRoute Resiliency Insights&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;was released for&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;general availability – delivering an intelligent assessment of an enterprise’s ExpressRoute setup&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. This feature evaluates how well your ExpressRoute circuits and gateways are architected for high availability (for example, using dual circuits in diverse locations, zone-redundant gateways, etc.) and assigns a&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;resiliency index score&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;as a percentage&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. It will highlight suboptimal configurations – such as routes advertised on only one circuit, or a gateway that&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;isn’t&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;zone-redundant – and provide recommendations for improvement. 
Moreover, Resiliency Insights includes a&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;failover simulation tool&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;that can test circuit redundancy by mimicking failures, so you can verify that your connections will survive real-world incidents&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. By proactively&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;m&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;onitoring&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;and testing resilience, Azure is helping customers achieve “always-on” connectivity even in the face of fiber cuts, hardware faults, or other disruptions.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;Security, governance, and trust in the network&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;As enterprises entrust more core business to Azure, the platform’s networking services advanced on&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;security and governance&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;– helping customers achieve Zero Trust networks and high compliance with minimal complexity. Azure DNS now offers&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;DNS Security Policies with Threat Intelligence&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;feeds&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;(GA)&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. This capability allows organizations to protect their DNS queries from known malicious domains by&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;leveraging&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;continuously updated threat intel. For example, if a known phishing domain or C2 (command-and-control) hostname appears in DNS queries from your environment, Azure DNS can automatically block or redirect those requests. 
Because DNS is often the first line of detection for malware and phishing activities, this built-in filtering provides a powerful layer of defense&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;that’s&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;fully managed by Azure.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;It’s&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;essentially a&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;cloud-delivered DNS&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;firewall&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;using Microsoft’s vast threat intelligence – enabling all Azure customers to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;benefit&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;from enterprise-grade security without deploying&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;additional&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;appliances.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
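&lt;P&gt;The general technique behind a DNS firewall - checking each query name and its parent domains against a threat-intelligence blocklist before resolving - can be sketched as follows. The blocklist contents and the upstream resolver are stand-ins, not the Azure DNS implementation.&lt;/P&gt;

```python
# Illustrative threat-intel feed: domains known to host phishing or C2.
BLOCKLIST = {"evil-c2.example", "phish.example"}

def resolve(qname, upstream, blocklist=BLOCKLIST):
    """Block queries whose name or any parent domain is on the
    blocklist; otherwise forward to the upstream resolver."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in blocklist:   # checks suffixes too
            return {"qname": qname, "action": "blocked"}
    return {"qname": qname, "action": "resolved", "answer": upstream(qname)}

upstream = lambda name: "203.0.113.10"          # stand-in resolver
r1 = resolve("login.phish.example", upstream)   # blocked via parent domain
r2 = resolve("contoso.example", upstream)       # allowed through
```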
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Network traffic governance&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;was another focus. The introduction of&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;forced tunneling in Azure Virtual WAN hubs&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;(preview)&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;shared above&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;is a prime example&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;where&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;networking meets security compliance&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;Optimizing cloud-native and edge networks&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;We&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;previewed&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;DNS intelligent traffic control&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;features – such as filtering DNS queries to prevent data exfiltration and applying flexible recursion policies – which complement the DNS Security offering in safeguarding name resolution&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. Meanwhile, for load balancing across regions, Azure Traffic Manager’s behind-the-scenes upgrades (as noted earlier) improved reliability, and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;it’s&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;evolving to integrate with modern container-based apps and edge scenarios.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;AI-powered networking: Both enabling and enabled by AI&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;We are&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;infusing&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;AI into networking&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt; to make management and troubleshooting more intelligent. Networking functionality in&amp;nbsp;Azure&amp;nbsp;Copilot &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;accelerates tasks like never before:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;it outlines the&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;best practices&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;instantly and&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;troubleshooting that once required combing through docs and logs can be conversational.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;It effectively democratizes networking&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;expertise&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;, helping even smaller IT teams manage sophisticated networks by&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;leveraging&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;AI recommendations.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:1,&amp;quot;335551620&amp;quot;:1,&amp;quot;335559685&amp;quot;:0,&amp;quot;335559737&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:279}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="2"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;The future of cloud networking in an AI world&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;As we close out 2025, one message is clear:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;networking is strategic&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;. The network is no longer a static utility – it is the adaptive circulatory system of the cloud,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;determining&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;how far and fast customers can go.&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;By delivering higher speeds, greater reliability, tighter security, and easier management, Azure Networking has empowered businesses to connect&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;everything to anything, anywhere – securely and at scale&lt;SPAN data-ccp-charstyle="Normal"&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;These advances unlock new scenarios: global supply chains running in real-time over a trusted network, multi-player AR/VR and gaming experiences delivered without lag, and AI models trained across continents.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;Looking ahead,&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;AI-powered networking&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Normal"&gt;&amp;nbsp;will become the norm. The convergence of AI and network tech means we will see more self-optimizing networks that can heal, defend, and tune themselves with minimal human intervention.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 18 Dec 2025 23:20:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-networking-2025-powering-cloud-innovation-and-ai-at-global/ba-p/4479390</guid>
      <dc:creator>Sudha_Mahajan</dc:creator>
      <dc:date>2025-12-18T23:20:29Z</dc:date>
    </item>
    <item>
      <title>Network Detection and Response (NDR) in Financial Services</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/network-detection-and-response-ndr-in-financial-services/ba-p/4472515</link>
      <description>&lt;P&gt;Organizations in the Financial Services industry handling sensitive account holder information must comply with the &lt;STRONG&gt;&lt;A class="lia-external-url" href="https://www.pcisecuritystandards.org/" target="_blank" rel="noopener"&gt;Payment Card Industry&lt;/A&gt; Data Security Standard (PCI DSS)&lt;/STRONG&gt;. The latest version, &lt;A class="lia-external-url" href="https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0_1.pdf" target="_blank" rel="noopener"&gt;PCI DSS v4.0.1 &lt;/A&gt;&amp;nbsp;of June 2024, reinforces the requirements for network security monitoring.&lt;/P&gt;
&lt;P&gt;Traditional network security tools, such as firewalls and Intrusion Detection and Prevention Systems (IDPS), struggle to meet these requirements because they either lack deep visibility or generate too many false positives. This is where&amp;nbsp;&lt;STRONG&gt;Network Detection and Response (NDR)&lt;/STRONG&gt; comes in. NDR solutions look at the network traffic within the Cardholder Data Environment (CDE) and use advanced methods (behavioral analytics, machine learning, threat intel) to detect anomalies or attacks in real time and facilitate quick responses.&lt;/P&gt;
&lt;P&gt;This post explains how NDR supports PCI DSS v4.0.1 compliance, with a focus on deployments in Azure. We will map NDR capabilities to key PCI requirements, describe how Azure’s native tools (Azure Virtual Network TAP, VNET Flow Logs, Traffic Analytics) enable an NDR solution by capturing network data, and discuss third-party NDR tools that analyze this data for threats.&lt;BR /&gt;&lt;BR /&gt;We will also evaluate Microsoft Sentinel’s role as a partial NDR solution, and highlight how Microsoft Defender for Cloud contributes to PCI compliance.&lt;/P&gt;
&lt;H1&gt;The role of NDR in PCI DSS Compliance&lt;/H1&gt;
&lt;P&gt;PCI DSS v4.0.1 is organized into 12 main requirement areas. NDR technology primarily supports the control objectives in Requirement 10 (“Log and Monitor All Access to System Components and Cardholder Data”) and Requirement 11 (“Test Security of Systems and Networks Regularly”), while also aiding in demonstrating compliance with Requirement 4 ("Protect Cardholder Data with Strong Cryptography ...") and Requirement 12 ("Support Information Security with Organizational Policies and Programs").&lt;/P&gt;
&lt;P&gt;Here is how NDR helps build compliance with these controls:&lt;/P&gt;
&lt;H4&gt;Logging &amp;amp; Monitoring&lt;/H4&gt;
&lt;P&gt;Organizations are required to&amp;nbsp;&lt;EM&gt;“log and monitor all access to system components and cardholder data”&lt;/EM&gt;. Requirement 10.4.1 calls for automated mechanisms to perform log reviews and detect anomalies at least daily. An NDR solution addresses this by automatically analyzing network traffic logs and generating alerts for suspicious behavior. Every connection or data transfer involving cardholder systems is continuously scrutinized.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, if a database containing card data suddenly starts sending large amounts of data to an unfamiliar server, NDR will log that event and flag it for investigation immediately. This satisfies the intent of Requirement 10 by ensuring that network events are not only recorded but also under active surveillance at all times. NDR essentially serves as an automated network log reviewer, catching things a manual review might miss. This helps meet Requirement 10’s mandate for &lt;EM&gt;timely&lt;/EM&gt; review so that "... incidents can be quickly identified and proactively addressed."&lt;/P&gt;
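&lt;P&gt;As an illustration of automated log review, the sketch below flags hosts whose outbound byte counts deviate sharply from their historical baseline. The z-score heuristic, thresholds, and data are stand-ins for the far richer behavioral analytics a real NDR product applies.&lt;/P&gt;

```python
import statistics

def flag_exfil(flows, baseline, z_threshold=3.0):
    """Flag hosts whose outbound bytes are far above their baseline.

    `flows` maps host -> bytes sent in the current window; `baseline`
    maps host -> byte counts from past windows.
    """
    alerts = []
    for host, sent in flows.items():
        history = baseline.get(host, [])
        if len(history) < 2:
            continue                       # not enough data to judge
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0
        if (sent - mean) / stdev > z_threshold:
            alerts.append(host)
    return alerts

baseline = {"card-db": [100, 120, 110, 90], "web": [500, 550, 520, 540]}
flows = {"card-db": 10_000, "web": 560}    # card-db suddenly sends ~100x
alerts = flag_exfil(flows, baseline)
```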
&lt;H4&gt;&lt;SPAN class="lia-text-color-21"&gt;Intrusion Detection&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;Requirement 11.5.1 requires the use of intrusion-detection and/or intrusion-prevention systems (IDS/IPS) at the perimeter of, and at critical points within, the CDE. Additionally, 11.5.1.1 requires service providers to employ IDS to detect covert communication attempts, such as malware trying to reach a command-and-control server.&lt;BR /&gt;&lt;BR /&gt;NDR solutions fulfill the role of an IDS by continuously inspecting network traffic for attack signatures and unusual patterns. But NDR goes beyond a legacy IDS: instead of only matching known signatures, it also uses behavior analysis to catch “zero-day” or insider threats (for example, an NDR might detect lateral movement within the network based on abnormal access patterns, even if no signature exists for that behavior). All traffic inbound to, outbound from, and within the CDE is watched, and any suspicious activity generates an alert.&lt;BR /&gt;&lt;BR /&gt;Showing that an NDR solution is in place will satisfy auditors that “Intrusion-detection and/or intrusion prevention techniques are used to detect and/or prevent intrusions into the network”. The high fidelity of NDR alerts (versus older IDS with many false positives) also means the organization is more likely to respond to real incidents – aligning with PCI’s push for effective, risk-based security.&lt;/P&gt;
&lt;H4&gt;&lt;SPAN class="lia-text-color-21"&gt;Network Segmentation and Scope Reduction&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;While not a direct requirement, network segmentation of the Cardholder Data Environment is encouraged in the guidance to Requirement 12.5 as a way to reduce scope - i.e. the portion of the environment that is subject to compliance with PCI DSS.&lt;BR /&gt;&lt;BR /&gt;If segmentation is used, it must be monitored. NDR assists here by monitoring the network boundaries of the CDE. It can verify that only allowed communications occur across segments. For instance, if the CDE is isolated such that only a particular jump server should access it, and a developer’s workstation somehow tries to communicate directly with a CDE database, NDR would spot that anomaly. That alert would indicate a segmentation failure or misconfiguration that needs fixing. This continuous oversight helps prove that segmentation is effective (PCI assessors may ask for evidence that the segmentation was tested; NDR alerts or logs showing no unauthorized access attempts over months is strong evidence).&lt;/P&gt;
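The jump-server scenario above amounts to an allowlist check on flow records. The sketch below uses hypothetical addresses for the CDE subnet and the jump server; a real deployment would derive these from segmentation policy rather than hard-coded values:

```python
import ipaddress

# Hypothetical addresses for illustration only.
CDE_SUBNET = ipaddress.ip_network("10.10.0.0/24")          # the CDE
ALLOWED_SOURCES = {ipaddress.ip_address("10.0.0.5")}       # the jump server

def segmentation_violations(flows):
    """Return flows that cross into the CDE from a non-allowed source.

    `flows` is an iterable of (src_ip, dst_ip) string pairs, e.g. as
    extracted from VNET Flow Log records."""
    violations = []
    for src, dst in flows:
        src_a, dst_a = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        # Only inspect traffic entering the CDE from outside it...
        if dst_a in CDE_SUBNET and src_a not in CDE_SUBNET:
            # ...and flag any source that is not the jump server.
            if src_a not in ALLOWED_SOURCES:
                violations.append((src, dst))
    return violations
```

A developer workstation (say `10.0.1.77`) reaching a CDE database would be returned as a violation, while jump-server access and CDE-internal traffic pass silently.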
&lt;H4&gt;Secure Network Traffic and Encryption&lt;/H4&gt;
&lt;P&gt;Requirement 4.2 requires that cardholder data sent over open, public networks is encrypted with strong cryptography. NDR tools can help enforce this by detecting unencrypted sensitive traffic or usage of weak protocols. Many NDR solutions will recognize when Primary Account Numbers (PANs) or Sensitive Authentication Data (SAD) appear in plaintext in network traffic and raise an alert. They also often track the SSL/TLS versions and cipher suites used in connections. For example, an NDR can alert if a server in the CDE is accepting TLS 1.0 or if any data is transmitted without encryption where encryption is expected.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/network-watcher/vnet-flow-logs-overview?tabs=Americas" target="_blank" rel="noopener"&gt;Azure VNET Flow Logs&lt;/A&gt; provide an “encryption flag” for flows when &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-encryption-overview" target="_blank" rel="noopener"&gt;VNET Encryption&lt;/A&gt; is used, which NDR or analytics can use to quickly identify non-encrypted channels. While the primary responsibility for encryption lies in configuration (enabling encryption at the application (TLS) and infrastructure (VNET Encryption) layers), NDR provides verification. It detects PAN or SAD sent in the clear, and thus supports compliance with Requirement 4. If the NDR never or rarely alerts on cleartext, that’s evidence encryption is consistently applied; if it does alert, it allows quick remediation.&lt;/P&gt;
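Filtering flow records on that encryption flag is straightforward. The sketch below assumes the convention in the VNET Flow Logs schema where "X" marks an encrypted flow and "NX…" values mark unencrypted ones; verify the exact values against the current schema documentation before relying on this:

```python
def unencrypted_flows(flow_tuples):
    """Return flows whose encryption field indicates cleartext.

    Assumes the VNET Flow Log convention of 'X' for encrypted flows
    and 'NX*' values (e.g. 'NX', 'NX_NOT_SUPPORTED') otherwise;
    check this against the current published schema.
    Each input is a dict with at least 'src', 'dst', 'encryption'."""
    return [f for f in flow_tuples if f["encryption"] != "X"]
```

Running this over a day's logs gives a quick cleartext-channel report for Requirement 4 evidence gathering.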
&lt;H4&gt;Incident Detection and Response&lt;/H4&gt;
&lt;P&gt;An incident response plan and processes for reacting to security events must be in place per Requirement 12.10. NDR significantly enhances an organization’s ability to detect and respond to incidents in a timely manner. By providing real-time alerts with rich context (like packet captures or detailed flow info), NDR ensures that when an intrusion or suspicious event happens, the security team is immediately notified with the information needed to act. With NDR integrated with alerting systems, companies can demonstrate that network alerts are automatically generated and investigated as part of their incident response program.&lt;BR /&gt;&lt;BR /&gt;For example, if NDR generates an alert about malware beaconing from a server, the analyst can respond via playbooks (possibly automated, as we’ll discuss with Sentinel) to isolate that server, and later documentation will show the alert and response timeline. This satisfies PCI’s expectation that you not only have monitoring (Req. 10/11) but also act on it swiftly (Req. 12.10.5, which expects alerts to trigger the incident response process). Furthermore, the forensic data from NDR (like packet logs) will help in the investigation phase of incident response – determining what data might have been accessed, which systems were affected, and so on – which is crucial for PCI breach reporting.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;NDR is a key sensor feeding the incident response process, and having it in place with documented procedures closes the loop on several PCI requirements (monitor, detect, respond).&lt;/P&gt;
&lt;H1&gt;Azure Native Tools for Enabling NDR&lt;/H1&gt;
&lt;P&gt;In Azure, implementing NDR starts with capturing the right data. Azure provides native tools to mirror network traffic and collect flow information, which NDR systems (or Azure’s own analytics) can then analyze. Key Azure-native components are Azure Virtual Network TAP, Virtual Network (VNET) Flow Logs, and Traffic Analytics.&lt;/P&gt;
&lt;H4&gt;Azure Virtual Network TAP&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview" target="_blank" rel="noopener"&gt;Virtual Network TAP (Terminal Access Point) &lt;/A&gt;&amp;nbsp;(VTAP) copies network traffic from source Virtual Machines to a collector or traffic analytics tool, running as a Network Virtual Appliance (NVA). &lt;STRONG&gt;VTAP creates a full copy of the traffic, including packet payload content.&lt;/STRONG&gt; Traffic collectors and analytics tools are &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview#virtual-network-tap-partner-solutions" target="_blank" rel="noopener"&gt;3rd party partner products&lt;/A&gt;, amongst which are the major NDR solutions. VTAP is an agentless, cloud-native traffic tap at the Azure network infrastructure level. It is entirely out-of-band; it has no impact on the source VM's network performance and the source VM is unaware of the tap. Tapped traffic is VXLAN-encapsulated and delivered to the collector NVA, either in the same or a peered VNET as the source VMs.&lt;/P&gt;
&lt;P&gt;VTAP is crucial to building a PCI DSS compliant CDE in Azure: full visibility of all network traffic enables implementation of the IDS functionality specified in Requirement 11.5. All traffic involving cardholder data systems can be monitored: not only traffic to/from the internet, but also East-West traffic between VMs. By deploying VTAP on the subnets that make up the CDE, anything suspicious on those networks is seen. This helps meet the requirements of monitoring " ... at the perimeter of the CDE" and “ ... at critical points inside the CDE”, without needing agents on each VM.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Azure VTAP provides the raw network data pipeline needed for NDR. In a PCI audit, citing the use of VTAP plus an NDR appliance as evidence that “all network traffic is being captured and analyzed” is a strong compliance position.&lt;/P&gt;
&lt;H4&gt;Virtual Network Flow Logs&amp;nbsp;&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/network-watcher/vnet-flow-logs-overview?tabs=Americas" target="_blank" rel="noopener"&gt;Virtual Network Flow Logs&lt;/A&gt; (VNET Flow Logs) capture &lt;STRONG&gt;IP traffic flow metadata&lt;/STRONG&gt; of traffic in a virtual network, which includes source and destination IP addresses, Layer 4 (transport) protocol and port numbers, flow direction and state, and encryption status. Flow Logs also show whether the flow was allowed or denied by a Network Security Group (NSG) or a Security Admin Rule. VNET Flow Logs do not capture traffic content - they record who talks to whom, in detail, but not what is said. Log records are written to a storage account in JSON format, and from there can be read for analysis by Azure Traffic Analytics and third-party analytics tools such as Security Information and Event Management (SIEM) and NDR solutions.&amp;nbsp;&lt;/P&gt;
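For a sense of what these records contain, the sketch below parses a single flow tuple. It assumes the comma-separated, 13-field tuple layout described in the VNET Flow Logs documentation (timestamp, IPs, ports, protocol, direction, state, encryption, then packet/byte counters); field order should be verified against the schema version in use:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    ts_ms: int           # Unix epoch milliseconds
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str        # "TCP" or "UDP"
    direction: str       # "I" (inbound) or "O" (outbound)
    state: str           # B=begin, C=continuing, E=end, D=denied
    encryption: str      # e.g. "X" (encrypted) or "NX*" values
    bytes_out: int
    bytes_in: int

PROTOCOLS = {"6": "TCP", "17": "UDP"}

def parse_flow_tuple(tuple_str: str) -> Flow:
    """Parse one comma-separated flow tuple from a VNET Flow Log record.

    Assumes the published 13-field layout; empty packet/byte counters
    (as on denied flows) default to 0."""
    f = tuple_str.split(",")
    to_int = lambda s: int(s) if s else 0
    return Flow(
        ts_ms=int(f[0]), src_ip=f[1], dst_ip=f[2],
        src_port=int(f[3]), dst_port=int(f[4]),
        protocol=PROTOCOLS.get(f[5], f[5]),
        direction=f[6], state=f[7], encryption=f[8],
        bytes_out=to_int(f[10]), bytes_in=to_int(f[12]),
    )
```

Once tuples are parsed into structured records like this, the "who talked to whom, when, and was it allowed" questions become simple filters.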
&lt;P&gt;From a PCI perspective, VNET Flow Logs serve as a comprehensive audit trail of network activity. They show every connection to and from systems in the CDE, including source, destination, and whether it was permitted or blocked. This supports Requirement 10 (Log and Monitor All Access) – retain these logs for the required 12 months as evidence that all network access attempts are logged.&lt;/P&gt;
&lt;P&gt;During daily monitoring, these logs can be queried to find anomalies through Traffic Analytics or a SIEM. For example, a regular query would be: “show any flows from CDE subnet to external IPs not on the whitelist” – any results would indicate a potential policy violation or compromise. If NDR is the heart of real-time detection, flow logs are the record-keeper that ensures nothing slips by unrecorded. Even if an attacker stays under the radar of detection, flow logs later allow forensic analysis to trace their actions. Azure’s VNET flow logs being ubiquitous and simple to enable means organizations have little reason not to log all network traffic. It is a baseline best practice that also satisfies the PCI DSS logging mandate. Enabling flow logs and keeping them in Azure Storage with access controls fulfills PCI requirements around log integrity and access control. They essentially answer, “If an auditor asks to see who communicated with the DB server on June 1 at 3PM, can you show that?” – with flow logs, yes, you can.&lt;/P&gt;
&lt;H4&gt;Traffic Analytics&amp;nbsp;&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/network-watcher/traffic-analytics?tabs=Americas" target="_blank" rel="noopener"&gt;Traffic Analytics&amp;nbsp;&lt;/A&gt; is an Azure service that &lt;STRONG&gt;processes Flow Logs to provide insights and visualizations&lt;/STRONG&gt;. Once VNET Flow Logs are enabled, Traffic Analytics can read the raw logs at 10-minute or hourly intervals and process them into insightful information. This information is stored in an Azure Log Analytics workspace for further evaluation. Traffic Analytics includes ready-made dashboards in the Azure Portal, and its consolidated flow information is available for evaluation through Kusto queries and tools such as Microsoft Sentinel and third-party SIEMs.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Traffic Analytics will aggregate flows and show things like top talkers (top source/destination IPs), traffic distribution (ports, protocols), and importantly, flagged security issues. For example, it can identify if a VM has an open port to the internet that is unusual, or if there is traffic to an IP address that Microsoft Threat Intelligence marks as malicious (these show up as “MaliciousFlow” in the output). It can also highlight sudden changes in traffic volume or a high number of denied flows (potential port scan or attack attempts).&lt;/P&gt;
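Conceptually, the “MaliciousFlow” tagging boils down to matching flow endpoints against a threat-intelligence list. The sketch below imitates that idea with a hard-coded, hypothetical IP set (the real service uses Microsoft Threat Intelligence feeds, not a static list):

```python
# Hypothetical threat-intel set for illustration; real feeds come from
# Microsoft Threat Intelligence or a threat-intel platform.
MALICIOUS_IPS = {"203.0.113.50", "198.51.100.9"}

def tag_malicious_flows(flows):
    """Imitates Traffic Analytics' 'MaliciousFlow' tagging: mark any
    flow whose endpoint appears on a threat-intelligence list.

    `flows` is an iterable of dicts with 'src_ip' and 'dst_ip' keys."""
    return [
        {**f, "tag": "MaliciousFlow"}
        for f in flows
        if f["dst_ip"] in MALICIOUS_IPS or f["src_ip"] in MALICIOUS_IPS
    ]
```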
&lt;P&gt;Think of Traffic Analytics as a built-in basic NDR-lite or network SIEM for Azure: it won’t catch advanced threats, but it will definitely surface misconfigurations and obvious red flags. From a compliance standpoint, Traffic Analytics is useful for demonstrating proactive monitoring. Instead of showing an auditor raw JSON logs, demonstrate Traffic Analytics: e.g., “In the last week, these were the only external connections, all were expected, and no malicious IPs were contacted.” It helps satisfy the requirement that logs and network events are reviewed daily by providing an easy-to-interpret summary that can be checked at a glance. If something is noted (like an unexpected open port), it can be remediated as part of security operations, showing the auditor that traffic is not only logged but also analyzed and acted upon.&lt;/P&gt;
&lt;P&gt;It is important to note Traffic Analytics is &lt;STRONG&gt;not a full threat detection system&lt;/STRONG&gt; – it is rule-based and limited to what can be inferred from flow records (Layer 3/4 info). It does not inspect payloads (no Layer 7 analysis) and it may not catch subtler anomalies (e.g., data exfiltration hiding in allowed HTTPS traffic might not trigger any obvious threshold in flow stats). Therefore, while it is a great native feature and certainly helps with compliance reporting, for robust NDR one would use Traffic Analytics as a supplement to, not a replacement for, an advanced NDR platform. In Azure, many customers use Traffic Analytics for general network hygiene monitoring and feed its findings into Sentinel or a SIEM for follow-up.&lt;/P&gt;
&lt;P&gt;In summary, Azure’s native network monitoring tools lay the groundwork for NDR by &lt;STRONG&gt;capturing the necessary data:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;Azure VTAP provides full-fidelity packet capture (the raw material for deep detection).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;VNET Flow Logs provide broad coverage of who talked to whom and when (excellent for audit and pattern analysis).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Traffic Analytics provides immediate insights from those logs (great for compliance checks and basic anomaly spotting).&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The next step is feeding this data to powerful analytics engines to actually perform the “detection and response” – that’s where third-party NDR or advanced Azure services come into play.&lt;/P&gt;
&lt;H1&gt;Advanced Analysis with Third-Party NDR Solutions&lt;/H1&gt;
&lt;P&gt;Azure’s native capabilities collect the raw data, but &lt;STRONG&gt;effective threat detection&lt;/STRONG&gt; requires specialized analytics beyond what Azure provides. This is where third-party NDR solutions are indispensable. These solutions ingest the packet or flow data and use their own engines to detect threats like intrusions, malware traffic, or policy violations in real time.&lt;/P&gt;
&lt;P&gt;Third-party NDR platforms bring mature, cutting-edge detection algorithms that have been refined on large datasets and numerous environments. They often use a combination of machine learning (for anomaly detection) and signature/threat intelligence (for known threat detection). Azure’s Traffic Analytics or basic Defender for Cloud alerts might report, for instance, that a VM made an outbound connection on an unusual port, but a third-party NDR could dig deeper and say “that connection contained data patterns consistent with credit card numbers in clear text” or “this series of packets matches the behavior of the Cobalt Strike beacon malware.”&lt;/P&gt;
&lt;P&gt;NDR solutions do deep packet inspection (DPI) and behavioral analysis to catch subtle threats and minimize false positives. For organizations in Financial Services, targeted by &lt;A class="lia-external-url" href="https://en.wikipedia.org/wiki/Advanced_persistent_threat" target="_blank" rel="noopener"&gt;Advanced Persistent Threats&lt;/A&gt; and sophisticated attackers, this level of insight is crucial. It is also necessary for meeting the spirit of PCI’s IDS requirement – a smarter IDS means fewer missed intrusions. Many third-party NDRs also come with features like user/device identification, threat chain visualization, and built-in compliance reporting specific to PCI or other standards.&lt;/P&gt;
&lt;P&gt;Microsoft has worked with numerous security vendors to ensure their NDR solutions work seamlessly in Azure. The &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-tap-overview#virtual-network-tap-partner-solutions" target="_blank" rel="noopener"&gt;Virtual Network TAP partner list&lt;/A&gt; includes vendors in two broad categories: &lt;STRONG&gt;network packet brokers&lt;/STRONG&gt; and &lt;STRONG&gt;security analytics/NDR solutions&lt;/STRONG&gt;.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Network Packet Brokers&lt;/STRONG&gt; (e.g., Gigamon, Keysight)&lt;/EM&gt;: These tools focus on aggregating, filtering, and distributing the tapped traffic. Their strength is handling large volumes of data and directing it efficiently to multiple analysis tools. For instance, Gigamon’s GigaVUE for Azure can take the VTAP stream, filter out irrelevant traffic, and feed it to both an NDR and a performance monitoring tool simultaneously. The advantage is scalability and flexibility; the limitation is that packet brokers themselves typically don’t do deep threat analysis – they are complementary infrastructure.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Security Analytics / NDR Platforms&lt;/STRONG&gt; (e.g., Darktrace, Vectra, ExtraHop, Corelight, Fortinet, Netscout, Trend Micro, Arista, etc.)&lt;/EM&gt;: These are the actual brains performing threat detection. Each has its strengths:&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;AI-Driven NDR&lt;/STRONG&gt;: Solutions like &lt;A class="lia-external-url" href="https://www.darktrace.com/products/network" target="_blank" rel="noopener"&gt;&lt;EM&gt;Darktrace&lt;/EM&gt;&lt;/A&gt; and &lt;A class="lia-external-url" href="https://www.vectra.ai/" target="_blank" rel="noopener"&gt;&lt;EM&gt;Vectra AI&lt;/EM&gt;&lt;/A&gt; emphasize machine learning to establish a baseline of normal network behavior and then detect anomalies. Darktrace, for example, uses unsupervised AI to identify subtle deviations that could indicate a threat (like a device that suddenly starts connecting to a new domain at odd hours). These are good at catching novel or insider threats. They often come with insightful visualization of network patterns. A possible limitation is they can sometimes produce alerts that require expert interpretation (why did the AI flag this?), but vendors have improved in providing explainability.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;Protocol/Behavioral NDR&lt;/STRONG&gt;: &lt;A class="lia-external-url" href="https://www.extrahop.com/resources/datasheets/extrahop-revealx" target="_blank" rel="noopener"&gt;&lt;EM&gt;ExtraHop Reveal(x)&lt;/EM&gt;&lt;/A&gt;, &lt;A class="lia-external-url" href="https://corelight.com/products/zeek" target="_blank" rel="noopener"&gt;&lt;EM&gt;Corelight (Zeek)&lt;/EM&gt;&lt;/A&gt; and &lt;A class="lia-external-url" href="https://www.netscout.com/solutions/omnis-security" target="_blank" rel="noopener"&gt;&lt;EM&gt;Netscout Omnis&lt;/EM&gt;&lt;/A&gt; focus on deep packet and protocol analysis. ExtraHop decodes dozens of protocols (DNS, database protocols, cloud service APIs) in real-time and can detect specific issues like database exfiltration or use of deprecated TLS versions. It was even audited to exceed PCI IDS requirements by using behavior-based detections. Corelight uses the open-source Zeek (Bro) engine to log detailed metadata about traffic, which can be incredibly rich for investigation and custom detection scripts. Strength: very detailed, low false positives when tuned; limitation: may require more tuning or skilled users to get the most out of raw data (Corelight, for example, gives you great data but you might still need a SIEM or Splunk queries to raise the alerts).&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;Integrated Ecosystem Solutions&lt;/STRONG&gt;: &lt;EM&gt;Fortinet&lt;/EM&gt; and &lt;EM&gt;Arista&lt;/EM&gt; are examples where NDR is part of a broader security ecosystem. Fortinet’s &lt;A class="lia-external-url" href="https://www.fortinet.com/products/network-detection-and-response" target="_blank" rel="noopener"&gt;FortiNDR&lt;/A&gt; is attractive if you already use Fortinet, as the logs and management tie into FortiAnalyzer and you can leverage FortiGuard threat intel. A FortiGate VM can receive mirrored traffic and apply its IPS/IDS signatures and even ML-based detections. Arista Networks (which acquired Awake Security) offers &lt;A class="lia-external-url" href="https://www.arista.com/en/products/network-detection-and-response/" target="_blank" rel="noopener"&gt;Arista NDR&lt;/A&gt; that’s known for entity-centric threat hunting – it builds profiles of devices on the network and can identify rogue or compromised systems by their traffic patterns. These integrated solutions often pair well with their own hardware or cloud frameworks (Fortinet with its firewalls, Arista with its switches), and can sometimes take automated actions (e.g., instruct a FortiGate to block an IP upon detection).&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;Managed NDR Services&lt;/STRONG&gt;: &lt;A class="lia-external-url" href="https://www.esentire.com/how-we-do-it/signals/mdr-for-network" target="_blank" rel="noopener"&gt;&lt;EM&gt;eSentire MDR&lt;/EM&gt;&lt;/A&gt; for example provides technology plus a 24/7 human SOC. They deploy sensors that leverage VTAP (they are a VTAP partner) and their analysts review and respond to every alert. The strength is that you get expert eyes on everything (great for meeting PCI’s requirement that alerts are promptly addressed), and the weakness is relying on a third-party – though for many, outsourcing this is prudent.&lt;/P&gt;
&lt;P&gt;In practice, an organization in the Financial Services Industry will choose a combination of these based on the specifics of their environment, their compliance needs and existing tools and capabilities. Some might use a packet broker and an NDR together (e.g., Gigamon to optimize traffic flow and ExtraHop to analyze it). Others might use an all-in-one virtual appliance from a vendor like Vectra that directly handles ingestion and analysis.&lt;/P&gt;
&lt;H1&gt;Sentinel: Complementary to NDR&amp;nbsp;&lt;/H1&gt;
&lt;P&gt;Microsoft Sentinel is Azure’s native &lt;STRONG&gt;Security Information and Event Management (SIEM)&lt;/STRONG&gt; and orchestration platform. It aggregates logs from many sources and runs analytics on them. It is a powerful tool for security monitoring, but it is not specialized for network traffic analysis in the way NDR solutions are.&lt;/P&gt;
&lt;P&gt;Let's look at how Sentinel factors into the NDR/compliance equation:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Log Centralization and Correlation:&lt;/STRONG&gt; Sentinel ingests Azure VNET Flow Logs and Azure Firewall and third-party firewall logs, as well as alerts from NDR platforms, and any other relevant data (event logs from servers, Azure AD logs, etc.), and aggregates and correlates these events. This is very valuable in daily operations and in investigations - it suppresses the noise, surfacing meaningful, actionable events to security staff. For PCI DSS compliance, having a central SIEM like Sentinel helps satisfy the requirement that &lt;STRONG style="color: rgb(30, 30, 30);"&gt;logged events are aggregated and reviewed&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Built-in Analytics for Network Events:&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; Sentinel provides default analytics rules and allows custom rule creation using Kusto Query Language (KQL). For network monitoring, one could create rules such as:&lt;/SPAN&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; “Alert if more than 50 denied flows hit a CDE server in 5 minutes” (indicating a possible attack).&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; “Alert if any flow originates from an IP known in Threat Intelligence to be malicious” (Sentinel integrates threat intel feeds).&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;o&amp;nbsp;&amp;nbsp; “Alert if a normally internal-only VM initiates outbound traffic” (indicating a potentially compromised server).&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;These can catch some intrusion attempts. However, setting up robust network anomaly detection in Sentinel requires effort and expertise. Out-of-the-box, Sentinel does not come with a comprehensive library of network threat detection rules – it might have a few (like detecting port scan patterns or spikes in traffic) but not the depth of an NDR product.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;No Deep Packet Inspection:&lt;/STRONG&gt; Sentinel &lt;STRONG&gt;cannot analyze raw packet payloads&lt;/STRONG&gt; or protocol details beyond what is in logs. It relies on sources like flow logs which only have IP/port info, or on other products (like an NDR or IDS) to generate an alert that Sentinel then ingests. This is a fundamental limitation – for example, Sentinel by itself would not be able to detect that an SQL query contained a suspicious UNION SELECT (something an NDR inspecting the SQL protocol might catch). Therefore, Sentinel alone, without something feeding it detailed alerts, would likely miss many attack techniques that do not manifest in log data.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Alert Fatigue and Tuning:&lt;/STRONG&gt; If one tried to use Sentinel as the primary IDS by writing custom rules on flow logs, one might end up with a lot of false positives or noise that needs tuning. NDR vendors invest heavily in fine-tuning detections to be as accurate as possible in network context (for example, distinguishing a legitimate network scan by a vulnerability management tool from a malicious scan by an attacker). With Sentinel, that tuning burden falls on the security team. While Sentinel’s analytics can be quite sophisticated, practically, in-house development of NDR logic in Sentinel is reinventing the wheel.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Compliance Sufficiency:&lt;/STRONG&gt; Could Sentinel alone satisfy PCI DSS requirements for IDS? In theory, if configured to ingest all relevant logs and set up with analytics to alert on suspicious network events, Sentinel might convince an assessor. For example, using Azure Firewall or a third-party firewall that outputs logs to Sentinel, and Sentinel alerting on those logs (like on intrusion signatures those firewalls catch), might tick the box. However,&amp;nbsp;&lt;STRONG&gt;most auditors expect a dedicated IDS/IPS or NDR technology&lt;/STRONG&gt; rather than a custom SIEM query solution. The PCI DSS 4.0.1 guidance explicitly talks about IDS/IPS having up-to-date signatures or detection capabilities for common threats – Sentinel by itself doesn’t maintain a library of network attack signatures; that’s not its function. Moreover, Requirement 11.5.1 basically assumes a distinct IDS/IPS tool. Sentinel would likely be viewed as &lt;STRONG&gt;supplemental&lt;/STRONG&gt; – great for aggregating alerts, but not the generator of those alerts.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Incident Response Automation:&lt;/STRONG&gt; Sentinel &lt;STRONG&gt;is extremely valuable&lt;/STRONG&gt; in orchestrating responses. It can trigger playbooks (with Logic Apps) based on alerts. If an NDR alert comes in (or a custom Sentinel alert triggers), Sentinel can automate actions: isolate a VM by applying a new NSG, disable a user account in Azure AD, or send notifications to the team. Sentinel can log the whole process for audit. Having such automation shows you not only detect, but &lt;EM&gt;respond swiftly and consistently&lt;/EM&gt; – aligning with PCI’s incident response testing requirements. Sentinel also retains incident history, which helps in the PCI requirement for reviewing security incidents and responses as part of annual processes.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In conclusion, Sentinel is best seen as &lt;STRONG&gt;complementary to NDR&lt;/STRONG&gt; rather than as a complete NDR solution in itself.&lt;/P&gt;
&lt;P&gt;It is excellent in terms of log management and initiating responses (covering aspects of Requirements 10 and 12), but on its own it doesn’t fulfill the technical depth of Requirement 11’s intrusion detection. It’s best thought of as the “nerve center” that an NDR (the “sensory organ”) feeds into. In an ideal Azure PCI deployment, you would use Sentinel alongside an NDR: the NDR detects the nuanced network threats and sends alerts to Sentinel; Sentinel then correlates those with other info and kicks off response actions.&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;Microsoft Defender for Cloud and PCI DSS Compliance&lt;/H1&gt;
&lt;P&gt;Defender for Cloud is Azure’s &lt;STRONG&gt;Cloud Security Posture Management (CSPM)&lt;/STRONG&gt; and &lt;STRONG&gt;Cloud Workload Protection Platform (CWPP)&lt;/STRONG&gt;. It continuously assesses your Azure (and multi-cloud) resources against security best practices and provides threat protection via various “Defender” plans (for VMs, databases, storage, etc.). When it comes to PCI DSS compliance, Defender for Cloud is a useful service because it can&amp;nbsp;&lt;STRONG&gt;map your Azure environment to PCI requirements and help automate compliance checking&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Here’s how Defender for Cloud contributes:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Regulatory Compliance Dashboard&lt;/STRONG&gt; – Defender for Cloud has a built-in compliance dashboard where you can enable the PCI DSS v4.0.1 standard for your Azure environment, which can span multiple subscriptions. It shows a control-by-control assessment of compliance, based on Azure Policy and Defender scans. For example, it will check that VMs have disk encryption enabled, that Key Vaults have soft delete enabled, that network watchers are enabled, etc., mapping to various PCI controls. It won’t automatically check every single PCI requirement (some require manual processes), but it covers those that can be programmatically assessed. This is extremely helpful for preparing for an audit – you get a &lt;STRONG&gt;compliance score&lt;/STRONG&gt;, see where gaps are, then remediate them. The dashboard essentially translates Azure’s security state into PCI language, saving a lot of manual effort. A practical example: for network-specific controls, it might check that NSGs are present on subnets, or that flow logging is enabled (to meet monitoring requirements).&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Continuous Configuration Monitoring&lt;/STRONG&gt; – Many PCI requirements are about having proper configurations (e.g., secure configurations for systems, firewalls in place, no default passwords). Defender for Cloud continuously monitors Azure resources and generates &lt;STRONG&gt;security recommendations&lt;/STRONG&gt; when something deviates from best practice (many of which align with PCI controls). For instance, if a critical VM in the CDE is missing an NSG or has a rule allowing “Any” source, Defender for Cloud will flag that as a recommendation to fix – effectively catching a potential PCI violation early. It also checks for things like missing vulnerability assessments on SQL, or unencrypted traffic on storage, etc. By following Defender’s recommendations, you inherently move toward PCI compliance. This addresses the preventive side of PCI – e.g., Requirement 1 (firewall configuration) and Requirement 2 (secure system configurations) are supported by these continuous assessments.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Threat Detection and Alerts:&lt;/STRONG&gt; Defender for Cloud includes &lt;STRONG&gt;Defender plans&lt;/STRONG&gt; for various resources that provide threat detection. For example, &lt;EM&gt;Defender for Servers&lt;/EM&gt; monitors VMs for suspicious processes, malware, and also does &lt;STRONG&gt;file integrity monitoring (FIM)&lt;/STRONG&gt;. FIM is actually a PCI Requirement (11.5) – you must monitor critical system files for changes. By enabling Defender for Servers on your Azure VMs, you fulfill this, as it uses the Defender for Endpoint agent to track file changes and generate alerts if, say, system binaries are modified unexpectedly. Additionally, Defender for Servers and other plans generate network-related security alerts: e.g., “Potential malicious outbound connection from VM” or “Port scanning activity detected from VM.” These come from analyzing the VM’s telemetry and network flows. While not as comprehensive as a dedicated NDR, these built-in alerts offer a baseline IDS capability. For example, if a VM in the CDE starts port scanning others, Defender for Cloud will flag it, which covers the requirement that you should detect internal reconnaissance. There are also alerts like “Suspicious SQL query activity” for Azure SQL or “Anomalous access pattern” for storage accounts. All Defender for Cloud alerts appear in its &lt;STRONG&gt;Security Alerts&lt;/STRONG&gt; blade, and can be forwarded to Sentinel. From a PCI perspective, having these alerts means you have multiple layers of monitoring (host-level and network-level), which is ideal. It demonstrates that even if an attack doesn’t trigger an NDR network alert, it might trigger a host alert that you’re also watching.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Adaptive Network Hardening:&lt;/STRONG&gt; This feature of Defender for Cloud looks at your NSG rules and actual traffic and recommends hardening (like “these IPs are the only ones seen accessing your VM, consider tightening NSG to only those”). By following these, you reduce your attack surface, which indirectly helps comply with PCI network access restrictions. It is not a mandated control, but it’s a good practice.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Vulnerability Management:&lt;/STRONG&gt; Though not directly NDR, Defender for Cloud’s integrated vulnerability scanning (through Qualys or Defender’s scanner) helps satisfy PCI Requirement 11.3 (regular vulnerability scans) and Requirement 6 (address vulnerabilities). This is complementary to NDR – one stops attacks, the other prevents them by patching. It’s worth noting in compliance documentation that you use these Azure-native capabilities to meet various requirements.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
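The adaptive network hardening idea above can be sketched in a few lines: observe which source IPs actually reach a VM, and if the set is small, recommend tightening an NSG rule that currently allows "Any". This is a hypothetical illustration of the concept, not Defender for Cloud's actual algorithm; the flow-record field name is an assumption.

```python
from collections import Counter

def recommend_nsg_sources(observed_flows, max_sources=10):
    """Hypothetical sketch of the adaptive-hardening idea (not Defender
    for Cloud's actual algorithm): if inbound traffic to a VM is seen
    from only a handful of source IPs, suggest tightening an NSG rule
    that currently allows "Any" down to just those IPs."""
    sources = Counter(flow["src_ip"] for flow in observed_flows)
    if len(sources) > max_sources:
        return None  # traffic too diverse: keep the broad rule, no recommendation
    return sorted(sources)  # recommend an allow-list of the observed sources

flows = [{"src_ip": "203.0.113.5"}, {"src_ip": "203.0.113.5"},
         {"src_ip": "198.51.100.7"}]
print(recommend_nsg_sources(flows))  # ['198.51.100.7', '203.0.113.5']
```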
&lt;P&gt;In summary, &lt;STRONG&gt;Defender for Cloud acts as a compliance and security safety net&lt;/STRONG&gt;. It ensures you have the right security controls configured (so that your NDR has a solid foundation to monitor), and it provides additional threat detection on Azure resources. For PCI DSS 4.0.1, which has many controls beyond just network monitoring, Defender for Cloud helps with:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement 5&lt;/STRONG&gt; (anti-malware) by monitoring for malware on VMs.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement 6/11&lt;/STRONG&gt; (vulnerability management and scanning) via its vulnerability assessments.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement 10&lt;/STRONG&gt; (log retention) by recommending that diagnostic logs be enabled and retained; it also keeps a log of its own alerts.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement 11.5&lt;/STRONG&gt; (change detection) via File Integrity Monitoring on servers.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement 8&lt;/STRONG&gt; (MFA, principle of least privilege) through Azure AD and Identity recommendations.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Requirement 12&lt;/STRONG&gt; (security policy and monitoring) by giving a centralized compliance view and integrating with incident workflows.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;However, like Sentinel, Defender for Cloud is not a specialized NDR. Its network alerts are generally simpler (e.g., known malicious IP or basic port scan detection) and &lt;STRONG&gt;should not be solely relied on to meet the IDS requirement&lt;/STRONG&gt;. They complement a true NDR or firewall IDS. A strong strategy is: &lt;EM&gt;use Defender for Cloud to get your Azure environment in a secure, compliant state (so all baseline controls are green), and use NDR to actively monitor and defend that environment from sophisticated attacks.&lt;/EM&gt; The synergy of the two covers a vast swath of PCI requirements in a largely automated fashion.&lt;/P&gt;
&lt;H1&gt;Bringing it all together&lt;/H1&gt;
&lt;P&gt;The Financial Services Industry operates under intense security scrutiny, and PCI DSS v4.0.1 raises the bar further by requiring proactive and continuous network monitoring. &lt;STRONG&gt;Network Detection and Response (NDR)&lt;/STRONG&gt; is a key technology that helps meet these challenges by providing advanced intrusion detection and full visibility into network traffic.&lt;/P&gt;
&lt;P&gt;In the context of PCI DSS:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;NDR ensures that all access to networks carrying cardholder data is monitored in real-time (fulfilling Requirement 10’s logging/monitoring mandate with automation).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;It serves as the IDS/IPS required to detect malicious network activity (addressing Requirement 11’s call for intrusion detection).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;It supports network segmentation by verifying that segmentation is not breached and alerting on any anomalous connection (thereby underpinning the isolation that PCI strongly recommends for scope reduction).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;It feeds into a robust incident response workflow, enabling the organization to react swiftly to suspected breaches (covering the intent of Requirement 12.10 on incident response).&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In Azure, achieving a PCI-compliant NDR setup is very feasible using a combination of &lt;STRONG&gt;Azure’s native capabilities and partner solutions&lt;/STRONG&gt;. Azure provides the plumbing (VTAP for full packet mirror, VNET Flow Logs for thorough logging, Traffic Analytics for basic analysis) and the integration points (connecting to Sentinel, Defender for Cloud, etc.). Third-party NDR platforms bring in the sophisticated analysis that can identify threats with high accuracy, something that Azure’s basic tools alone might not catch. By leveraging both, organizations in Financial Services can create a layered defense:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Defender for Cloud&lt;/STRONG&gt; to maintain strong security posture and baseline compliance (it makes sure configurations are correct and provides some threat detection and compliance mapping).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Azure vTAP + Flow Logs&lt;/STRONG&gt; to ensure no network activity goes unnoticed.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Advanced NDR&lt;/STRONG&gt; to actually inspect traffic deeply and spot intrusions or policy violations in real-time.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft Sentinel&lt;/STRONG&gt; to unify the monitoring, correlate across sources, and automate response, serving as the command center that ensures every alert is handled (thus meeting PCI’s expectation of prompt action and thorough monitoring records).&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This layered approach not only checks the compliance boxes but also materially improves security. It demonstrates the principle that &lt;STRONG&gt;“compliance is the floor, not the ceiling”&lt;/STRONG&gt; – the organization implements NDR not just to satisfy PCI requirements but to actively protect critical data.&lt;/P&gt;
&lt;P&gt;For auditors and stakeholders, the combination of evidence from NDR, Azure logs, and Defender for Cloud provides confidence that the network is secure.&lt;/P&gt;
&lt;P&gt;In conclusion, NDR is important in the financial sector for both &lt;STRONG&gt;security and compliance&lt;/STRONG&gt;. PCI DSS 4.0.1 explicitly or implicitly requires capabilities that an NDR delivers (continuous monitoring, intelligent alerting, quick incident containment). Implementing NDR in Azure using the described architecture enables organizations in Financial Services to meet those requirements effectively. They gain peace of mind that their cloud cardholder data environment is under vigilant watch, and they stand on solid ground when undergoing PCI assessments. As threats evolve and compliance standards tighten, this blend of Azure technology and NDR solutions exemplifies best practice: using the full power of cloud and AI-driven security to protect sensitive financial data and maintain customer trust.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
      <pubDate>Thu, 18 Dec 2025 08:31:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/network-detection-and-response-ndr-in-financial-services/ba-p/4472515</guid>
      <dc:creator>Marc de Droog</dc:creator>
      <dc:date>2025-12-18T08:31:59Z</dc:date>
    </item>
    <item>
      <title>Announcing Azure DNS security policy with Threat Intelligence feed general availability</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/announcing-azure-dns-security-policy-with-threat-intelligence/ba-p/4470183</link>
      <description>&lt;P&gt;A successful protective DNS strategy seamlessly hardens an entire environment without adding friction: it must protect every virtual network and region consistently, apply real-time and highly accurate threat intelligence, deliver clear visibility into what was blocked and why, integrate smoothly with existing DNS infrastructure, and maintain near-zero performance impact.&lt;/P&gt;
&lt;P&gt;Above all, it should reduce operational noise—lowering incident volume, SOC workload, and risk—while being easy to deploy, easy to trust, and impossible for attackers to slip past.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If customers lack visibility into their DNS traffic and reliable detection and mitigation functionality, they risk exposure to attacks that can lead to data theft, which in turn translates into financial and intellectual property loss.&lt;/P&gt;
&lt;P&gt;Therefore, it’s key to give Azure DNS customers the means to mitigate security threats such as data theft and workloads compromised by zero-day attacks, and to provide the right service to detect, visualize, and mitigate this often-overlooked attack vector.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With DNS security policy, customers can not only filter DNS traffic with allow/block functionality but also gain visibility into DNS traffic at the virtual network level in all regions, while integrating with familiar destinations such as Log Analytics, Event Hubs, or storage accounts to retain their logs.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We are excited to share that Azure DNS security policy with Threat Intelligence is now in general availability.&lt;/P&gt;
&lt;H2&gt;A quick overview of Azure DNS security policy&lt;/H2&gt;
&lt;P&gt;DNS security policy was launched recently and offers the ability to filter and log DNS queries at the virtual network (VNET) level. Policy applies to both public and private DNS traffic within a VNET. DNS logs can be sent to a storage account, log analytics workspace, or event hubs. You can choose to allow, alert, or block DNS queries.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With DNS security policy you can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create rules to protect against DNS-based attacks by blocking name resolution of known or malicious domains.&lt;/LI&gt;
&lt;LI&gt;Save and view detailed DNS logs to gain insight into your DNS traffic.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A DNS security policy has the following associated elements and properties:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-security-policy#location" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Location&lt;/STRONG&gt;&lt;/A&gt;: The Azure region where the security policy is created and deployed.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-security-policy#dns-traffic-rules" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;DNS traffic rules&lt;/STRONG&gt;&lt;/A&gt;: Rules that allow, block, or alert based on priority and domain lists.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-security-policy#virtual-network-links" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Virtual network links&lt;/STRONG&gt;&lt;/A&gt;: A link that associates the security policy to a virtual vetwork.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-security-policy#dns-domain-lists" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;DNS domain lists&lt;/STRONG&gt;&lt;/A&gt;: Location-based lists of DNS domains.&lt;/LI&gt;
&lt;/UL&gt;
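To make the rule model concrete, here is a minimal sketch of how priority-ordered DNS traffic rules could resolve an action for a query: the matching rule with the lowest priority number wins, and its action is Allow, Block, or Alert. This is illustrative only; the field names and the default-allow fallback are assumptions, not the service's API or schema.

```python
def evaluate_query(domain, rules):
    """Sketch of priority-ordered DNS traffic rule evaluation: the
    lowest-priority-number rule whose domain list matches the query
    decides the action. Field names are assumptions for illustration."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        # A domain-list entry matches itself and any subdomain of it.
        if any(domain == d or domain.endswith("." + d) for d in rule["domains"]):
            return rule["action"]
    return "Allow"  # no rule matched: assumed default behavior

rules = [
    {"priority": 100, "action": "Block", "domains": ["malicious.example"]},
    {"priority": 200, "action": "Alert", "domains": ["suspicious.example"]},
]
print(evaluate_query("c2.malicious.example", rules))  # Block
```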
&lt;H2&gt;What is being announced today?&amp;nbsp;&lt;/H2&gt;
&lt;P&gt;Azure DNS security policy with Threat Intelligence feed allows early detection and prevention of security incidents on customer Virtual Networks where known malicious domains sourced &lt;A href="https://www.microsoft.com/en-us/msrc" target="_blank" rel="noopener"&gt;by Microsoft’s Security Response Center (MSRC)&lt;/A&gt; can be blocked from name resolution.&lt;/P&gt;
&lt;P&gt;Azure DNS security policy with Threat Intelligence feed is being made available to all customers in all public regions.&lt;/P&gt;
&lt;P&gt;For more information about the capabilities available, please visit the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-security-policy" target="_blank" rel="noopener"&gt;Azure DNS security policy&amp;nbsp;&lt;/A&gt;technical documentation webpage.&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;What can customers start doing with Azure DNS Threat Intelligence feed today?&lt;/H3&gt;
&lt;P&gt;In addition to the features announced earlier for DNS security policy, the feed is available as a managed domain list, enabling you to protect your workloads against known malicious domains with Microsoft’s own managed Threat Intelligence feed.&lt;/P&gt;
&lt;P&gt;With Threat Intelligence you will benefit from the following:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;Smart protection&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Almost all attacks begin with a DNS query. Threat Intelligence managed domain list enables you to detect and prevent security incidents early.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;Continuous updates&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The feed is automatically updated by Microsoft so that you stay protected against newly detected malicious domains.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Monitoring and blocking known malicious domains&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You have the flexibility to simply observe activity in alert-only mode or to block suspected activity in blocking mode.&lt;/LI&gt;
&lt;LI&gt;Enabling logging gives you visibility into all DNS traffic in the virtual network.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The DNS security policy Threat Intelligence feed, now in GA, is also available via PowerShell, CLI, .NET, Java, Python, REST, TypeScript, Go, ARM, and Terraform.&lt;/P&gt;
&lt;H3&gt;Key use cases for this service:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Configure Threat Intelligence as a managed domain list in your DNS security policies for an additional layer of protection against known malicious domains.&lt;/LI&gt;
&lt;LI&gt;Get visibility into compromised hosts that are trying to resolve known malicious domains from your virtual networks.&lt;/LI&gt;
&lt;LI&gt;Log and set up alerts when malicious domains are resolved in any virtual network where the Threat Intelligence feed is configured.&lt;/LI&gt;
&lt;LI&gt;Seamlessly integrate with your virtual networks and other services such as Azure Private DNS Zones, Private Resolver, and other services in the VNET.&lt;/LI&gt;
&lt;/UL&gt;
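The "visibility into compromised hosts" use case above amounts to joining the DNS query logs against the threat-intelligence blocklist. A minimal sketch, assuming simplified log fields (real logs land in Log Analytics, Event Hubs, or a storage account in the service's own schema):

```python
def flag_compromised_hosts(dns_logs, blocked_domains):
    """Map each source IP that queried a blocklisted domain to the
    domains it tried to resolve. Log field names ("source_ip",
    "query_name") are assumptions for illustration only."""
    blocked = set(blocked_domains)
    hits = {}
    for entry in dns_logs:
        if entry["query_name"] in blocked:
            hits.setdefault(entry["source_ip"], []).append(entry["query_name"])
    return hits

logs = [
    {"source_ip": "10.0.0.4", "query_name": "malicious.example"},
    {"source_ip": "10.0.0.5", "query_name": "learn.microsoft.com"},
]
print(flag_compromised_hosts(logs, ["malicious.example"]))
# {'10.0.0.4': ['malicious.example']}
```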
&lt;H3&gt;Fully managed:&lt;/H3&gt;
&lt;P&gt;Built-in high availability, zone redundancy, and low latency name resolution.&lt;/P&gt;
&lt;H3&gt;Cost reduction:&lt;/H3&gt;
&lt;P&gt;Reduce operating costs and run at a fraction of the price of traditional IaaS solutions. There is no need to provision additional IaaS virtual appliances or VM-based solutions, nor to take on the operational complexity they add.&lt;/P&gt;
&lt;H3&gt;Protect and monitor your DNS traffic:&lt;/H3&gt;
&lt;P&gt;Capture DNS logs from your virtual networks into Log Analytics, Event Hubs, storage accounts, and apply Threat Intelligence as a managed domain list to your DNS filtering rules for additional protection of your workloads.&lt;/P&gt;
&lt;H3&gt;DevOps friendly&lt;/H3&gt;
&lt;P&gt;Build your pipelines with Terraform, ARM, or Bicep.&lt;/P&gt;
&lt;H2&gt;Get started and share your feedback&lt;/H2&gt;
&lt;P&gt;You can try Azure DNS security policy with Threat Intelligence feed today. For more information about the capabilities available, please visit the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/dns/dns-security-policy" target="_blank" rel="noopener"&gt;Azure DNS security policy&amp;nbsp;&lt;/A&gt;technical documentation webpage. Post your ideas and suggestions on the&amp;nbsp;&lt;A href="https://feedback.azure.com/d365community/forum/8ae9bf04-8326-ec11-b6e6-000d3a4f0789" target="_blank" rel="noopener"&gt;networking community page&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 20 Nov 2025 20:22:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/announcing-azure-dns-security-policy-with-threat-intelligence/ba-p/4470183</guid>
      <dc:creator>Sergio Figueiredo</dc:creator>
      <dc:date>2025-11-20T20:22:48Z</dc:date>
    </item>
    <item>
      <title>Announcing the public preview of StandardV2 NAT Gateway and StandardV2 public IPs</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/announcing-the-public-preview-of-standardv2-nat-gateway-and/ba-p/4458292</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In today’s rapidly changing digital landscape, organizations are innovating faster and delivering cloud native experiences at global scale. With this acceleration comes higher expectations: applications must remain always available. An outage in a single availability zone can have a ripple effect on application performance, user experience, and business continuity. To safeguard against zone outages, making your cloud architecture zone resilient isn't an option—it’s a necessity.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;A key part of any resilient design is ensuring reliable outbound connectivity. Azure NAT Gateway is a fully managed network address translation service that provides highly scalable and secure internet connectivity for resources inside your virtual networks.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We’re excited to announce the public preview of the &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;StandardV2 SKU NAT Gateway&lt;/STRONG&gt;,&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; an evolution of Azure NAT Gateway built for the next generation of scale, performance, and resiliency. StandardV2 NAT Gateway delivers &lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;zone redundancy&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;,&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;enhanced&amp;nbsp;data processing limits&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;, &lt;STRONG&gt;IPv6 support&lt;/STRONG&gt;, and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;flow logs&lt;/STRONG&gt; &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;all &lt;STRONG&gt;at&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;the same price as the Standard SKU NAT Gateway&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;. &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This release also marks the public preview of&amp;nbsp;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;StandardV2 SKU public IP&amp;nbsp;addresses&amp;nbsp;and prefixes&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-contrast="auto"&gt;. A new SKU of public IPs that must be used with StandardV2 NAT Gateway.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;In combination, StandardV2 IPs provide high-throughput connectivity to support demanding workloads.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img&gt;StandardV2 NAT Gateway is zone resilient and provides dual-stack connectivity.&lt;/img&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;What’s new in StandardV2 NAT Gateway&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:299,&amp;quot;335559739&amp;quot;:299}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Zone redundancy&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:281,&amp;quot;335559739&amp;quot;:281}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;StandardV2 NAT Gateway is&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;zone-redundant by default&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;in regions&amp;nbsp;with&amp;nbsp;availability zones.&amp;nbsp;Deployed as a single resource operating across multiple zones, StandardV2 NAT Gateway ensures outbound connectivity even if one zone becomes unavailable.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img&gt;StandardV2 NAT Gateway spans across multiple availability zones in a region.&lt;/img&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For example, a virtual machine in zone 2 connects outbound&amp;nbsp;from zones 1, 2,&amp;nbsp;or 3&amp;nbsp;through a&amp;nbsp;StandardV2&amp;nbsp;NAT Gateway&amp;nbsp;(as&amp;nbsp;shown&amp;nbsp;in the&amp;nbsp;figure). If zone 1 experiences an outage, existing connections in that zone may fail, but new connections will seamlessly flow through zones 2 and 3—keeping your applications online and resilient.&amp;nbsp;All existing connections through zones 2 and 3 will persist.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;To learn more, see &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-availability-zones#standardv2-sku-nat-gateway-zone-redundant" target="_blank" rel="noopener"&gt;StandardV2 NAT Gateway zone-redundancy&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;img&gt;In zone down scenarios, new connections flow through the remaining healthy zones with StandardV2 NAT Gateway.&lt;/img&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&lt;SPAN data-contrast="auto"&gt;This kind of resiliency is essential for any critical workload to ensure high availability. Whether you’re a global SaaS provider that needs to maintain service continuity for your customers during zonal outages or an e-commerce platform that needs to ensure high availability during peak shopping seasons, StandardV2 NAT Gateway can help you achieve greater protection against zonal outages.&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Higher performance&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&lt;SPAN data-contrast="auto"&gt;StandardV2 doubles the performance of the Standard SKU, supporting up to 100 Gbps throughput and 10 million packets per second. These enhanced data processing limits are ideal for data-intensive and latency-sensitive applications requiring consistent, high-throughput outbound access to the internet.&lt;/SPAN&gt; For more information, see&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-gateway-resource#performance" target="_blank" rel="noopener"&gt;StandardV2 NAT Gateway performance&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;StandardV2 public IPs&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Alongside NAT Gateway, StandardV2 public IP Addresses and Prefixes are now available in public preview. StandardV2 SKU public IPs are a new offering of public IPs that must be used with StandardV2 NAT Gateway to provide outbound connectivity. Standard SKU public IPs are not compatible with StandardV2 NAT Gateway. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;See &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/create-public-ip-portal#create-a-standardv2-sku-public-ip-address" target="_blank" rel="noopener"&gt;how to deploy StandardV2 public IPs&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;IPv6 (dual-stack) support &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:281,&amp;quot;335559739&amp;quot;:281}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&lt;SPAN data-contrast="auto"&gt;StandardV2 NAT Gateway now supports dual-stack (IPv4 + IPv6) connectivity, enabling organizations to meet regulatory requirements, optimize performance for modern architectures, and future-proof workloads at internet scale. Each NAT Gateway supports up to 16 IPv4 and 16 IPv6 StandardV2 public IP addresses or prefixes.&lt;/SPAN&gt;&amp;nbsp;See&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-sku#ipv6-support" target="_blank" rel="noopener"&gt; IPv6 support for StandardV2 NAT gateway&lt;/A&gt; for more information.&lt;/SPAN&gt;&lt;/P&gt;
&lt;img&gt;StandardV2 NAT Gateway support for dual stack connectivity now in public preview.&lt;/img&gt;
&lt;H3 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Flow logs&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:160,&amp;quot;335559739&amp;quot;:80}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With StandardV2, you can now enable flow logs to gain deeper visibility into outbound traffic patterns. Flow logs capture detailed IP-level traffic information, helping you:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Troubleshoot connectivity issues more efficiently.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:299,&amp;quot;335559739&amp;quot;:299}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Identify top talkers behind the NAT Gateway (which virtual machines initiate the most connections outbound)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:299,&amp;quot;335559739&amp;quot;:299}"&gt;.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Analyze traffic for compliance and security auditing for your organization&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:299,&amp;quot;335559739&amp;quot;:299}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Learn more at &lt;/SPAN&gt;&lt;A href="https://aka.ms/natflowlogs" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Enable flow logs on StandardV2 NAT Gateway&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:299,&amp;quot;335559739&amp;quot;:299}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Deploying StandardV2 NAT Gateway&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt; and public IPs&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:299,&amp;quot;335559739&amp;quot;:299}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;You can&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/quickstart-create-nat-gateway-v2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;deploy StandardV2 NAT Gateway&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/create-public-ip-portal?tabs=option-1-create-public-ip-standardv2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;StandardV2 public IPs&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; using &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;ARM templates, &lt;/SPAN&gt;Bicep, PowerShell, or CLI.&lt;/P&gt;
&lt;P&gt;Portal and Terraform support is coming soon. &lt;SPAN data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;For more information on client support, see &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-sku#known-limitations" target="_blank" rel="noopener"&gt;StandardV2 NAT Gateway SKU&lt;/A&gt;.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="3"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 3"&gt;Learn More&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI aria-level="3"&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/nat-gateway/nat-overview#standardv2-nat-gateway" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;StandardV2 NAT Gateway&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses#sku" target="_blank" rel="noopener"&gt;StandardV2 public IPs&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://learn.microsoft.com/en-us/azure/nat-gateway/quickstart-create-nat-gateway-v2" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;Create and validate StandardV2 NAT Gateway&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A style="font-style: normal; font-weight: 400; background-color: rgb(255, 255, 255);" href="https://azure.microsoft.com/en-us/pricing/details/azure-nat-gateway/" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;NAT Gateway pricing&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Tue, 18 Nov 2025 17:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/announcing-the-public-preview-of-standardv2-nat-gateway-and/ba-p/4458292</guid>
      <dc:creator>aimeelittleton</dc:creator>
      <dc:date>2025-11-18T17:30:00Z</dc:date>
    </item>
    <item>
      <title>Integrating Azure Application Gateway v2 with Azure API Management for secure and scalable API</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/integrating-azure-application-gateway-v2-with-azure-api/ba-p/4470804</link>
      <description>&lt;P&gt;Why Application Gateway v2 + Azure API Management?&lt;/P&gt;
&lt;P&gt;- Application Gateway v2 provides Layer-7 routing with path-based rules, host headers, URL rewrites, and WAF protection (OWASP Core Rule Set). &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Azure API Management provides API abstraction, versioning, throttling, caching, JWT validation, and per-API policies. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Combined, App Gateway becomes the internet-facing secure entry point and Azure API Management the control plane for API governance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Scenario 1:&lt;/P&gt;
&lt;P&gt;Internet → App Gateway (WAF) → Azure API Management (External) → Backends&lt;/P&gt;
&lt;P&gt;Best when Azure API Management needs to be publicly reachable but protected by WAF and central routing.&lt;/P&gt;
&lt;P&gt;[Client] ─HTTPS──&amp;gt; [App Gateway v2 (WAF)] ─HTTPS──&amp;gt; [Azure API Management (External)] ─&amp;gt; [Private/On-prem/Azure Backends]&lt;/P&gt;
&lt;P&gt;Pros: Simple, fast to implement, WAF in front, supports CDN/Front Door chaining. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cons: Azure API Management is public; additional steps required for IP allow-lists and mTLS.&lt;/P&gt;
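&lt;P&gt;One common mitigation for the public exposure in this scenario is an inbound ip-filter policy on Azure API Management that only admits traffic arriving from the App Gateway frontend. A minimal sketch (the address below is a placeholder for your App Gateway public IP):&lt;/P&gt;
&lt;PRE&gt;&amp;lt;inbound&amp;gt;
  &amp;lt;base /&amp;gt;
  &amp;lt;!-- Allow only the App Gateway public IP (placeholder value) --&amp;gt;
  &amp;lt;ip-filter action="allow"&amp;gt;
    &amp;lt;address&amp;gt;203.0.113.10&amp;lt;/address&amp;gt;
  &amp;lt;/ip-filter&amp;gt;
&amp;lt;/inbound&amp;gt;&lt;/PRE&gt;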
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Scenario 2:&lt;/P&gt;
&lt;P&gt;Internet → App Gateway (WAF) → Azure API Management (Internal) via Private Endpoint&lt;/P&gt;
&lt;P&gt;Azure API Management is internal; only App Gateway is public. Zero-trust friendly.&lt;/P&gt;
&lt;P&gt;[Client] ─HTTPS──&amp;gt; [App Gateway v2 (WAF, Public)] ─HTTPS──&amp;gt; [Private Endpoint] ─&amp;gt; [Azure API Management (Internal)] ─&amp;gt; [Backends]&lt;/P&gt;
&lt;P&gt;Pros: Azure API Management is not exposed to the internet; traffic flows through App Gateway + Private Link. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cons: Requires vNet planning, DNS, and App Gateway-to-Private Link name resolution.&lt;/P&gt;
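&lt;P&gt;The DNS piece of this scenario can be sketched in Terraform: a private DNS zone for Azure API Management linked to the VNet so App Gateway can resolve the private endpoint. Resource names here are illustrative and follow the Terraform example later in this post:&lt;/P&gt;
&lt;PRE&gt;resource "azurerm_private_dns_zone" "apim" {
  name                = "privatelink.azure-api.net"
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "apim" {
  name                  = "link-apim"
  resource_group_name   = azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.apim.name
  virtual_network_id    = azurerm_virtual_network.vnet.id
}&lt;/PRE&gt;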
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Scenario 3:&lt;/P&gt;
&lt;P&gt;Azure API Management (External) → App Gateway (Internal) → Private Backends&lt;/P&gt;
&lt;P&gt;Azure API Management is the public front door; App Gateway does L7 routing to internal services.&lt;/P&gt;
&lt;P&gt;[Client] ─HTTPS──&amp;gt; [Azure API Management (External)] ─HTTPS──&amp;gt; [App Gateway (Internal/WAF)] ─&amp;gt; [Backends]&lt;/P&gt;
&lt;P&gt;Pros: Azure API Management security &amp;amp; governance sits at your internet front door. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cons: More Azure API Management policy work; App Gateway must be reachable from Azure API Management.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Network &amp;amp; DNS design checklist:&lt;/P&gt;
&lt;P&gt;- Virtual networks &amp;amp; subnets:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; - App Gateway subnet (a dedicated subnet is required) &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; - Azure API Management subnet (for the internal tier) &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; - Shared-services subnet for Bastion/jumpbox/logging &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Private Link: Enable the Azure API Management private endpoint in the API Management subnet.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Private DNS zones: privatelink.azure-api.net for Azure API Management; custom zones for backends. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Name resolution: App Gateway must resolve the Azure API Management private FQDN via vNet DNS or Azure Private DNS. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Firewall &amp;amp; NSGs: Restrict inbound/outbound; allow only required ports to Azure API Management, Key Vault, Log Analytics. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Hybrid Connectivity: Site-to-site VPN or ExpressRoute for on-prem backends; consider Azure Firewall or NVA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Certificates &amp;amp; TLS&lt;/P&gt;
&lt;P&gt;- Custom Domains:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; - App Gateway: api.contoso.com&lt;/P&gt;
&lt;P&gt;&amp;nbsp; - Azure API Management: gateway.contoso.com (with custom hostname on Azure API Management) &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- TLS Ports: HTTPS 443 end-to-end; disable TLS 1.0/1.1. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Cert storage: Use Azure Key Vault for SSL certs; integrate App Gateway &amp;amp; Azure API Management with Key Vault (managed identity). &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- mTLS (Client Certs): Enforce on Azure API Management with policies; optionally on App Gateway via mutual auth for selected listeners.&lt;/P&gt;
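&lt;P&gt;Enforcing a client certificate in an Azure API Management policy can look like the following sketch, which checks only presence and thumbprint (the thumbprint value is a placeholder):&lt;/P&gt;
&lt;PRE&gt;&amp;lt;inbound&amp;gt;
  &amp;lt;base /&amp;gt;
  &amp;lt;!-- Reject requests without the expected client certificate (placeholder thumbprint) --&amp;gt;
  &amp;lt;choose&amp;gt;
    &amp;lt;when condition="@(context.Request.Certificate == null || context.Request.Certificate.Thumbprint != &amp;quot;EXPECTED-THUMBPRINT&amp;quot;)"&amp;gt;
      &amp;lt;return-response&amp;gt;
        &amp;lt;set-status code="403" reason="Invalid client certificate" /&amp;gt;
      &amp;lt;/return-response&amp;gt;
    &amp;lt;/when&amp;gt;
  &amp;lt;/choose&amp;gt;
&amp;lt;/inbound&amp;gt;&lt;/PRE&gt;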
&lt;P&gt;WAF (Web Application Firewall) on App Gateway v2&lt;/P&gt;
&lt;P&gt;- Modes: Detection vs prevention (recommend prevention once tuned). &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- CRS: Start with 3.2; baseline exclusions for APIs (JSON payloads). &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Managed rules: Enable bot protection, set anomaly scoring, create exclusions for headers like Authorization.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Logging: Send WAF logs to Log Analytics; build alerts for blocked requests spikes.&lt;/P&gt;
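&lt;P&gt;A starting-point KQL query for spotting spikes in blocked requests; field names follow the AzureDiagnostics schema for Application Gateway firewall logs:&lt;/P&gt;
&lt;PRE&gt;AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| where action_s == "Blocked"
| summarize blocked = count() by ruleId_s, bin(TimeGenerated, 5m)
| order by blocked desc&lt;/PRE&gt;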
&lt;P&gt;Azure API Management Policies – Common patterns&lt;/P&gt;
&lt;P&gt;- Inbound: `validate-jwt`, `check-header`, `rate-limit`, `ip-filter`, `set-backend-service` &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Backend: `retry`, `forward-request` with mTLS &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Outbound: `set-header`, `find-and-replace`, `cache` &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Global vs API vs Operation: Keep global minimal; override at API/operation for precision. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Dev, Test, Prod: Parameterize via named values and Key Vault references.&lt;/P&gt;
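&lt;P&gt;Named values are referenced in policies with double-brace syntax, so the same policy definition can move across Dev/Test/Prod unchanged. For example, keeping the backend URL as a named value (backend-url is an illustrative name):&lt;/P&gt;
&lt;PRE&gt;&amp;lt;inbound&amp;gt;
  &amp;lt;base /&amp;gt;
  &amp;lt;!-- {{backend-url}} resolves to the environment-specific named value --&amp;gt;
  &amp;lt;set-backend-service base-url="{{backend-url}}" /&amp;gt;
&amp;lt;/inbound&amp;gt;&lt;/PRE&gt;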
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Example – JWT validation and rate limit:&lt;/P&gt;
&lt;PRE&gt;&amp;lt;policies&amp;gt;
  &amp;lt;inbound&amp;gt;
    &amp;lt;base /&amp;gt;
    &amp;lt;validate-jwt header-name="Authorization" failed-validation-httpcode="401"&amp;gt;
      &amp;lt;openid-config url="https://login.microsoftonline.com/&amp;lt;tenant-id&amp;gt;/v2.0/.well-known/openid-configuration" /&amp;gt;
      &amp;lt;audiences&amp;gt;
        &amp;lt;audience&amp;gt;api://contoso-app-id&amp;lt;/audience&amp;gt;
      &amp;lt;/audiences&amp;gt;
      &amp;lt;issuers&amp;gt;
        &amp;lt;issuer&amp;gt;https://sts.windows.net/&amp;lt;tenant-id&amp;gt;/&amp;lt;/issuer&amp;gt;
      &amp;lt;/issuers&amp;gt;
    &amp;lt;/validate-jwt&amp;gt;
    &amp;lt;rate-limit calls="100" renewal-period="60" /&amp;gt;
  &amp;lt;/inbound&amp;gt;
  &amp;lt;backend&amp;gt;
    &amp;lt;forward-request /&amp;gt;
  &amp;lt;/backend&amp;gt;
  &amp;lt;outbound&amp;gt;
    &amp;lt;base /&amp;gt;
  &amp;lt;/outbound&amp;gt;
&amp;lt;/policies&amp;gt;&lt;/PRE&gt;
&lt;P&gt;## Terraform – Core Resources (App Gateway v2 + Azure API Management)&lt;/P&gt;
&lt;P&gt;Note: Simplified example; parameterize for prod, add Key Vault integrations, diagnostics, and role assignments.&lt;/P&gt;
&lt;PRE&gt;# Variables (example)
variable "location"     { default = "eastus" }
variable "rg_name"      { default = "rg-appgw-apim" }
variable "vnet_name"    { default = "vnet-core" }
variable "appgw_subnet" { default = "snet-appgw" }
variable "apim_subnet"  { default = "snet-apim" }

provider "azurerm" { features {} }

resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}

resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "snet_appgw" {
  name                 = var.appgw_subnet
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.1.0/24"]
}

resource "azurerm_subnet" "snet_apim" {
  name                 = var.apim_subnet
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.10.2.0/24"]
}

# Public IP for App Gateway
resource "azurerm_public_ip" "appgw_pip" {
  name                = "pip-appgw"
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location
  allocation_method   = "Static"
  sku                 = "Standard"
}

# Application Gateway v2 (WAF)
resource "azurerm_application_gateway" "appgw" {
  name                = "agw-v2-waf"
  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location

  sku {
    name     = "WAF_v2"
    tier     = "WAF_v2"
    capacity = 2
  }

  gateway_ip_configuration {
    name      = "appgw-ipcfg"
    subnet_id = azurerm_subnet.snet_appgw.id
  }

  frontend_port {
    name = "https-port"
    port = 443
  }

  frontend_ip_configuration {
    name                 = "appgw-feip"
    public_ip_address_id = azurerm_public_ip.appgw_pip.id
  }

  ssl_certificate {
    name     = "ssl-agw"
    data     = filebase64("certs/agw.pfx")
    password = var.agw_pfx_password
  }

  http_listener {
    name                           = "listener-https"
    frontend_ip_configuration_name = "appgw-feip"
    frontend_port_name             = "https-port"
    protocol                       = "Https"
    ssl_certificate_name           = "ssl-agw"
    host_name                      = "api.contoso.com"
  }

  backend_address_pool {
    name = "apim-bepool"
    # For an Azure API Management private endpoint, use the FQDN with a custom probe, or the IP when static
    fqdns = ["gateway.contoso.internal"]
  }

  backend_http_settings {
    name                                = "https-settings"
    cookie_based_affinity               = "Disabled"
    port                                = 443
    protocol                            = "Https"
    pick_host_name_from_backend_address = true
    request_timeout                     = 30
    probe_name                          = "apim-probe"
  }

  probe {
    name                                      = "apim-probe"
    protocol                                  = "Https"
    path                                      = "/status-0123456789abcdef"
    pick_host_name_from_backend_http_settings = true
    interval                                  = 30
    timeout                                   = 30
    unhealthy_threshold                       = 3
  }

  request_routing_rule {
    name                       = "route-to-apim"
    rule_type                  = "Basic"
    priority                   = 100 # required for v2 SKUs
    http_listener_name         = "listener-https"
    backend_address_pool_name  = "apim-bepool"
    backend_http_settings_name = "https-settings"
  }

  waf_configuration {
    enabled          = true
    firewall_mode    = "Prevention"
    rule_set_type    = "OWASP"
    rule_set_version = "3.2"
  }
}

# Azure API Management (Developer SKU for demo – use Premium for prod &amp;amp; VNET integration)
resource "azurerm_api_management" "apim" {
  name                 = "apim-contoso"
  location             = var.location
  resource_group_name  = azurerm_resource_group.rg.name
  publisher_name       = "Contoso"
  publisher_email      = "admin@contoso.com"
  sku_name             = "Developer_1"
  virtual_network_type = "None" # Use "Internal" for vNet, then add a private endpoint
}&lt;/PRE&gt;
&lt;P&gt;## Azure DevOps – CI/CD YAML (App Gateway + Azure API Management via Terraform)&lt;/P&gt;
&lt;PRE&gt;trigger:
  branches:
    include: [ main ]

pool:
  vmImage: 'ubuntu-latest'

variables:
  TF_VERSION: '1.8.5'
  ARM_USE_MSI: true

stages:
- stage: Validate
  jobs:
  - job: tf_validate
    steps:
    - task: Bash@3
      displayName: 'Install Terraform'
      inputs:
        targetType: 'inline'
        script: |
          sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y unzip
          curl -L -o tf.zip https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip
          unzip tf.zip &amp;amp;&amp;amp; sudo mv terraform /usr/local/bin/
          terraform -version
    - task: AzureCLI@2
      displayName: 'Terraform init &amp;amp; validate'
      inputs:
        azureSubscription: '$(AZURE_SERVICE_CONNECTION)'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          terraform init -backend-config=backend.hcl
          terraform fmt -check
          terraform validate

- stage: Plan
  jobs:
  - job: tf_plan
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: '$(AZURE_SERVICE_CONNECTION)'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          terraform init -backend-config=backend.hcl # each stage runs on a fresh agent
          terraform plan -out=tfplan
    - publish: tfplan
      artifact: tfplan

- stage: Apply
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - job: tf_apply
    steps:
    - download: current
      artifact: tfplan
    - task: AzureCLI@2
      inputs:
        azureSubscription: '$(AZURE_SERVICE_CONNECTION)'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          terraform init -backend-config=backend.hcl
          terraform apply -auto-approve tfplan&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;## Observability &amp;amp; diagnostics&lt;/P&gt;
&lt;P&gt;- Access Logs: App Gateway &amp;amp; WAF logs to Log Analytics; query with KQL. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Azure API Management Metrics: Requests, backend duration, cache hits; enable diagnostic settings to Log Analytics/Storage/Event Hub. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- End-to-end tracing: Correlate `x-correlation-id` across App Gateway, Azure API Management, and backend logs. &amp;nbsp;&lt;/P&gt;
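&lt;P&gt;One way to guarantee the correlation header exists is to stamp it in Azure API Management whenever the client omits it. A sketch using the built-in context.RequestId:&lt;/P&gt;
&lt;PRE&gt;&amp;lt;inbound&amp;gt;
  &amp;lt;base /&amp;gt;
  &amp;lt;!-- Add x-correlation-id only if the caller did not send one --&amp;gt;
  &amp;lt;set-header name="x-correlation-id" exists-action="skip"&amp;gt;
    &amp;lt;value&amp;gt;@(context.RequestId.ToString())&amp;lt;/value&amp;gt;
  &amp;lt;/set-header&amp;gt;
&amp;lt;/inbound&amp;gt;&lt;/PRE&gt;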
&lt;P&gt;- Alerts: 4xx/5xx thresholds, WAF blocks spike, Azure API Management throttling events, TLS certificate expiry.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;## Security hardening&lt;/P&gt;
&lt;P&gt;- Enforce TLS 1.2+, disable weak ciphers. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- WAF exclusions tuned minimally; regular rule reviews. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Azure API Management IP allow-lists for admin endpoints; use RBAC and separate admin vs gateway hostnames. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Private Endpoints for Azure API Management &amp;amp; backends; deny public network access where possible. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- mTLS from App Gateway→Azure API Management or Client→Azure API Management when required (Key Vault for client certs). &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- DDoS Protection on vNet with public exposure; consider Azure Front Door WAF for global edge.&lt;/P&gt;
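&lt;P&gt;On the App Gateway side, the TLS floor can be pinned with a predefined SSL policy. A Terraform sketch; the policy name is one of Azure's predefined profiles, so confirm the currently recommended one for your environment:&lt;/P&gt;
&lt;PRE&gt;# Inside the azurerm_application_gateway resource:
ssl_policy {
  policy_type = "Predefined"
  policy_name = "AppGwSslPolicy20220101" # enforces TLS 1.2 as the minimum
}&lt;/PRE&gt;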
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;## Cost &amp;amp; Performance&lt;/P&gt;
&lt;P&gt;- Right-size App Gateway v2 capacity; enable autoscaling for variable traffic. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Use Azure API Management Premium only if you need vNet, multi-region, or zone redundancy; otherwise consider Standard/Developer for non-prod. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Caching policies in Azure API Management reduce backend load; use response compression. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Health probes optimized for backend responsiveness (avoid tight intervals).&lt;/P&gt;
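&lt;P&gt;Response caching in Azure API Management is a two-part policy: cache-lookup in the inbound section and cache-store in the outbound section. A minimal sketch with a 5-minute TTL:&lt;/P&gt;
&lt;PRE&gt;&amp;lt;inbound&amp;gt;
  &amp;lt;base /&amp;gt;
  &amp;lt;cache-lookup vary-by-developer="false" vary-by-developer-groups="false" /&amp;gt;
&amp;lt;/inbound&amp;gt;
&amp;lt;outbound&amp;gt;
  &amp;lt;base /&amp;gt;
  &amp;lt;cache-store duration="300" /&amp;gt;
&amp;lt;/outbound&amp;gt;&lt;/PRE&gt;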
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;## Troubleshooting&amp;nbsp;&lt;/P&gt;
&lt;P&gt;- App Gateway 502/504: Check backend health, probe path, SNI/host header, TLS ciphers, DNS resolution to Azure API Management. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Azure API Management 401/403:&amp;nbsp; Validate JWT audience/issuer; clock skew; named values; policy order. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Private Endpoint:&amp;nbsp; DNS record in `privatelink.azure-api.net` exists; App Gateway subnet can resolve; NSG not blocking. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Cert Issues: PFX password correct; full chain present; key usage supports server auth. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Performance: Turn on App Gateway autoscaling; review Azure API Management throttling; check backend rate limits.&lt;/P&gt;
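&lt;P&gt;A quick KQL check for 5xx responses at the App Gateway access log can narrow down 502/504 investigations; field names follow the AzureDiagnostics schema:&lt;/P&gt;
&lt;PRE&gt;AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| where httpStatus_d &amp;gt;= 500
| summarize count() by httpStatus_d, requestUri_s, bin(TimeGenerated, 15m)&lt;/PRE&gt;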
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;## Production checklist&lt;/P&gt;
&lt;P&gt;- Custom domains &amp;amp; cert rotation via Key Vault &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- WAF in Prevention with tuned exclusions &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Azure API Management policies for auth, rate limiting, cache, headers &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Private endpoints + DNS validated end-to-end &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Autoscaling &amp;amp; health probes tuned &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Diagnostics &amp;amp; alerts configured &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- CI/CD gated approvals; Terraform state secured &amp;nbsp;&lt;/P&gt;
&lt;P&gt;- Runbooks for failover &amp;amp; certificate renewal &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 18 Nov 2025 17:33:43 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/integrating-azure-application-gateway-v2-with-azure-api/ba-p/4470804</guid>
      <dc:creator>ranjsharma</dc:creator>
      <dc:date>2025-11-18T17:33:43Z</dc:date>
    </item>
    <item>
      <title>Azure Virtual Network Manager + Azure Virtual WAN</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-virtual-network-manager-azure-virtual-wan/ba-p/4469991</link>
      <description>&lt;P&gt;Azure continues to expand its networking capabilities, with Azure Virtual Network Manager and Azure Virtual WAN (vWAN) standing out as two of the most transformative services. When deployed together, they offer the best of both worlds: the operational simplicity of a managed hub architecture combined with the ability for spoke VNets to communicate directly, avoiding additional hub hops and minimizing latency&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Revisiting the classic hub-and-spoke pattern&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Element&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Traditional hub-and-spoke role&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Hub VNet&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;Centralized network that hosts shared services including firewalls (e.g., Azure Firewall, NVAs), VPN/ExpressRoute gateways, DNS servers, domain controllers, and central route tables for traffic management. Acts as the connectivity and security anchor for all spoke networks.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Spoke VNets&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;Host individual application workloads and peer directly to the hub VNet. Traffic flows through the hub for north-south connectivity (to/from on-premises or internet) and cross-spoke communication (east-west traffic between spokes).&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Benefits&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="border-width: 1px;"&gt;
&lt;P&gt;• Single enforcement point for security policies and network controls&lt;BR /&gt;• No duplication of shared services across environments&lt;BR /&gt;• Simplified routing logic and traffic flow management&lt;BR /&gt;• Clear network segmentation and isolation between workloads&lt;BR /&gt;• Cost optimization through centralized resources&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;However, this architecture comes with a trade-off: every spoke-to-spoke packet must route through the hub, introducing additional network hops, increased latency, and potential throughput constraints.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How Virtual WAN modernizes that design&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Virtual WAN replaces a do-it-yourself hub VNet with a fully managed hub service:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Managed hubs – Azure owns and operates the hub infrastructure.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Automatic route propagation – routes learned once are usable everywhere.&lt;/LI&gt;
&lt;LI&gt;Integrated add-ons – Firewalls, VPN, and ExpressRoute ports are first-class citizens.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By default, Virtual WAN enables any-to-any routing between spokes. Traffic transits the hub fabric automatically—no configuration required.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why direct spoke mesh?&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Certain patterns require single-hop connectivity:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Micro-service meshes that sit in different spokes and exchange chatty RPC calls.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Database replication / backups where throughput counts, and hub bandwidth is precious.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Dev / Test / Prod spokes that need to sync artifacts quickly yet stay isolated from hub services.&lt;/LI&gt;
&lt;LI&gt;Segmentation mandates where a workload must bypass hub inspection for compliance yet still talk to a partner VNet.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM&gt;Benefits&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Lower latency – the hub detour disappears.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Better bandwidth – no hub congestion or firewall throughput cap.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Higher resilience – spoke pairs can keep talking even if the hub is under maintenance.&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;The peering explosion problem&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With pure VNet peering, the math escalates fast:&amp;nbsp;&lt;BR /&gt;For n spokes you need n × (n-1)/2 links. Ten spokes? 45 peerings. Add four more (14 spokes)? Now 91.&lt;/P&gt;
&lt;P&gt;Each extra peering forces you to:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Touch multiple route tables.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Update NSG rules to cover the new paths.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Repeat every time you add or retire a spoke.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Troubleshoot an ever-growing spider web.&amp;nbsp; &amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
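&lt;P&gt;The growth is easy to verify with a quick shell calculation of the full-mesh link count n × (n-1)/2 for a few spoke counts:&lt;/P&gt;

```shell
# Full-mesh peering count: n * (n - 1) / 2 links for n spokes
for n in 2 5 10 14; do
  echo "$n spokes -> $(( n * (n - 1) / 2 )) peerings"
done
# 10 spokes -> 45 peerings; 14 spokes -> 91 peerings
```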
&lt;P&gt;&lt;STRONG&gt;Where Azure Virtual Network Manager steps in&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Virtual Network Manager introduces Network Groups plus a Mesh connectivity policy:&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 870px; height: 251px; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr style="height: 39px;"&gt;&lt;td style="height: 39px; border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Virtual Network Manager Concept&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 39px; border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;What it gives you&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr style="height: 67px;"&gt;&lt;td style="height: 67px; border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Network group&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 67px; border-width: 1px;"&gt;
&lt;P&gt;A logical container that groups multiple VNets together, allowing you to apply configurations and policies to all members simultaneously&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 67px;"&gt;&lt;td style="height: 67px; border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Mesh connectivity&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 67px; border-width: 1px;"&gt;
&lt;P&gt;Automated peering between all VNets in the group, ensuring every member can communicate directly with every other member without manual configuration&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 39px;"&gt;&lt;td style="height: 39px; border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Declarative config&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 39px; border-width: 1px;"&gt;
&lt;P&gt;Intent-based approach where you define the desired network state, and Azure Virtual Network Manager handles the implementation and ongoing maintenance&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 39px;"&gt;&lt;td style="height: 39px; border-width: 1px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Dynamic updates&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 39px; border-width: 1px;"&gt;
&lt;P&gt;Automatic topology management—when VNets are added to or removed from a group, Azure Virtual Network Manager reconfigures all necessary connections without manual intervention&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Operational complexity collapses from O(n²) to O(1)—you manage a group, not 100+ individual peerings.&amp;nbsp;&lt;/P&gt;
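&lt;P&gt;As an illustration, the group-plus-mesh workflow can be sketched with the Azure CLI. This is a hedged sketch, not a verbatim recipe: every resource name, subscription ID, and location below is a placeholder, and the &lt;EM&gt;az network manager&lt;/EM&gt; argument syntax should be checked against the current CLI reference before use.&lt;/P&gt;

```shell
# Sketch: Azure Virtual Network Manager mesh via Azure CLI.
# All names, IDs, and locations are placeholders; verify current
# 'az network manager' syntax before running.
SUB_ID="00000000-0000-0000-0000-000000000000"   # placeholder subscription
RG="rg-network"

# 1. Create the network manager scoped to the subscription
az network manager create \
  --name avnm-demo --resource-group $RG --location swedencentral \
  --scope-accesses Connectivity \
  --network-manager-scopes subscriptions="/subscriptions/$SUB_ID"

# 2. Create a network group and add each spoke VNET as a static member
#    (dynamic membership via Azure Policy is also possible)
az network manager group create \
  --name spokes --network-manager-name avnm-demo --resource-group $RG

az network manager group static-member create \
  --name spoke1 --network-group-name spokes \
  --network-manager-name avnm-demo --resource-group $RG \
  --resource-id "/subscriptions/$SUB_ID/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/spoke1-vnet"

# 3. Declare a mesh over the group, then commit it to the target region
az network manager connect-config create \
  --name spokes-mesh --network-manager-name avnm-demo --resource-group $RG \
  --connectivity-topology Mesh \
  --applies-to-groups network-group-id="/subscriptions/$SUB_ID/resourceGroups/$RG/providers/Microsoft.Network/networkManagers/avnm-demo/networkGroups/spokes" group-connectivity=DirectlyConnected

az network manager post-commit \
  --network-manager-name avnm-demo --resource-group $RG \
  --commit-type Connectivity --target-locations swedencentral \
  --configuration-ids "/subscriptions/$SUB_ID/resourceGroups/$RG/providers/Microsoft.Network/networkManagers/avnm-demo/connectivityConfigurations/spokes-mesh"
```

&lt;P&gt;Once the configuration is committed, adding a VNET to the group is all that is needed to mesh it with every other member; removing it tears the peerings down again.&lt;/P&gt;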
&lt;P&gt;&lt;STRONG&gt;A complementary model: Azure Virtual Network Manager mesh inside vWAN&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Since Azure Virtual Network Manager works on any Azure VNet—including the VNets you already attach to a vWAN hub—you can apply mesh policies on top of your existing managed hub architecture:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Spoke VNets join a vWAN hub for branch connectivity, centralized firewalling, or multi-region reach.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;The same spokes are added to an Azure Virtual Network Manager Network Group with a mesh policy.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Azure Virtual Network Manager builds direct peering links between the spokes, while vWAN continues to advertise and learn routes.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Result:&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All VNets still benefit from vWAN’s global routing and on-premises integration.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Latency-critical east-west flows now travel the shortest path—one hop—as if the VNets were traditionally peered.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Rather than choosing one over the other, organizations can leverage both vWAN and Azure Virtual Network Manager together, as the combination enhances the strengths of each service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Performance illustration&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Spoke-to-Spoke Communication with Virtual WAN without Azure Virtual Network Manager mesh:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Spoke-to-Spoke Communication with Virtual WAN with Azure Virtual Network Manager mesh:&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Observability &amp;amp; protection&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;NSG flow logs – granular packet logs on every peered VNet.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Azure Virtual Network Manager admin rules – org-wide guardrails that take precedence over local NSGs.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Azure Monitor + SIEM – route flow logs to Log Analytics, Sentinel, or third-party SIEM for threat detection.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;Layered design – hub firewalls inspect north-south traffic; NSGs plus admin rules secure east-west flows.&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Putting it all together&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Virtual WAN offers fully managed global connectivity, simplifying the integration of branch offices and on-premises infrastructure into your Azure environment.&lt;/LI&gt;
&lt;LI&gt;Azure Virtual Network Manager mesh establishes direct communication paths between spoke VNets, making it ideal for workloads requiring high throughput or minimal latency in east-west traffic patterns.&lt;/LI&gt;
&lt;LI&gt;When combined, these services provide architects with granular control over traffic routing. Each flow can be directed through hub services when needed or routed directly between spokes for optimal performance—all without re-architecting your network or creating additional management complexity.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;By pairing Azure Virtual Network Manager’s group-based mesh with vWAN’s managed hubs, you get the best of both worlds: worldwide reach, centralized security, and single-hop performance where it counts.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 17 Nov 2025 16:54:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-virtual-network-manager-azure-virtual-wan/ba-p/4469991</guid>
      <dc:creator>SimonaTarantola</dc:creator>
      <dc:date>2025-11-17T16:54:52Z</dc:date>
    </item>
    <item>
      <title>Delivering web applications over IPv6</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/delivering-web-applications-over-ipv6/ba-p/4469638</link>
      <description>&lt;P&gt;The IPv4 address space pool has been exhausted for some time now, meaning there is no new public address space available for allocation from Internet Registries. The internet continues to run on IPv4 through technical measures such as Network Address Translation (NAT) and&amp;nbsp;&lt;A class="lia-external-url" href="https://en.wikipedia.org/wiki/Carrier-grade_NAT" target="_blank" rel="noopener"&gt;Carrier Grade NAT&lt;/A&gt;, and reallocation of address space through &lt;A class="lia-external-url" href="https://iptrading.com/" target="_blank" rel="noopener"&gt;IPv4 address space trading&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://en.wikipedia.org/wiki/IPv6" target="_blank" rel="noopener"&gt;IPv6&lt;/A&gt; will ultimately be the dominant network protocol on the internet, as IPv4 life-support mechanisms used by network operators, hosting providers and ISPs will eventually reach the limits of their scalability. Mobile networks are already changing to IPv6-only APNs; reachability of IPv4-only destinations from these mobile network is through 6-4 NAT gateways, which sometimes causes problems.&lt;/P&gt;
&lt;P&gt;Client uptake of IPv6 is progressing steadily. &lt;A class="lia-external-url" href="https://www.google.com/intl/en/ipv6/statistics.html#tab=ipv6-adoption" target="_blank" rel="noopener"&gt;Google&lt;/A&gt; reports 49% of clients connecting to its services over IPv6 globally, with France leading at 80%.&lt;/P&gt;
&lt;P&gt;IPv6 client access measured by Google:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Meanwhile, countries around the world are requiring IPv6 reachability for public web services. Examples include the&amp;nbsp;&lt;A class="lia-external-url" href="https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-07.pdf" target="_blank" rel="noopener"&gt;United States&lt;/A&gt;, European countries such as &lt;A class="lia-external-url" href="https://www.forumstandaardisatie.nl/ipv6" target="_blank" rel="noopener"&gt;the Netherlands&lt;/A&gt; and &lt;A class="lia-external-url" href="https://lovdata.no/dokument/SF/forskrift/2013-04-05-959#shareModal" target="_blank" rel="noopener"&gt;Norway&lt;/A&gt;, &lt;A class="lia-external-url" href="https://dot.gov.in/ipv6-transition-across-stakeholders" target="_blank" rel="noopener"&gt;India&lt;/A&gt;, and Japan.&lt;/P&gt;
&lt;P&gt;IPv6 adoption per country measured by Google:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Entities needing to comply with these mandates are looking at Azure's networking capabilities for solutions. Azure supports IPv6 for both private and public networking, and capabilities have developed and expanded over time.&lt;/P&gt;
&lt;P&gt;This article discusses strategies to build and deploy IPv6-enabled public, internet-facing applications that are reachable from IPv6(-only) clients.&lt;/P&gt;
&lt;H2&gt;Azure Networking IPv6 capabilities&lt;/H2&gt;
&lt;P&gt;Azure's private networking capabilities center on Virtual Networks (VNETs) and the components that are deployed within. Azure VNETs are &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/ipv6-overview" target="_blank" rel="noopener"&gt;IPv4/IPv6 dual stack&lt;/A&gt; capable: a VNET &lt;STRONG&gt;must &lt;/STRONG&gt;always have IPv4 address space allocated, and &lt;STRONG&gt;can &lt;/STRONG&gt;also have IPv6 address space. Virtual machines in a dual stack VNET will have both an IPv4 and an IPv6 address from the VNET range, and can be behind IPv6-capable External and Internal Load Balancers. VNETs can be connected through VNET peering, which effectively turns the peered VNETs into a single routing domain. It is now possible to peer only the IPv6 address spaces of VNETs, so that the IPv4 space assigned to VNETs can overlap and communication across the peering is over IPv6. The same is true for connectivity to on-premises over ExpressRoute: the Private Peering can be enabled for IPv6 only, so that VNETs in Azure do not have to have unique IPv4 address space assigned, which may be in short supply in an enterprise.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Not all internal networking components are IPv6 capable yet. The most notable exceptions are VPN Gateway, Azure Firewall and Virtual WAN; IPv6 compatibility is on the roadmap for these services, but target availability dates have not been communicated.&lt;/P&gt;
&lt;P&gt;But now let's focus on Azure's externally facing, public, network services. Azure is ready to let customers publish their web applications over IPv6.&lt;/P&gt;
&lt;P&gt;IPv6 capable externally facing network services include:&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/frontdoor/front-door-overview#global-delivery-scale-using-microsofts-network" target="_blank" rel="noopener"&gt;Azure Front Door&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/application-gateway/ipv6-application-gateway-portal" target="_blank" rel="noopener"&gt;Application Gateway&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/load-balancer/deploy-ipv4-ipv6-dual-stack-standard-load-balancer" target="_blank" rel="noopener"&gt;External Load Balancer&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses#ip-address-version" target="_blank" rel="noopener"&gt;Public IP addresses&lt;/A&gt; and &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-address-prefix" target="_blank" rel="noopener"&gt;Public IP address prefixes&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/dns/dns-faq#do-azure-dns-name-servers-resolve-over-ipv6--" target="_blank" rel="noopener"&gt;Azure DNS&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/ddos-protection/ddos-protection-sku-comparison#tiers" target="_blank" rel="noopener"&gt;Azure DDOS Protection&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-faqs#does-traffic-manager-support-ipv6-endpoints" target="_blank" rel="noopener"&gt;Traffic Manager&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;- &lt;A class="lia-external-url" href="https://azure.github.io/AppService/2024/11/08/Announcing-Inbound-IPv6-support.html" target="_blank" rel="noopener"&gt;App Service&lt;/A&gt; (IPv6 support is in public preview)&lt;/P&gt;
&lt;H2&gt;IPv6 Application Delivery&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;IPv6 Application Delivery&lt;/STRONG&gt; refers to the architectures and services that enable your web application to be accessible via IPv6. The goal is to provide an IPv6 address and connectivity for clients, while often continuing to run your application on IPv4 internally.&lt;/P&gt;
&lt;P&gt;Key benefits of adopting IPv6 in Azure include:&lt;/P&gt;
&lt;P&gt;✅&amp;nbsp;&lt;STRONG&gt;Expanded Client Reach: &lt;/STRONG&gt;IPv4-only websites risk being unreachable to IPv6-only networks. By enabling IPv6, you expand your reach into growing mobile and IoT markets that use IPv6 by default. Governments and enterprises increasingly mandate IPv6 support for public-facing services.&lt;/P&gt;
&lt;P&gt;✅&lt;STRONG&gt;Address Abundance &amp;amp; No NAT:&lt;/STRONG&gt;&amp;nbsp;IPv6 provides a virtually unlimited address pool, mitigating IPv4 exhaustion concerns. This abundance means each service can have its own public IPv6 address, often removing the need for complex NAT schemes. End-to-end addressing can simplify connectivity and troubleshooting.&lt;/P&gt;
&lt;P&gt;✅ &lt;STRONG&gt;Dual-Stack Compatibility:&lt;/STRONG&gt;&amp;nbsp;Azure supports dual-stack deployments where services listen on both IPv4 and IPv6. This allows a single application instance or endpoint to serve both types of clients seamlessly. Dual-stack ensures you don’t lose any existing IPv4 users while adding IPv6 capability.&lt;/P&gt;
&lt;P&gt;✅&lt;STRONG&gt;Performance and Future Services:&lt;/STRONG&gt; Some networks and clients might experience better performance over IPv6. Also, being IPv6-ready prepares your architecture for future Azure features and services as IPv6 integration deepens across the platform.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;General steps to enable IPv6 connectivity&lt;/STRONG&gt;&amp;nbsp;for a web application in Azure are:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Plan and Enable IPv6 Addressing in Azure&lt;/STRONG&gt;: Define an IPv6 address space in your Azure Virtual Network. Azure allows adding IPv6 address space to existing VNETs, making them dual-stack. A /56 address block is recommended for the VNET, and each subnet's IPv6 prefix &lt;EM&gt;must&lt;/EM&gt; be a /64 (Azure requires /64 IPv6 subnets). If you have existing infrastructure, you might need to create new subnets or migrate resources, especially since older &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#does-application-gateway-support-ipv6" target="_blank" rel="noopener"&gt;Application Gateway v1 instances cannot simply be “upgraded” to dual-stack&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Deploy or Update Frontend Services with IPv6&lt;/STRONG&gt;: Choose a suitable Azure service (Application Gateway, External / Global Load Balancer, etc.) and configure it with a public IPv6 address on the frontend. This usually means selecting a &lt;EM&gt;Dual Stack&lt;/EM&gt; configuration so the service gets both an IPv4 and an IPv6 public IP. For instance, when creating an Application Gateway v2, you would specify &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/application-gateway/ipv6-application-gateway-portal" target="_blank" rel="noopener"&gt;IP address type: DualStack (IPv4 &amp;amp; IPv6)&lt;/A&gt;. Azure Front Door &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/frontdoor/front-door-overview" target="_blank" rel="noopener"&gt;by default&lt;/A&gt;&amp;nbsp;provides dual-stack capabilities with its global endpoints.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Configure Backends and Routing&lt;/STRONG&gt;: Usually your backend servers or services will remain on IPv4. At the time of writing this in October 2025, Azure Application Gateway does not support IPv6 for backend pool addresses. This is fine because the frontend terminates the IPv6 network connection from the client and initiates a separate IPv4 connection to the backend pool or origin. Ensure that your load balancing rules, listener configurations, and health probes are all set up to route traffic to these backends. Both IPv4 and IPv6 frontend listeners can share the same backend pool. Azure Front Door does support IPv6 origins.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Update DNS Records&lt;/STRONG&gt;: Publish a DNS&amp;nbsp;&lt;STRONG&gt;AAAA record&lt;/STRONG&gt; for your application’s host name, pointing to the new IPv6 address. This step is critical so that IPv6-only clients can discover the IPv6 address of your service. If your service also has an IPv4 address, you will have both&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/dns/dns-zones-records#record-types" target="_blank" rel="noopener"&gt;A (IPv4) and AAAA (IPv6) records&lt;/A&gt; for the same host name. DNS will thus allow clients of either IP family to connect. (In multi-region scenarios using Traffic Manager or Front Door, DNS configuration might be handled through those services as discussed later).&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&amp;nbsp;&lt;STRONG&gt;Test IPv6 Connectivity&lt;/STRONG&gt;: Once set up, test from an IPv6-enabled network or use online tools to ensure the site is reachable via IPv6. Azure’s services like Application Gateway and Front Door will handle the dual-stack routing, but it’s good to verify that content loads on an IPv6-only connection and that SSL certificates, etc., work over IPv6 as they do for IPv4.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
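&lt;P&gt;The addressing and DNS steps above can be sketched with the Azure CLI. Resource names, the documentation prefix 2001:db8::/56, and the zone &lt;EM&gt;contoso.com&lt;/EM&gt; are placeholders; adapt them to your environment.&lt;/P&gt;

```shell
# Sketch: make an existing VNET dual-stack and publish an AAAA record.
# All names and address ranges below are placeholders.

# Add an IPv6 space alongside the existing IPv4 space (step 1)
az network vnet update \
  --name app-vnet --resource-group rg-app \
  --address-prefixes 10.1.0.0/16 2001:db8:1234::/56

# IPv6 subnet prefixes must be /64
az network vnet subnet update \
  --name web-subnet --vnet-name app-vnet --resource-group rg-app \
  --address-prefixes 10.1.0.0/24 2001:db8:1234:1::/64

# Publish the AAAA record so IPv6-only clients can resolve the service (step 4)
az network dns record-set aaaa add-record \
  --zone-name contoso.com --resource-group rg-dns \
  --record-set-name www --ipv6-address 2001:db8:4321::1
```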
&lt;P&gt;Next, we explore specific Azure services and architectures for IPv6 web delivery in detail.&lt;/P&gt;
&lt;H2&gt;External Load Balancer - single region&lt;/H2&gt;
&lt;P&gt;Azure &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview" target="_blank" rel="noopener"&gt;External Load Balancer&lt;/A&gt; (also known as Public Load Balancer) can be deployed in a single region to provide IPv6 access to applications running on virtual machines or VM scale sets. &lt;STRONG&gt;External Load Balancer acts as a Layer 4 entry point&lt;/STRONG&gt; for IPv6 traffic, distributing connections across backend instances. This scenario is ideal when you have stateless applications or services that do not require Layer 7 features like SSL termination or path-based routing.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key IPv6 Features of External Load Balancer&lt;/STRONG&gt;:&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;STRONG&gt;Dual-Stack Frontend:&lt;/STRONG&gt; Standard Load Balancer supports both &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview" target="_blank" rel="noopener"&gt;IPv4 and IPv6 frontends&lt;/A&gt; simultaneously. When configured as dual-stack, the load balancer gets two public IP addresses – one IPv4 and one IPv6 – and can distribute traffic from both IP families to the same backend pool.&lt;/P&gt;
&lt;P&gt;- &lt;STRONG&gt;Zone-Redundant by Default:&lt;/STRONG&gt; Standard Load Balancer is zone-redundant by default, providing high availability across Azure Availability Zones within a region without additional configuration.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;STRONG&gt;IPv6 Frontend Availability:&lt;/STRONG&gt; IPv6 support in Standard Load Balancer is available in all Azure regions. Basic Load Balancer does not support IPv6, so you must use Standard SKU.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;STRONG&gt;IPv6 Backend Pool Support:&lt;/STRONG&gt; While the frontend accepts IPv6 traffic, the load balancer will not translate IPv6 to IPv4. Backend pool members (VMs) must have private IPv6 addresses. You will need to add private IPv6 addressing to your existing VM IPv4-only infrastructure. This is in contrast to Application Gateway, discussed below, which will terminate inbound IPv6 network sessions and connect to the backend-end over IPv4.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;STRONG&gt;Protocol Support:&lt;/STRONG&gt; Supports TCP and UDP load balancing over IPv6, making it suitable for web applications and APIs, but also for non-web TCP- or UDP-based services accessed by IPv6-only clients.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;To set up an IPv6-capable External Load Balancer in one region, follow this high-level process:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Enable IPv6 on the Virtual Network: &lt;/STRONG&gt;Ensure the VNET where your backend VMs reside has an IPv6 address space. Add a dual-stack address space to the VNET (e.g., add an IPv6 space like 2001:db8:1234::/56 to complement your existing IPv4 space). Configure subnets that are dual-stack, containing both IPv4 and IPv6 prefixes (/64 for IPv6).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Create Standard Load Balancer with IPv6 Frontend: &lt;/STRONG&gt;In the Azure Portal, create a new Standard Load Balancer. During creation, configure the frontend IP with both IPv4 and IPv6 public IP addresses. Create or select existing Standard SKU public IP resources – one for IPv4 and one for IPv6.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Configure Backend Pool:&lt;/STRONG&gt; Add your virtual machines or VM scale set instances to the backend pool. Note that your backend instances will need to have private IPv6 addresses, in addition to IPv4 addresses, to receive inbound IPv6 traffic via the load balancer.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Set Up Load Balancing Rules:&lt;/STRONG&gt; Create load balancing rules that map frontend ports to backend ports. For web applications, typically map port 80 (HTTP) and 443 (HTTPS) from both the IPv4 and IPv6 frontends to the corresponding backend ports. Configure health probes to ensure only healthy instances receive traffic.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Configure Network Security Groups: &lt;/STRONG&gt;Ensure an NSG is present on the backend VMs' subnet, allowing inbound traffic from the internet to the port(s) of the web application. Inbound traffic is "secure by default", meaning that inbound connectivity from the internet is blocked unless an NSG explicitly allows it.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;DNS Configuration:&lt;/STRONG&gt; Create DNS records for your application: an A record pointing to the IPv4 address and an AAAA record pointing to the IPv6 address of the load balancer frontend.&lt;/LI&gt;
&lt;/OL&gt;
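&lt;P&gt;The load balancer portion of these steps can be sketched with the Azure CLI. This is a hedged outline under placeholder names: it assumes the dual-stack VNET from step 1 already exists and that backend NICs carry private IPv6 addresses; verify current &lt;EM&gt;az network lb&lt;/EM&gt; parameter names before use.&lt;/P&gt;

```shell
# Sketch: Standard Load Balancer with IPv4 and IPv6 frontends.
# All names are placeholders; backend VMs must be dual-stack (step 3).

# One Standard public IP per address family (step 2)
az network public-ip create --name pip-v4 --resource-group rg-app \
  --sku Standard --version IPv4
az network public-ip create --name pip-v6 --resource-group rg-app \
  --sku Standard --version IPv6

# Create the load balancer with the IPv4 frontend, then add the IPv6 side
az network lb create --name elb-dual --resource-group rg-app --sku Standard \
  --frontend-ip-name fe-v4 --public-ip-address pip-v4 \
  --backend-pool-name bepool-v4
az network lb frontend-ip create --lb-name elb-dual --resource-group rg-app \
  --name fe-v6 --public-ip-address pip-v6
az network lb address-pool create --lb-name elb-dual --resource-group rg-app \
  --name bepool-v6

# One rule per IP family, sharing a single health probe (step 4)
az network lb probe create --lb-name elb-dual --resource-group rg-app \
  --name http-probe --protocol Tcp --port 80
az network lb rule create --lb-name elb-dual --resource-group rg-app \
  --name http-v4 --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name fe-v4 --backend-pool-name bepool-v4 --probe-name http-probe
az network lb rule create --lb-name elb-dual --resource-group rg-app \
  --name http-v6 --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name fe-v6 --backend-pool-name bepool-v6 --probe-name http-probe
```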
&lt;P&gt;&lt;STRONG&gt;Outcome: &lt;/STRONG&gt;In this single-region scenario, IPv6-only clients will resolve your application's hostname to an IPv6 address and connect to the External Load Balancer over IPv6.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example: &lt;/STRONG&gt;Consider a web application running on a VM (or a VM scale set) behind an External Load Balancer in Sweden Central. The VM runs the &lt;A class="lia-external-url" href="https://github.com/mddazure/azure-region-viewer" target="_blank" rel="noopener"&gt;Azure Region and Client IP Viewer&lt;/A&gt; containerized application exposed on port 80, which displays the region the VM is deployed in and the calling client's IP address. The load balancer's front-end IPv6 address has a DNS name of&amp;nbsp;&lt;U&gt;ipv6webapp-elb-swedencentral.swedencentral.cloudapp.azure.com&lt;/U&gt;. When called from a client with an IPv6 address, the application shows its region and the client's address.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Limitations &amp;amp; Considerations:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Standard SKU Required:&lt;/EM&gt; Basic Load Balancer does not support IPv6. You must use Standard Load Balancer.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Layer 4 Only&lt;/EM&gt;: Unlike Application Gateway, External Load Balancer operates at Layer 4 (transport layer). It cannot perform SSL termination, cookie-based session affinity, or path-based routing. If you need these features, consider Application Gateway instead.&lt;/P&gt;
&lt;P&gt;- &lt;EM&gt;Dual stack IPv4/IPv6 Backend required:&lt;/EM&gt;&amp;nbsp;Backend pool members must have private IPv6 addresses to receive inbound IPv6 traffic via the load balancer. The load balancer does not translate between the IPv6 frontend and an IPv4 backend.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Outbound Connectivity:&lt;/EM&gt;&amp;nbsp;If your backend VMs need outbound internet access over IPv6, you need to configure an IPv6 outbound rule.&lt;/P&gt;
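&lt;P&gt;For the last point, an IPv6 outbound rule can be sketched as follows. The load balancer, frontend, and pool names are placeholders standing in for a dual-stack Standard Load Balancer with an IPv6 frontend; check current CLI syntax before use.&lt;/P&gt;

```shell
# Sketch: outbound rule giving dual-stack backend VMs IPv6 internet egress
# through the load balancer's IPv6 frontend. All names are placeholders.
az network lb outbound-rule create \
  --lb-name elb-dual --resource-group rg-app --name outbound-v6 \
  --frontend-ip-configs fe-v6 --address-pool bepool-v6 \
  --protocol All --outbound-ports 8000
```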
&lt;H2&gt;Global Load Balancer - multi-region&lt;/H2&gt;
&lt;P&gt;Azure &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/load-balancer/cross-region-overview" target="_blank" rel="noopener"&gt;Global Load Balancer&lt;/A&gt; (aka Cross-Region Load Balancer) provides a cloud-native global network load balancing solution for distributing traffic across multiple Azure regions. Unlike DNS-based solutions, Global Load Balancer uses anycast IP addressing to automatically route clients to the nearest healthy regional deployment through Microsoft's global network.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key Features of Global Load Balancer:&lt;/STRONG&gt;&lt;BR /&gt;- &lt;STRONG&gt;Static Anycast Global IP:&lt;/STRONG&gt; Global Load Balancer provides a single static public IP address (both IPv4 and IPv6 supported) that is advertised from all Microsoft WAN edge nodes globally. This anycast address ensures clients always connect to the nearest available Microsoft edge node without requiring DNS resolution.&lt;BR /&gt;- &lt;STRONG&gt;Geo-Proximity Routing:&lt;/STRONG&gt; The geo-proximity load-balancing algorithm minimizes latency by directing traffic to the nearest region&lt;SPAN style="color: rgb(30, 30, 30);"&gt; where the backend is deployed&lt;/SPAN&gt;. Unlike DNS-based routing, there's no DNS lookup delay - clients connect directly to the anycast IP and are immediately routed to the best region.&lt;BR /&gt;- &lt;STRONG&gt;Layer 4 Pass-Through: &lt;/STRONG&gt;Global Load Balancer operates as a Layer 4 pass-through network load balancer, preserving the original client IP address (including IPv6 addresses) for backend applications to use in their logic.&lt;BR /&gt;- &lt;STRONG&gt;Regional Redundancy:&lt;/STRONG&gt; If one region fails, traffic is automatically routed to the next closest healthy regional load balancer within seconds, providing instant global failover without DNS propagation delays.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Architecture Overview:&lt;/STRONG&gt; Global Load Balancer sits in front of multiple regional Standard Load Balancers, each deployed in different Azure regions. Each regional load balancer serves a local deployment of your application with IPv6 frontends. The global load balancer provides a single anycast IP address that clients worldwide can use to access your application, with automatic routing to the nearest healthy region.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Multi-Region Deployment Steps:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Deploy Regional Load Balancers:&lt;/STRONG&gt; Create Standard External Load Balancers in multiple Azure regions (e.g. Sweden Central, East US2). Configure each with dual-stack frontends (IPv4 and IPv6 public IPs) and connect them to regional VM deployments or VM scale sets running your application.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Configure Global Frontend IP address:&lt;/STRONG&gt; Create a Global tier public IPv6 address for the frontend, in one of the supported &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/load-balancer/cross-region-overview#home-regions-in-azure" target="_blank" rel="noopener"&gt;Global Load Balancer home regions &lt;/A&gt;. This becomes your application's global anycast address.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;Create Global Load Balancer:&lt;/STRONG&gt; Deploy the Global Load Balancer in the same home region. The home region is where the global load balancer resource is deployed - it doesn't affect traffic routing.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Add Regional Backends: &lt;/STRONG&gt;Configure the backend pool of the Global Load Balancer to include your regional Standard Load Balancers. Each regional load balancer becomes an endpoint in the global backend pool. The global load balancer automatically monitors the health of each regional endpoint.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Set Up Load Balancing Rules: &lt;/STRONG&gt;Create load balancing rules mapping frontend ports to backend ports. For web applications, typically map port 80 (HTTP) and 443 (HTTPS). The backend port on the global load balancer must match the frontend port of the regional load balancers.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Configure Health Probes:&lt;/STRONG&gt; Global Load Balancer automatically monitors the health of regional load balancers every 5 seconds. If a regional load balancer's availability drops to 0, it is automatically removed from rotation, and traffic is redirected to other healthy regions.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;DNS Configuration: &lt;/STRONG&gt;Create DNS records pointing to the global load balancer's anycast IP addresses. Create both A (IPv4) and AAAA (IPv6) records for your application's hostname pointing to the global load balancer's static IPs.&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
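&lt;P&gt;The steps above can be sketched with Azure CLI. This is a minimal outline rather than a complete deployment: the resource group, names, and the regional frontend IP configuration ID are placeholders, the regional Standard Load Balancers from step 1 are assumed to exist already, and flag spellings should be verified against `az network cross-region-lb --help`.&lt;/P&gt;

```shell
# Step 2: Global-tier public IPv6 address in a supported home region
az network public-ip create \
  --resource-group rg-glb --name pip-glb-v6 \
  --location eastus2 --sku Standard --tier Global --version IPv6

# Step 3: Global (cross-region) Load Balancer in the same home region
az network cross-region-lb create \
  --resource-group rg-glb --name ipv6webapp-glb --location eastus2 \
  --frontend-ip-name feGlobal --public-ip-address pip-glb-v6 \
  --backend-pool-name bepRegions

# Step 4: add each regional Standard Load Balancer's frontend IP
# configuration (by resource ID) to the global backend pool
az network cross-region-lb address-pool address add \
  --resource-group rg-glb --lb-name ipv6webapp-glb \
  --pool-name bepRegions --name swedencentral \
  --frontend-ip-address "$SWEDEN_LB_FRONTEND_ID"

# Step 5: load balancing rule; the backend port must match the
# frontend port of the regional load balancers
az network cross-region-lb rule create \
  --resource-group rg-glb --lb-name ipv6webapp-glb --name httpRule \
  --frontend-ip-name feGlobal --frontend-port 80 \
  --backend-pool-name bepRegions --backend-port 80 --protocol Tcp
```

&lt;P&gt;Repeat the address add for each region; health monitoring of the regional endpoints (step 6) is automatic and needs no explicit probe configuration on the global load balancer.&lt;/P&gt;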
&lt;P&gt;&lt;STRONG&gt;Outcome:&lt;/STRONG&gt; IPv6 clients connecting to your application's hostname will resolve to the global load balancer's anycast IPv6 address. When they connect to this address, the Microsoft global network infrastructure automatically routes their connection to the nearest participating Azure region. The regional load balancer then distributes the traffic across local backend instances. If that region becomes unavailable, subsequent connections are automatically routed to the next nearest healthy region.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example: &lt;/STRONG&gt;Our web application, which displays the region it runs in and the calling client's IP address, now runs on VMs behind External Load Balancers in Sweden Central and East US 2. The External Load Balancers' front-ends are in the backend pool of a Global Load Balancer, which has a Global tier front-end IPv6 address. The front-end has an FQDN of `ipv6webapp-glb.eastus2.cloudapp.azure.com` (the region designation `eastus2` in the FQDN refers to the Global Load Balancer's "home region", into which the Global tier public IP must be deployed).&lt;/P&gt;
&lt;P&gt;When called from a client in Europe, Global Load Balancer directs the request to the instance deployed in Sweden Central.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;When called from a client in the US, Global Load Balancer directs the request to the instance deployed in East US 2.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Features:&lt;/STRONG&gt;&lt;BR /&gt;-&lt;STRONG&gt; Client IP Preservation: &lt;/STRONG&gt;The original IPv6 client address is preserved and available to backend applications, enabling IP-based logic and compliance requirements.&lt;BR /&gt;- &lt;STRONG&gt;Floating IP Support: &lt;/STRONG&gt;Configure floating IP at the global level for advanced networking scenarios requiring direct server return or high availability clustering.&lt;BR /&gt;- &lt;STRONG&gt;Instant Scaling: &lt;/STRONG&gt;Add or remove regional deployments behind the global endpoint without service interruption, enabling dynamic scaling for traffic events.&lt;BR /&gt;-&lt;STRONG&gt; Multiple Protocol Support: &lt;/STRONG&gt;Supports both TCP and UDP traffic distribution across regions, suitable for various application types beyond web services.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Limitations &amp;amp; Considerations:&lt;/STRONG&gt;&lt;BR /&gt;-&amp;nbsp;&lt;EM&gt;Home Region Requirement:&lt;/EM&gt; Global Load Balancer can only be deployed in specific &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/load-balancer/cross-region-overview#home-regions-in-azure" target="_blank" rel="noopener"&gt;home regions&lt;/A&gt;, though this doesn't affect traffic routing performance.&lt;BR /&gt;-&amp;nbsp;&lt;EM&gt;Public Frontend Only:&lt;/EM&gt;&amp;nbsp;Global Load Balancer currently supports only public frontends; internal/private global load balancing is not available.&lt;BR /&gt;-&amp;nbsp;&lt;EM&gt;Standard Load Balancer Backends:&lt;/EM&gt;&amp;nbsp;The backend pool can only contain Standard Load Balancers, not Basic Load Balancers or other resource types.&lt;BR /&gt;-&amp;nbsp;&lt;EM&gt;Same IP Version Requirement:&lt;/EM&gt;&amp;nbsp;NAT64 translation isn't supported; frontend and backend must use the same IP version (IPv4 or IPv6).&lt;BR /&gt;-&amp;nbsp;&lt;EM&gt;Port Consistency:&lt;/EM&gt;&amp;nbsp;The backend port on the global load balancer must match the frontend port of the regional load balancers for proper traffic flow.&lt;BR /&gt;- &lt;EM&gt;Health Probe Dependencies:&lt;/EM&gt;&amp;nbsp;Regional load balancers must have proper health probes configured for the global load balancer to accurately assess regional health.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Comparison with DNS-Based Solutions:&lt;/STRONG&gt;&lt;BR /&gt;Unlike Traffic Manager or other DNS-based global load balancing solutions, Global Load Balancer provides:&lt;BR /&gt;- &lt;STRONG&gt;Instant Failover:&lt;/STRONG&gt; No DNS TTL delays - failover happens within seconds at the network level.&lt;BR /&gt;- &lt;STRONG&gt;True Anycast:&lt;/STRONG&gt; Single IP address that works globally without client-side DNS resolution.&lt;BR /&gt;- &lt;STRONG&gt;Consistent Performance: &lt;/STRONG&gt;Geo-proximity routing through Microsoft's backbone network ensures optimal paths.&lt;BR /&gt;- &lt;STRONG&gt;Simplified Management:&lt;/STRONG&gt; No DNS record management or TTL considerations.&lt;/P&gt;
&lt;P&gt;This architecture delivers &lt;STRONG&gt;global high availability and optimal performance&lt;/STRONG&gt; for IPv6 applications through anycast routing, making it a good solution for latency-sensitive applications requiring worldwide accessibility with near-instant regional failover.&lt;/P&gt;
&lt;H2&gt;Application Gateway - single region&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/application-gateway/overview" target="_blank" rel="noopener"&gt;Azure Application Gateway&lt;/A&gt;&amp;nbsp;can be deployed in a single region to provide IPv6 access to applications in that region. Application Gateway acts as the entry point for IPv6 traffic, terminating HTTP/S from IPv6 clients and forwarding to backend servers over IPv4. This scenario works well when your web application is served from one Azure region and you want to enable IPv6 connectivity for it.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Key IPv6 Features of Application Gateway (v2 SKU):&amp;nbsp;&lt;/STRONG&gt;&lt;BR /&gt;- &lt;STRONG&gt;Dual-Stack Frontend:&lt;/STRONG&gt; Application Gateway v2 supports both &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq" target="_blank" rel="noopener"&gt;IPv4 and IPv6 frontends&lt;/A&gt;. When configured as dual-stack, the gateway gets two IP addresses – one IPv4 and one IPv6 – and can listen on both. (IPv6-only is not supported; IPv4 is always paired.) IPv6 support requires Application Gateway v2; v1 does not support IPv6.&lt;BR /&gt;- &lt;STRONG&gt;No IPv6 on Backends: &lt;/STRONG&gt;The backend pool must use IPv4 addresses. IPv6 addresses for backend servers are currently not supported. This means your web servers can remain on IPv4 internal addresses, simplifying adoption because you only enable IPv6 on the frontend.&lt;BR /&gt;- &lt;STRONG&gt;WAF Support: &lt;/STRONG&gt;The Application Gateway Web Application Firewall (WAF) will inspect IPv6 client traffic just as it does IPv4.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Single Region Deployment Steps: &lt;/STRONG&gt;To set up an IPv6-capable Application Gateway in one region, consider the following high-level process:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Enable IPv6 on the Virtual Network:&lt;/STRONG&gt; Ensure the region’s VNET where the Application Gateway will reside has an IPv6 address space. Configure a subnet for the Application Gateway that is dual-stack (contains both an IPv4 subnet prefix and an IPv6 /64 prefix).&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Deploy Application Gateway (v2) with Dual Stack Frontend:&lt;/STRONG&gt; Create a new Application Gateway using the&lt;STRONG&gt; Standard_v2 or WAF_v2 SKU&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Populate Backend Pool: &lt;/STRONG&gt;Ensure your backend pool (the target application servers or service) contains the IPv4 addresses of your actual web servers, or DNS names resolving to them. IPv6 addresses are not supported for backends.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Configure Listeners and Rules: &lt;/STRONG&gt;Set up listeners on the Application Gateway for your site. When creating an HTTP(S) listener, you choose which frontend IP to use – you would create one listener for the IPv4 address and one for the IPv6 address. Both listeners can use the same domain name (hostname) and the same underlying routing rule to your backend pool.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Testing and DNS:&lt;/STRONG&gt; After the gateway is deployed and configured, note the IPv6 address of the frontend (you can find it in the Gateway’s overview or in the associated Public IP resource). Update your application’s DNS records: create an &lt;STRONG&gt;AAAA record &lt;/STRONG&gt;pointing to this IPv6 address (and update the A record to point to the IPv4 if it changed). With DNS in place, test the application by accessing it from an IPv6-enabled client or tool.&lt;/LI&gt;
&lt;/OL&gt;
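&lt;P&gt;The high-level process above can be sketched with Azure CLI. This is an outline under stated assumptions: the resource group, VNET, address ranges, and backend server IPs are placeholders, the VNET is assumed to already have an IPv6 address space (step 1), and listener wiring for the IPv6 frontend (step 4) would follow via `az network application-gateway http-listener create`.&lt;/P&gt;

```shell
# Step 1: dual-stack subnet for the gateway (example address ranges)
az network vnet subnet create \
  --resource-group rg-appgw --vnet-name vnet-web --name snet-appgw \
  --address-prefixes 10.1.1.0/24 2001:db8:1:1::/64

# Step 2: public IPs for both address families
az network public-ip create --resource-group rg-appgw \
  --name pip-appgw-v4 --sku Standard --version IPv4
az network public-ip create --resource-group rg-appgw \
  --name pip-appgw-v6 --sku Standard --version IPv6

# Steps 2-3: v2 gateway created with the IPv4 frontend and an
# IPv4 backend pool (--servers takes the backend addresses)
az network application-gateway create \
  --resource-group rg-appgw --name ipv6webapp-appgw \
  --sku Standard_v2 --priority 100 \
  --vnet-name vnet-web --subnet snet-appgw \
  --public-ip-address pip-appgw-v4 --servers 10.1.2.4 10.1.2.5

# The IPv6 frontend is then added alongside the IPv4 one
az network application-gateway frontend-ip create \
  --resource-group rg-appgw --gateway-name ipv6webapp-appgw \
  --name feIPv6 --public-ip-address pip-appgw-v6
```

&lt;P&gt;After this, create one listener per frontend IP (step 4) and finish with the DNS records from step 5.&lt;/P&gt;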
&lt;P&gt;&lt;STRONG&gt;Outcome: &lt;/STRONG&gt;In this single-region scenario, IPv6-only clients will resolve your website’s hostname to an IPv6 address and connect to the Application Gateway over IPv6. The Application Gateway then handles the traffic and forwards it to your application over IPv4 internally. From the user perspective, the service now appears natively on IPv6. Importantly, this does not require any changes to the web servers, which can continue using IPv4.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Application Gateway will include the source IPv6 address in an X-Forwarded-For header, so that the backend application has visibility of the originating client's address.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt;&amp;nbsp;Our web application, which displays the region it is deployed in and the calling client's IP address, now runs on a VM behind Application Gateway in Sweden Central. The front-end has an FQDN of `ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com`.&lt;/P&gt;
&lt;P&gt;Application Gateway terminates the IPv6 connection from the client and proxies the traffic to the application over IPv4. The client's IPv6 address is passed in the X-Forwarded-For header, which is read and displayed by the application.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Calling the application's API endpoint at `/api/region` shows additional detail, including the IPv4 address of the Application Gateway instance that initiates the connection to the backend, and the original client IPv6 address (with the source port number appended) preserved in the X-Forwarded-For header.&lt;/P&gt;
&lt;PRE&gt;{&lt;BR /&gt;  "region": "SwedenCentral",&lt;BR /&gt;  "clientIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769",&lt;BR /&gt;  "xForwardedFor": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769",&lt;BR /&gt; &lt;STRONG&gt; "remoteAddress": "::ffff:10.1.0.4"&lt;/STRONG&gt;,&lt;BR /&gt;  "isPrivateIP": false,&lt;BR /&gt;  "expressIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769",&lt;BR /&gt;  "connectionInfo": {&lt;BR /&gt;  &amp;nbsp; "remoteAddress": "::ffff:10.1.0.4",&lt;BR /&gt;  &amp;nbsp; "remoteFamily": "IPv6",&lt;BR /&gt;  &amp;nbsp; "localAddress": "::ffff:10.1.1.68",&lt;BR /&gt;  &amp;nbsp; "localPort": 80&lt;BR /&gt;  },&lt;BR /&gt;  "allHeaders": {&lt;BR /&gt;  &amp;nbsp; &lt;STRONG&gt;"x-forwarded-for": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21:60769"&lt;/STRONG&gt;&lt;BR /&gt;  },&lt;BR /&gt;  "deploymentAdvice": "Public IP detected successfully"&lt;BR /&gt;}&lt;/PRE&gt;
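&lt;P&gt;As the output shows, Application Gateway appends the client's source port to the X-Forwarded-For value after a colon. Since the colon is also the group separator inside an IPv6 address, an origin that wants the bare address has to split carefully. A minimal sketch (the function name is ours, not part of any Azure tooling; it handles IPv4 and uncompressed IPv6 values like the one above, while compressed addresses with an appended port remain inherently ambiguous):&lt;/P&gt;

```shell
# Strip a trailing ":port" from an X-Forwarded-For entry.
# A full, uncompressed IPv6 address has 7 colons, so 8 or more
# means a port was appended; an IPv4 value with a colon is "ip:port".
strip_appended_port() {
  local value=$1
  local colons=${value//[^:]/}     # keep only the colons, to count them
  if [[ $value == *.*.*.*:* ]]; then
    echo "${value%:*}"             # IPv4 with appended port
  elif (( ${#colons} > 7 )); then
    echo "${value%:*}"             # uncompressed IPv6 with appended port
  else
    echo "$value"                  # no port detected
  fi
}
```

&lt;P&gt;For the value in the output above, this yields `2001:1c04:3404:9500:fd9b:58f4:1fb2:db21` and discards the port `60769`.&lt;/P&gt;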
&lt;P&gt;&lt;STRONG&gt;Limitations &amp;amp; Considerations:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Application Gateway v1 SKUs are not supported for IPv6.&lt;/EM&gt;&amp;nbsp;If you have an older deployment on v1, you’ll need to migrate to v2.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;IPv6-only Application Gateway is not allowed.&lt;/EM&gt;&amp;nbsp;You must have IPv4 alongside IPv6 (the service must be dual-stack). This is usually fine, as dual-stack ensures all clients are covered.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;No IPv6 backend addresses:&lt;/EM&gt;&amp;nbsp;The backend pool must have IPv4 addresses.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Management and Monitoring:&lt;/EM&gt; Application Gateway logs traffic from IPv6 clients to Log Analytics (the client IP field will show IPv6 addresses).&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Security:&lt;/EM&gt; Azure’s infrastructure provides basic DDoS protection for IPv6 endpoints just as for IPv4. However, it is highly recommended to deploy Azure DDoS Protection Standard: this provides enhanced mitigation tailored to your specific deployment. Consider using the Web Application Firewall function for protection against application layer attacks.&lt;/P&gt;
&lt;H2&gt;Application Gateway - multi-region&lt;/H2&gt;
&lt;P&gt;Mission-critical web applications should be deployed in multiple Azure regions, achieving higher availability and lower latency for users worldwide. In a multi-region scenario, you need a mechanism to direct IPv6 client traffic to the “nearest” or healthiest region. Azure Application Gateway by itself is a regional service, so to use it in multiple regions, we use&amp;nbsp;&lt;STRONG&gt;Azure Traffic Manager&lt;/STRONG&gt; for global DNS load balancing, or use Azure Front Door (covered in the next section) as an alternative. This section focuses on the&amp;nbsp;&lt;STRONG&gt;Traffic Manager + Application Gateway&lt;/STRONG&gt;&amp;nbsp;approach to multi-region IPv6 delivery.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview" target="_blank" rel="noopener"&gt;Azure Traffic Manager&lt;/A&gt;&amp;nbsp;is a DNS-based load balancer that can distribute traffic across endpoints in different regions. It works by responding to DNS queries with the appropriate endpoint FQDN or IP, based on the configured routing method (Performance, Priority, Geographic). Traffic Manager is agnostic to the IP version: it returns CNAMEs, AAAA records for IPv6 endpoints, or A records for IPv4 endpoints. This makes it suitable for routing IPv6 traffic globally.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Architecture Overview:&lt;/STRONG&gt;&amp;nbsp;Each region has its own dual-stack Application Gateway. Traffic Manager is configured with an endpoint entry for each region’s gateway. The application’s FQDN is now a domain name hosted by Traffic Manager such as ipv6webapp.trafficmanager.net, or a CNAME that ultimately points to it.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;DNS resolution will go through Traffic Manager, which decides which regional gateway’s FQDN to return. The client then connects directly to that Application Gateway’s IPv6 address, as follows:&lt;/P&gt;
&lt;P&gt;1. &lt;STRONG&gt;DNS query:&lt;/STRONG&gt; Client asks for&amp;nbsp;&lt;U&gt;ipv6webapp.trafficmanager.net&lt;/U&gt;, which is hosted in a Traffic Manager profile.&lt;BR /&gt;2. &lt;STRONG&gt;Traffic Manager decision:&lt;/STRONG&gt; Traffic Manager sees an incoming DNS request and chooses the best endpoint (say, Sweden Central) based on routing rules (e.g., geographic proximity or lowest latency).&lt;BR /&gt;3. &lt;STRONG&gt;Traffic Manager response: &lt;/STRONG&gt;Traffic Manager returns the FQDN of the Sweden Central Application Gateway to the client.&lt;BR /&gt;4. &lt;STRONG&gt;DNS Resolution:&lt;/STRONG&gt; The client resolves the regional FQDN and receives a AAAA response containing the IPv6 address.&lt;BR /&gt;5. &lt;STRONG&gt;Client connects: &lt;/STRONG&gt;The client’s browser connects to the Sweden Central Application Gateway’s IPv6 address directly. The HTTP/S session is established via IPv6 to that regional gateway, which then handles the request.&lt;BR /&gt;6. &lt;STRONG&gt;Failover: &lt;/STRONG&gt;If that region becomes unavailable, Traffic Manager’s health checks will detect it and subsequent DNS queries will be answered with the FQDN of the secondary region’s gateway.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Deployment Steps for Multi-Region with Traffic Manager:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Set up Dual-Stack Application Gateways in each region: &lt;/STRONG&gt;Similar to the single-region case, deploy an Azure Application Gateway v2 in each desired region (e.g., one in North America, one in Europe). Configure the web application in each region; these should be parallel deployments serving the same content.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Configure a Traffic Manager Profile: &lt;/STRONG&gt;In Azure Traffic Manager, create a profile and choose a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods" target="_blank" rel="noopener"&gt;routing method&lt;/A&gt; (such as Performance for nearest region routing, or Priority for primary/backup failover). Add &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-endpoint-types" target="_blank" rel="noopener"&gt;endpoints&lt;/A&gt; for each region. Since our endpoints are Azure services with IPs, we can either use&amp;nbsp;&lt;EM&gt;Azure endpoints&lt;/EM&gt; (if the Application Gateways have Azure-provided DNS names) or &lt;EM&gt;External endpoints&lt;/EM&gt; using the IP addresses. The simplest way is to use the &lt;EM&gt;Public IP resource&lt;/EM&gt;&amp;nbsp;of each Application Gateway as an Azure endpoint – ensure each App Gateway’s public IP has a DNS label (so it has a FQDN). Traffic Manager will detect those and also be aware of their IPs. Alternatively, use the IPv6 address as an External endpoint directly. Traffic Manager allows IPv6 addresses and will return AAAA records for them.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;DNS Setup:&lt;/STRONG&gt; Traffic Manager profiles have a FQDN (like&amp;nbsp;&lt;U&gt;ipv6webapp.trafficmanager.net&lt;/U&gt;). You can either use that as your service’s CNAME, or you can configure your custom domain to CNAME to the Traffic Manager profile. &amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Health Probing:&lt;/STRONG&gt; Traffic Manager continuously checks the health of endpoints. When endpoints are Azure App Gateways, it uses HTTP/S probes to a specified URI path, to each gateway’s address. Make sure each App Gateway has a listener on the probing endpoint (e.g., a health check page) and that health probes are enabled.&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Testing Failover and Distribution: &lt;/STRONG&gt;Test the setup by querying DNS from different geographical locations (to see if you get the nearest region’s IP). Also simulate a region down (stop the App Gateway or backend) and observe if Traffic Manager directs traffic to the other region. Because DNS TTLs are involved, failover isn’t instant but typically within a couple of minutes depending on TTL and probe interval.&lt;/LI&gt;
&lt;/OL&gt;
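&lt;P&gt;Steps 2 through 4 can be sketched with Azure CLI. This is a sketch under stated assumptions: the resource group, profile name, probe path, and the public IP resource IDs of the regional Application Gateways are placeholders, and each of those public IPs is assumed to carry a DNS label as described above.&lt;/P&gt;

```shell
# Step 2: profile with Performance routing; the probe settings here
# double as the health-check configuration from step 4
az network traffic-manager profile create \
  --resource-group rg-global --name ipv6webapp-tm \
  --unique-dns-name ipv6webapp --routing-method Performance \
  --protocol HTTPS --port 443 --path "/" --ttl 30

# One Azure endpoint per regional Application Gateway public IP
az network traffic-manager endpoint create \
  --resource-group rg-global --profile-name ipv6webapp-tm \
  --name swedencentral --type azureEndpoints \
  --target-resource-id "$SWEDEN_APPGW_PIP_ID"

az network traffic-manager endpoint create \
  --resource-group rg-global --profile-name ipv6webapp-tm \
  --name eastus2 --type azureEndpoints \
  --target-resource-id "$EASTUS2_APPGW_PIP_ID"
```

&lt;P&gt;The profile's FQDN (step 3) is derived from `--unique-dns-name`, here `ipv6webapp.trafficmanager.net`.&lt;/P&gt;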
&lt;P&gt;&lt;STRONG&gt;Considerations in this Architecture:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Latency vs Failover:&lt;/EM&gt;&amp;nbsp;Traffic Manager as a DNS load balancer directs clients at DNS resolution time; once a client has an answer (an IP address), it keeps sending to that address until the DNS record's TTL expires and it re-resolves. This is fine for most web apps. Ensure the TTL in the Traffic Manager profile is not too high (the default is 60 seconds).&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;IPv6 DNS and Connectivity:&lt;/EM&gt;&amp;nbsp;Confirm that each region’s IPv6 address is correctly configured and reachable globally. Azure’s public IPv6 addresses are globally routable. Traffic Manager itself is a global service and fully supports IPv6 in its decision-making.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Cost:&lt;/EM&gt;&amp;nbsp;Using multiple Application Gateways and Traffic Manager incurs costs for each component (App Gateway is per hour + capacity unit, Traffic Manager per million DNS queries). This is a trade-off for high availability.&lt;/P&gt;
&lt;P&gt;-&amp;nbsp;&lt;EM&gt;Alternative: Azure Front Door: &lt;/EM&gt;Azure Front Door is an alternative to&amp;nbsp;the Traffic Manager + Application Gateway combination. Front Door can automatically handle global routing and failover at layer 7 without DNS-based limitations, offering potentially faster failover. Azure Front Door is discussed in the next section.&lt;/P&gt;
&lt;P&gt;In summary, multi-region IPv6 web delivery with Application Gateways uses Traffic Manager for global DNS load balancing. Traffic Manager will seamlessly return IPv6 addresses for IPv6 clients, ensuring that no matter where an IPv6-only client is, it gets pointed to the nearest available regional deployment of your app. This design achieves global resiliency (withstanding a regional outage) and low-latency access, leveraging IPv6 connectivity on each regional endpoint.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; The global FQDN of our application is now&amp;nbsp;&lt;U&gt;ipv6webapp.trafficmanager.net&lt;/U&gt;&amp;nbsp;and clients will use this FQDN to access the application regardless of their geographical location.&lt;BR /&gt;Traffic Manager will return the FQDN of one of the regional deployments, `ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com` or `ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com` depending on the routing method configured, the health state of the regional endpoints and the client's location. Then the client resolves the regional FQDN through its local DNS server and connects to the regional instance of the application.&lt;/P&gt;
&lt;P&gt;DNS resolution from a client in Europe:&lt;/P&gt;
&lt;PRE&gt;Resolve-DnsName ipv6webapp.trafficmanager.net&lt;BR /&gt;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Type &amp;nbsp; TTL &amp;nbsp; Section &amp;nbsp; &amp;nbsp;NameHost&lt;BR /&gt;---- &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ---- &amp;nbsp; --- &amp;nbsp; ------- &amp;nbsp; &amp;nbsp;--------&lt;BR /&gt;ipv6webapp.trafficmanager.net &amp;nbsp;CNAME &amp;nbsp;59 &amp;nbsp; &amp;nbsp;Answer &amp;nbsp; &amp;nbsp; ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com&lt;BR /&gt;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; : ipv6webapp-appgw-swedencentral.swedencentral.cloudapp.azure.com&lt;BR /&gt;QueryType &amp;nbsp;: AAAA&lt;BR /&gt;TTL &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;: 10&lt;BR /&gt;Section &amp;nbsp; &amp;nbsp;: Answer&lt;BR /&gt;IP6Address : 2603:1020:1001:25::168&lt;/PRE&gt;
&lt;P&gt;And from a client in the US:&lt;/P&gt;
&lt;PRE&gt;Resolve-DnsName ipv6webapp.trafficmanager.net&lt;BR /&gt;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Type &amp;nbsp; TTL &amp;nbsp; Section &amp;nbsp; &amp;nbsp;NameHost&lt;BR /&gt;---- &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ---- &amp;nbsp; --- &amp;nbsp; ------- &amp;nbsp; &amp;nbsp;--------&lt;BR /&gt;ipv6webapp.trafficmanager.net &amp;nbsp;CNAME &amp;nbsp;60 &amp;nbsp; &amp;nbsp;Answer &amp;nbsp; &amp;nbsp; ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com&lt;BR /&gt;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; : ipv6webappr2-appgw-eastus2.eastus2.cloudapp.azure.com&lt;BR /&gt;QueryType &amp;nbsp;: AAAA&lt;BR /&gt;TTL &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;: 10&lt;BR /&gt;Section &amp;nbsp; &amp;nbsp;: Answer&lt;BR /&gt;IP6Address : 2603:1030:403:17::5b0&lt;/PRE&gt;
&lt;H2&gt;Azure Front Door&lt;/H2&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/frontdoor/front-door-overview" target="_blank" rel="noopener"&gt;Azure Front Door&lt;/A&gt;&amp;nbsp;is an application delivery network with built-in CDN, SSL offload, WAF, and routing capabilities. It provides a single, unified frontend distributed across Microsoft’s edge network. Azure Front Door natively supports IPv6 connectivity.&lt;/P&gt;
&lt;P&gt;For applications that have users worldwide, Front Door offers advantages:&lt;BR /&gt;-&lt;STRONG&gt; Global Anycast Endpoint: &lt;/STRONG&gt;Provides anycast IPv4 and IPv6 addresses, advertised out of all edge locations, with automatic A and AAAA DNS record support.&lt;BR /&gt;- &lt;STRONG&gt;IPv4 and IPv6 origin support: &lt;/STRONG&gt;Azure Front Door supports both IPv4 and IPv6 origins (i.e. backends), both within Azure and externally (i.e. accessible over the internet).&lt;BR /&gt;- &lt;STRONG&gt;Simplified DNS:&lt;/STRONG&gt; Custom domains can be mapped using CNAME records.&lt;BR /&gt;- &lt;STRONG&gt;Layer-7 Routing:&lt;/STRONG&gt; Supports path-based routing and automatic backend health detection.&lt;BR /&gt;- &lt;STRONG&gt;Edge Security: &lt;/STRONG&gt;Includes DDoS protection and optional WAF integration.&lt;/P&gt;
&lt;P&gt;Front Door enables "cross-IP version" scenarios: a client can connect to the Front Door front-end over IPv6, and then Front Door can connect to an IPv4 origin. Conversely, an IPv4-only client can retrieve content from an IPv6 backend via Front Door.&lt;/P&gt;
&lt;P&gt;Front Door preserves the client's source IP address in the X-Forwarded-For header.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note: &lt;/STRONG&gt;Front Door provides managed IPv6 addresses that are not customer-owned resources. Custom domains should use CNAME records pointing to the Front Door hostname rather than direct IP address references.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Private Link Integration&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Azure Front Door Premium introduces &lt;STRONG&gt;Private Link integration&lt;/STRONG&gt;, enabling secure, private connectivity between Front Door and backend resources, without exposing them to the public internet.&lt;/P&gt;
&lt;P&gt;When Private Link is enabled, Azure Front Door establishes a private endpoint within a Microsoft-managed virtual network. This endpoint acts as a secure bridge between Front Door’s global edge network and your origin resources, such as Azure App Service, Azure Storage, Application Gateway, or workloads behind an internal load balancer.&lt;/P&gt;
&lt;P&gt;Traffic from end users still enters through Front Door’s globally distributed POPs, benefiting from features like SSL offload, caching, and WAF protection. However, instead of routing to your origin over public, internet-facing, endpoints, Front Door uses the private Microsoft backbone to reach the private endpoint. This ensures that all traffic between Front Door and your origin remains isolated from external networks.&lt;/P&gt;
&lt;P&gt;The private endpoint connection requires approval from the origin resource owner, adding an extra layer of control. Once approved, the origin can restrict public access entirely, enforcing that all traffic flows through Private Link.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Private Link integration brings the following &lt;STRONG&gt;benefits:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;-&lt;EM&gt; Enhanced Security:&lt;/EM&gt; By removing public exposure of backend services, Private Link significantly reduces the risk of DDoS attacks, data exfiltration, and unauthorized access.&lt;BR /&gt;- &lt;EM&gt;Compliance and Governance:&lt;/EM&gt; Many regulatory frameworks mandate private connectivity for sensitive workloads. Private Link helps meet these requirements without sacrificing global availability.&lt;BR /&gt;- &lt;EM&gt;Performance and Reliability: &lt;/EM&gt;Traffic between Front Door and your origin travels over Microsoft’s high-speed backbone network, delivering low latency and consistent performance compared to public internet paths.&lt;BR /&gt;- &lt;EM&gt;Defense in Depth: &lt;/EM&gt;Combined with Web Application Firewall (WAF), TLS encryption, and DDoS protection, Private Link strengthens your security posture across multiple layers.&lt;BR /&gt;- &lt;EM&gt;Isolation and Control:&lt;/EM&gt; Resource owners maintain control over connection approvals, ensuring that only authorized Front Door profiles can access the origin.&lt;BR /&gt;-&lt;EM&gt; Integration with Hybrid Architectures:&lt;/EM&gt; For scenarios involving AKS clusters, custom APIs, or workloads behind internal load balancers, Private Link enables secure connectivity without requiring public IPs or complex VPN setups.&lt;/P&gt;
&lt;P&gt;Private Link transforms Azure Front Door from a global entry point into a fully private delivery mechanism for your applications, aligning with modern security principles and enterprise compliance needs.&lt;/P&gt;
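&lt;P&gt;Attaching a Private Link-enabled origin can be sketched with Azure CLI (the Premium SKU is required for Private Link). This is an outline, not a tested deployment: the resource group, profile, origin names, host name, and the resource ID of the target (for example an internal load balancer's Private Link service) are placeholders, and the private-link flags should be checked against `az afd origin create --help`.&lt;/P&gt;

```shell
# Premium profile and a global endpoint
az afd profile create --resource-group rg-afd \
  --profile-name ipv6webapp-afd --sku Premium_AzureFrontDoor
az afd endpoint create --resource-group rg-afd \
  --profile-name ipv6webapp-afd --endpoint-name ipv6webapp \
  --enabled-state Enabled

# Origin group with health probing
az afd origin-group create --resource-group rg-afd \
  --profile-name ipv6webapp-afd --origin-group-name og-web \
  --probe-request-type GET --probe-protocol Http --probe-path "/" \
  --sample-size 4 --successful-samples-required 3 \
  --additional-latency-in-milliseconds 50

# Origin reached over Private Link; the connection then awaits
# approval by the owner of the target resource
az afd origin create --resource-group rg-afd \
  --profile-name ipv6webapp-afd --origin-group-name og-web \
  --origin-name eastus2 --host-name 10.2.2.68 \
  --priority 1 --weight 1000 --enabled-state Enabled \
  --enable-private-link true \
  --private-link-resource "$EASTUS2_PLS_ID" \
  --private-link-location eastus2 \
  --private-link-request-message "Front Door origin"
```

&lt;P&gt;Until the origin owner approves the private endpoint connection, Front Door cannot reach the origin; once approved, the origin can disable public access entirely.&lt;/P&gt;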
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; Our application is now placed behind Azure Front Door. We combine a public backend endpoint and Private Link integration to show both in action in a single example. The Sweden Central origin is the public IPv6 endpoint of the regional External Load Balancer, and the origin in East US 2 is connected via Private Link integration.&lt;/P&gt;
&lt;P&gt;The global FQDN is `ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net`, and clients use this FQDN to access the application regardless of their geographical location. The FQDN resolves to Front Door's global anycast address, and the internet routes client requests to the nearest Microsoft edge node from which this address is advertised. Front Door then transparently routes the request to the nearest origin deployment in Azure. Although public endpoints are used in this example, that traffic is routed over the Microsoft network.&lt;/P&gt;
&lt;P&gt;From a client in Europe:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Calling the application's API endpoint at `ipv6webapp-d4f4euhnb8fge4ce.b01.azurefd.net/api/region` shows additional detail.&lt;/P&gt;
&lt;PRE&gt;{&lt;BR /&gt;  "region": "SwedenCentral",&lt;BR /&gt;  "clientIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",&lt;BR /&gt;  "xForwardedFor": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",&lt;BR /&gt;  &lt;STRONG&gt;"remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28",&lt;/STRONG&gt;&lt;BR /&gt;  "isPrivateIP": false,&lt;BR /&gt;  "expressIp": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21",&lt;BR /&gt;  "connectionInfo": {&lt;BR /&gt;  &amp;nbsp; "remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28",&lt;BR /&gt;  &amp;nbsp; "remoteFamily": "IPv6",&lt;BR /&gt;  &amp;nbsp; "localAddress": "2001:db8:1:1::4",&lt;BR /&gt;  &amp;nbsp; "localPort": 80&lt;BR /&gt;  },&lt;BR /&gt;  "allHeaders": {&lt;BR /&gt;  &amp;nbsp; &lt;STRONG&gt;"x-forwarded-for": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21"&lt;/STRONG&gt;,&lt;BR /&gt;  &amp;nbsp; "x-azure-clientip": "2001:1c04:3404:9500:fd9b:58f4:1fb2:db21"&lt;BR /&gt;  },&lt;BR /&gt;  "deploymentAdvice": "Public IP detected successfully"&lt;BR /&gt;}&lt;/PRE&gt;
&lt;P&gt;"remoteAddress": "2a01:111:2053:d801:0:afd:ad4:1b28" is the address from which Front Door sources its request to the origin.&lt;/P&gt;
&lt;P&gt;From a client in the US:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;The detailed view shows that the address calling the backend instance is now a local VNET address: Private Link sources incoming traffic from an address taken from the VNET it is integrated with. The original client IP address is again preserved in the X-Forwarded-For header.&lt;/P&gt;
&lt;PRE&gt;{&lt;BR /&gt;&amp;nbsp; "region": "eastus2",&lt;BR /&gt;&amp;nbsp; "clientIp": "2603:1030:501:23::68:55658",&lt;BR /&gt;&amp;nbsp; "xForwardedFor": "2603:1030:501:23::68:55658",&lt;BR /&gt;&amp;nbsp; &lt;STRONG&gt;"remoteAddress": "::ffff:10.2.1.5",&lt;/STRONG&gt;&lt;BR /&gt;&amp;nbsp; "isPrivateIP": false,&lt;BR /&gt;&amp;nbsp; "expressIp": "2603:1030:501:23::68:55658",&lt;BR /&gt;&amp;nbsp; "connectionInfo": {&lt;BR /&gt;&amp;nbsp; &amp;nbsp; "remoteAddress": "::ffff:10.2.1.5",&lt;BR /&gt;&amp;nbsp; &amp;nbsp; "remoteFamily": "IPv6",&lt;BR /&gt;&amp;nbsp; &amp;nbsp; "localAddress": "::ffff:10.2.2.68",&lt;BR /&gt;&amp;nbsp; &amp;nbsp; "localPort": 80&lt;BR /&gt;&amp;nbsp; },&lt;BR /&gt;&amp;nbsp; "allHeaders": {&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &lt;STRONG&gt;"x-forwarded-for": "2603:1030:501:23::68:55658"&lt;/STRONG&gt;&lt;BR /&gt;&amp;nbsp; },&lt;BR /&gt;&amp;nbsp; "deploymentAdvice": "Public IP detected successfully"&lt;BR /&gt;}&lt;/PRE&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;IPv6 adoption for web applications is no longer optional. It is essential as public IPv4 address space is depleted, mobile networks increasingly run IPv6-only, and governments mandate IPv6 reachability for public services. Azure's comprehensive dual-stack networking capabilities provide a clear path forward, enabling organizations to leverage IPv6 externally without sacrificing IPv4 compatibility or requiring complete infrastructure overhauls.&lt;/P&gt;
&lt;P&gt;Azure's externally facing services — including Application Gateway, External Load Balancer, Global Load Balancer, and Front Door — support IPv6 frontends, while Application Gateway and Front Door maintain IPv4 backend connectivity. This architecture allows applications to remain unchanged while instantly becoming accessible to IPv6-only clients.&lt;/P&gt;
&lt;P&gt;For single-region deployments, Application Gateway offers layer-7 features like SSL termination and WAF protection. External Load Balancer provides high-performance layer-4 distribution. Multi-region scenarios benefit from Traffic Manager's DNS-based routing combined with regional Application Gateways, or the superior performance and failover capabilities of Global Load Balancer's anycast addressing.&lt;/P&gt;
&lt;P&gt;Azure Front Door provides global IPv6 delivery with edge optimization, built-in security, and seamless failover across Microsoft's network. Private Link integration allows secure global IPv6 distribution while maintaining backend isolation.&lt;/P&gt;
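&lt;P&gt;As a hedged illustration of the first step of such a deployment, the Azure CLI commands below create a dual-stack virtual network and an IPv6 frontend address. Resource names and address prefixes are hypothetical (2001:db8::/48 is documentation space – substitute your own IPv6 prefix):&lt;/P&gt;
&lt;PRE&gt;# Dual-stack VNet: IPv4 and IPv6 address space side by side
az network vnet create -g my-rg -n my-vnet \
  --address-prefixes 10.0.0.0/16 2001:db8:1::/48

az network vnet subnet create -g my-rg --vnet-name my-vnet -n backend \
  --address-prefixes 10.0.1.0/24 2001:db8:1:1::/64

# Standard-SKU IPv6 public IP for a load balancer or gateway frontend
az network public-ip create -g my-rg -n my-ipv6-pip \
  --sku Standard --version IPv6&lt;/PRE&gt;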
&lt;P&gt;The transition to IPv6 application delivery on Azure is straightforward: enable dual-stack addressing on virtual networks, configure IPv6 frontends on load balancing services, and update DNS records. With Application Gateway or Front Door, backend applications require no modifications. These Azure services handle the IPv4-to-IPv6 translation seamlessly. This approach ensures both immediate IPv6 accessibility and long-term architectural flexibility as IPv6 adoption accelerates globally.&lt;/P&gt;</description>
      <pubDate>Fri, 14 Nov 2025 13:57:25 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/delivering-web-applications-over-ipv6/ba-p/4469638</guid>
      <dc:creator>Marc de Droog</dc:creator>
      <dc:date>2025-11-14T13:57:25Z</dc:date>
    </item>
    <item>
      <title>Extending Layer-2 (VXLAN) networks over Layer-3 IP network</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/extending-layer-2-vxlan-networks-over-layer-3-ip-network/ba-p/4466406</link>
      <description>&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual Extensible LAN (VXLAN)&lt;/STRONG&gt; is a network virtualization technology that encapsulates Layer-2 Ethernet frames inside Layer-3 UDP/IP packets. In essence, VXLAN creates a logical Layer-2 overlay network on top of an IP network, allowing Ethernet segments (VLANs) to be stretched across routed infrastructure. A key advantage is scale: VXLAN uses a 24-bit segment ID (VNI) instead of the 12-bit VLAN ID, supporting around&amp;nbsp;&lt;STRONG&gt;16 million isolated networks&lt;/STRONG&gt; versus the 4,094 VLAN limit. This makes VXLAN ideal for large cloud data centers and multi-tenant environments that demand many distinct network segments.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;VXLAN’s Layer-2 overlays bring &lt;STRONG&gt;flexibility and mobility&lt;/STRONG&gt; to modern architectures. Because VXLAN tunnels can span multiple Layer-3 domains, organizations can extend VLANs across different sites or subnets – for example, a tunnel that stretches a segment between two data centers over an IP WAN, as long as the underlying tunnel endpoints are IP-reachable. This enables seamless workload mobility and disaster recovery: virtual machines or applications can move between physical locations &lt;STRONG&gt;without changing IP addresses&lt;/STRONG&gt;, since they remain in the same virtual L2 network. The overlay approach also decouples the logical network from the physical underlay, meaning you can run your familiar L2 segments over any IP routing infrastructure while leveraging features like equal-cost multi-path (ECMP) load balancing and avoiding large spanning-tree domains. In short, VXLAN combines the best of both worlds – &lt;STRONG&gt;the simplicity of Layer-2 adjacency with the scalability of Layer-3 routing&lt;/STRONG&gt; – making it a foundational tool in cloud networking and software-defined data centers.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Layer-2 VXLAN overlay on a Layer-3 IP network&lt;/STRONG&gt; allows customers or edge networks to stretch Ethernet (VLAN) segments across geographically distributed sites using an IP backbone. This approach preserves VLAN tags end-to-end and enables flexible segmentation across locations &lt;STRONG&gt;without&lt;/STRONG&gt; needing an extended or contiguous Layer-2 network in the core. It also hides the underlying IP network's complexities from the overlay.&lt;/P&gt;
&lt;P&gt;However, it’s crucial to account for &lt;STRONG&gt;MTU overhead&lt;/STRONG&gt; (VXLAN adds ~50 bytes of header) so that the overlay’s VLAN MTU is set smaller than the underlay IP MTU – otherwise fragmentation or packet loss can occur. Additionally, because VXLAN doesn’t inherently signal link status, implementing &lt;STRONG&gt;Bidirectional Forwarding Detection (BFD)&lt;/STRONG&gt; on the VXLAN interfaces provides rapid detection of neighbor failures, ensuring quick rerouting or recovery when a tunnel endpoint goes down.&lt;/P&gt;
&lt;H2&gt;VXLAN overlay use case and benefits&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;VXLAN&lt;/STRONG&gt;&amp;nbsp;is a standard protocol (IETF RFC 7348) that can encapsulate Layer-2 Ethernet frames into Layer-3 UDP/IP packets. By doing so, &lt;STRONG&gt;VXLAN creates an L2 overlay network on top of an L3 underlay&lt;/STRONG&gt;. The VXLAN tunnel endpoints (VTEPs), which can be routers, switches, or hosts, wrap the original Ethernet frame (including its VLAN tag) with an IP/UDP header plus a VXLAN header, then send it through the IP network. The default UDP port for VXLAN is 4789. This mechanism offers several key benefits:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Preserves VLAN Tags and L2 Segmentation:&lt;/STRONG&gt; The entire Ethernet frame is carried across, so the original VLAN ID (802.1Q tag) is maintained end-to-end through the tunnel. Even if an extra tag is added at the ingress for local tunneling, the &lt;STRONG&gt;customer’s inner VLAN tag remains intact across the overlay&lt;/STRONG&gt;. This means a VLAN defined at one site will be recognized at the other site as the same VLAN, enabling seamless L2 adjacency. In practice, VXLAN can transport &lt;STRONG&gt;multiple VLANs transparently&lt;/STRONG&gt; by mapping each VLAN or service to a VXLAN Network Identifier (VNI).&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Flexible network segmentation at scale:&lt;/STRONG&gt; VXLAN uses a 24-bit VNI (VXLAN Network ID), supporting &lt;STRONG&gt;about 16 million distinct segments&lt;/STRONG&gt;, far exceeding the 4094 VLAN limit of traditional 802.1Q networks. This gives architects freedom to create many isolated L2 overlay networks (for multi-tenant scenarios, application tiers, etc.) over a shared IP infrastructure. &lt;STRONG&gt;Geographically distributed sites can share the same VLANs&lt;/STRONG&gt; and broadcast domain via VXLAN, without the WAN routers needing any VLAN configurations. The IP/MPLS core only sees routed VXLAN packets, not individual VLANs, simplifying the underlay configuration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;No need for end-to-end VLANs in underlay:&lt;/STRONG&gt; Traditional solutions to extend L2 might rely on methods like MPLS/VPLS or long ethernet trunk lines, which often require configuring VLANs across the WAN and can’t scale well. In a VXLAN overlay, &lt;STRONG&gt;the intermediate L3 network remains unaware of customer VLANs&lt;/STRONG&gt;, and you &lt;STRONG&gt;don’t need to trunk VLANs across the WAN&lt;/STRONG&gt;. Each site’s VTEP encapsulates and decapsulates traffic, so the core routers/switches just forward IP/UDP packets. This isolation improves scalability and stability—core devices don’t carry massive MAC address tables or STP domains from all sites. It also means the underlay can use robust IP routing (OSPF, BGP, etc.) with ECMP, rather than extending spanning-tree across sites. In short, &lt;STRONG&gt;VXLAN lets you treat the WAN like an IP cloud&lt;/STRONG&gt; while still maintaining Layer-2 connectivity between specific endpoints.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-path and resilience:&lt;/STRONG&gt; Since the overlay runs on IP, it naturally leverages IP routing features. ECMP in the underlay, for example, can load-balance VXLAN traffic across multiple links, something not possible with a single bridged VLAN spanning the WAN. The encapsulated traffic’s UDP header even provides entropy (via source port hashing) to help load-sharing on multiple paths. Furthermore, if one underlay path fails, routing protocols can reroute VXLAN packets via alternate paths without disrupting the logical L2 network. This increases reliability and bandwidth usage compared to a Layer-2 only approach.&lt;/LI&gt;
&lt;/UL&gt;
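&lt;P&gt;As a concrete sketch of the VTEP behavior described above, the Linux kernel's built-in VXLAN support can bridge a local VLAN into a VNI. Interface names, the VNI, and all addresses below are hypothetical, and production deployments typically use EVPN or vendor-specific configuration rather than a static unicast peer:&lt;/P&gt;
&lt;PRE&gt;# Site A VTEP (underlay address 192.0.2.1, remote VTEP 203.0.113.1)
# Create a VXLAN interface for VNI 10100, encapsulating over UDP 4789
ip link add vxlan100 type vxlan id 10100 local 192.0.2.1 remote 203.0.113.1 dstport 4789

# Bridge the local 802.1Q sub-interface for VLAN 100 with the tunnel
ip link add br100 type bridge
ip link set eth1.100 master br100
ip link set vxlan100 master br100

ip link set br100 up
ip link set vxlan100 up&lt;/PRE&gt;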
&lt;P&gt;&lt;STRONG&gt;Diagram: VXLAN Overlay Across a Layer-3 WAN&lt;/STRONG&gt; – Below is a simplified illustration of two sites using a VXLAN overlay. “Site A” and “Site B” each have a local VLAN (e.g. VLAN 100) that they want to bridge across an IP WAN. The VTEPs at each site encapsulate the Layer-2 frames into VXLAN/UDP packets and send them over the IP network. Inside the tunnel, the original VLAN tag is preserved. In this example, a &lt;STRONG&gt;BFD session&lt;/STRONG&gt; (red dashed line) runs between the VTEPs to monitor the tunnel’s health, as explained later.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;EM&gt;Figure 1: Two sites (A and B) extend “VLAN 100” across an IP WAN using a VXLAN tunnel. The inner VLAN tag is preserved over the L3 network. A BFD keepalive (every 900ms) runs between the VXLAN endpoints to detect failures.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;The &lt;STRONG&gt;practical effect&lt;/STRONG&gt; of this design is that devices in Site&amp;nbsp;A and Site&amp;nbsp;B can be in the same VLAN and IP subnet, broadcast to each other, etc., even though they are connected by a routed network. For example, if Site&amp;nbsp;A has a machine in VLAN&amp;nbsp;100 with IP 10.1.100.5/24 and Site&amp;nbsp;B has another in VLAN&amp;nbsp;100 with IP 10.1.100.10/24, they can communicate as if on one LAN – ARP, switches, and VLAN tagging function normally across the tunnel.&lt;/P&gt;
&lt;H2&gt;MTU and overhead considerations&lt;/H2&gt;
&lt;P&gt;One critical consideration for deploying VXLAN overlays is handling the &lt;STRONG&gt;increased packet size&lt;/STRONG&gt; due to encapsulation. A VXLAN packet includes additional headers on top of the original Ethernet frame: an outer IP header, UDP header, and VXLAN header (plus an outer Ethernet header on the WAN interface). &lt;STRONG&gt;This encapsulation adds approximately 50 bytes of overhead&lt;/STRONG&gt; to each packet (for IPv4; about 70 bytes for IPv6).&lt;/P&gt;
&lt;P&gt;In practical terms, if your original Ethernet frame carried the typical 1500-byte payload (1518 bytes with Ethernet header and CRC, or 1522 with a VLAN tag), the VXLAN-encapsulated version will be ~1550 bytes. &lt;STRONG&gt;The underlying IP network &lt;EM&gt;must&lt;/EM&gt; accommodate these larger frames&lt;/STRONG&gt;, or you’ll get fragmentation or drops. Many network links support only 1500-byte MTUs by default, so without adjustments a VXLAN packet carrying a full-sized VLAN frame would exceed that limit. Even on modern networks that run jumbo frames (~9000 bytes), an inner frame larger than ~8950 bytes leaves no room for the ~50-byte overhead and can cause problems such as control-plane failures (e.g. BGP session teardown) or fragmentation of data packets leading to out-of-order delivery.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Solution:&lt;/STRONG&gt; &lt;EM&gt;Either raise the MTU on the underlay network or enforce a lower MTU on the overlay.&lt;/EM&gt; Network architects generally prefer to &lt;STRONG&gt;increase the IP MTU of the core&lt;/STRONG&gt; so the overlay can carry standard 1500-byte Ethernet frames unfragmented. For example, one vendor’s guide recommends configuring at least a &lt;STRONG&gt;1550-byte MTU on all network segments&lt;/STRONG&gt; to account for VXLAN’s ~50B overhead. In enterprise environments, it’s common to use “baby jumbo” frames (e.g. 1600 bytes) or full jumbo (9000 bytes) in the datacenter/WAN to accommodate various tunneling overheads. If increasing the underlay MTU is not possible (say, over an ISP that only supports 1500), then the &lt;STRONG&gt;VLAN MTU on the overlay should be reduced&lt;/STRONG&gt; – for instance, set the VLAN interface MTU to 1450 bytes, so that even with the 50B VXLAN overhead the outer packet remains 1500 bytes. This prevents any IP fragmentation.&lt;/P&gt;
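&lt;P&gt;On a Linux-based VTEP, for instance, enforcing the lower overlay MTU is a one-line change per interface (interface names here are hypothetical):&lt;/P&gt;
&lt;PRE&gt;# Underlay stays at 1500 bytes; reserve ~50 bytes for VXLAN overhead
ip link set dev vxlan100 mtu 1450
ip link set dev br100 mtu 1450&lt;/PRE&gt;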
&lt;P&gt;&lt;STRONG&gt;Why Fragmentation is Undesirable:&lt;/STRONG&gt; VXLAN itself doesn’t include any fragmentation mechanism; it relies on the underlay IP to fragment if needed. But IP fragmentation can harm performance and some devices/drop policies might simply drop oversized VXLAN packets instead of fragmenting. In fact, certain implementations &lt;STRONG&gt;don’t support VXLAN fragmentation or Path MTU discovery&lt;/STRONG&gt; on tunnels. The safe approach is to ensure no encapsulated packet ever exceeds the physical MTU. That means planning your MTUs end-to-end: make the core links slightly larger than the largest expected overlay packet.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Diagram: VXLAN Encapsulation and MTU Layering&lt;/STRONG&gt; – The figure below illustrates the components of a VXLAN-encapsulated frame and how they contribute to packet size. The original Ethernet frame (yellow) with a VLAN tag is wrapped with a new outer Ethernet, IP, UDP, and VXLAN header. The extra headers add ~50 bytes. If the inner (yellow) frame was, say, 1500 bytes of payload plus 18 bytes of Ethernet overhead, the outer packet becomes ~1568 bytes (including new headers and FCS). In practice the&amp;nbsp;&lt;STRONG&gt;old FCS is replaced by a new one&lt;/STRONG&gt;, so the net growth is ~50 bytes. The key takeaway: &lt;EM&gt;the IP transport must handle the total size&lt;/EM&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;EM&gt;Figure 2: Layered view of a VXLAN-encapsulated packet (not to scale). The original Ethernet frame with VLAN tag (yellow) is encapsulated by outer headers (blue/green/red/gray), resulting in ~50 bytes of overhead for IPv4. The outer packet must fit within the WAN MTU (e.g. 1518B if inner frame is 1468B) to avoid fragmentation.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;In summary, &lt;STRONG&gt;ensure the IP underlay’s MTU is configured to accommodate the VXLAN overhead&lt;/STRONG&gt;. If using standard 1500-byte MTUs on the WAN, set your overlay interfaces (VLAN SVIs or bridge MTUs) to around 1450 bytes. Where possible, raising the WAN MTU to 1600 bytes or using jumbo frames throughout is the best practice, providing ample headroom. Always test your end-to-end path with ping sweeps (e.g. using the DF-bit and varying sizes) to verify that the encapsulated packets aren’t being dropped due to MTU limits.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
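&lt;P&gt;A hedged example of such a sweep between hosts at the two sites (Linux &lt;CODE&gt;ping&lt;/CODE&gt;; the address is hypothetical). With the DF bit set, a 1422-byte ICMP payload yields a 1450-byte inner IP packet, which just fits a 1500-byte underlay MTU after ~50 bytes of VXLAN overhead:&lt;/P&gt;
&lt;PRE&gt;# Should succeed: 1422B payload + 28B ICMP/IP headers = 1450B inner packet
ping -M do -s 1422 10.1.100.10

# Should fail on a 1500-byte underlay: 1472B payload gives a 1500B inner
# packet, ~1550B after encapsulation
ping -M do -s 1472 10.1.100.10&lt;/PRE&gt;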
&lt;H2&gt;Neighbor failure detection with BFD&lt;/H2&gt;
&lt;P&gt;One challenge with overlays like VXLAN is that the &lt;STRONG&gt;logical link lacks immediate visibility into physical link status&lt;/STRONG&gt;. If one end of the VXLAN tunnel goes down or the path fails, the other end’s VXLAN interface may remain “up” (since its own underlay interface is still up), potentially blackholing traffic until higher-level protocols notice. VXLAN itself doesn’t send continuous “link alive” messages to check the remote VTEP’s reachability.&lt;/P&gt;
&lt;P&gt;To address this, network engineers deploy &lt;STRONG&gt;BFD&lt;/STRONG&gt; on VXLAN endpoints. BFD is a lightweight protocol specifically designed for rapid failure detection &lt;STRONG&gt;independent of media or routing protocol&lt;/STRONG&gt;. It works by two endpoints periodically sending very fast, small hello packets to each other (often every 50ms or less). If a few consecutive hellos are missed, BFD declares the peer down – often within &amp;lt;1 second, versus several seconds (or tens of seconds) with conventional detection.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Applying BFD to VXLAN:&lt;/STRONG&gt; Many router and switch vendors support running BFD over a VXLAN tunnel or on the VTEP’s loopback adjacencies. When enabled, the two VTEPs will continuously ping each other at the configured interval. If the VXLAN tunnel fails (e.g. one site loses connectivity), BFD on the surviving side will quickly detect the loss of response. &lt;STRONG&gt;This can then trigger corrective actions&lt;/STRONG&gt;: for instance, the BFD can generate logs for the logical interface or notify the routing protocol to withdraw routes via that tunnel. In designs with redundant tunnels or redundant VTEPs, BFD helps achieve sub-second failover – traffic can switch to a backup VXLAN tunnel almost immediately upon a primary failure. Even in a single-tunnel scenario, BFD gives an early alert to the network operator or applications that the link is down, rather than quietly dropping packets.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; If Site&amp;nbsp;A and Site&amp;nbsp;B have two VXLAN tunnels (primary and backup) connecting them, running BFD on each tunnel interface means that if the primary’s path goes down, BFD at Site&amp;nbsp;A and B will detect it within milliseconds and inform the routing control-plane. The network can then shift traffic to the backup tunnel right away. Without BFD, the network might have to wait for a timeout (e.g. OSPF dead interval or even ARP timeouts) to realize the primary tunnel is dead, causing a noticeable outage.&lt;/P&gt;
&lt;P&gt;BFD is protocol-agnostic – it can integrate with any routing protocol. For VXLAN, it’s purely a&amp;nbsp;&lt;STRONG&gt;monitoring mechanism&lt;/STRONG&gt;: lightweight and with minimal overhead on the tunnel. Its messages are small UDP packets (often on port 3784/3785) that can be sourced from the VTEP’s IP. The frequency is configurable based on how fast you need detection vs. how much overhead you can afford; common timers are 300ms with a 3x multiplier (detection in ~1s) for moderate speeds, or even 50ms with 3x (150ms detection) for high-speed failover requirements.&lt;/P&gt;
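&lt;P&gt;As an illustration of these timers, the fragment below configures a BFD peer for 300ms intervals with a 3x multiplier, i.e. roughly 900ms detection. The syntax shown is FRRouting's &lt;CODE&gt;bfdd&lt;/CODE&gt;, the peer address is hypothetical, and vendor CLIs differ:&lt;/P&gt;
&lt;PRE&gt;bfd
 peer 203.0.113.1
  receive-interval 300
  transmit-interval 300
  detect-multiplier 3
 !
!&lt;/PRE&gt;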
&lt;P&gt;&lt;STRONG&gt;Bottom line:&lt;/STRONG&gt; &lt;EM&gt;Implementing BFD dramatically improves the reliability&lt;/EM&gt; of a VXLAN-based L2 extension. Since &lt;STRONG&gt;VXLAN tunnels don’t automatically signal if a neighbor is unreachable&lt;/STRONG&gt;, BFD acts as the heartbeat. Many platforms even allow BFD to directly influence interface state (for example, the VXLAN interface can be configured to go down when BFD fails) so that any higher-level protocols (like VRRP, dynamic routing, etc.) immediately react to the loss. This prevents lengthy outages and ensures the overlay network remains robust even over a complex WAN.&lt;/P&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Deploying a Layer-2 VXLAN overlay across a Layer-3 WAN &lt;STRONG&gt;unlocks powerful capabilities&lt;/STRONG&gt;: you can keep using familiar VLAN-based segmentation across sites while taking advantage of an IP network’s scalability and resilience. It’s a vendor-neutral solution widely supported in modern networking gear. By &lt;STRONG&gt;preserving VLAN tags&lt;/STRONG&gt; over the tunnel, VXLAN makes it possible to stretch subnets and broadcast domains to remote locations for workloads that require Layer-2 adjacency. With the huge VNI address space, segmentation can scale for large enterprises or cloud providers well beyond traditional VLAN limits.&lt;/P&gt;
&lt;P&gt;However, to realize these benefits successfully, &lt;STRONG&gt;careful attention must be paid to MTU and link monitoring&lt;/STRONG&gt;. Always accommodate the ~50-byte VXLAN overhead by configuring proper MTUs (or adjusting the overlay’s MTU) – this prevents fragmentation and packet loss that can be very hard to troubleshoot after deployment. And since a VXLAN tunnel’s health isn’t apparent to switches/hosts by default, use tools like BFD to &lt;STRONG&gt;add fast failure detection&lt;/STRONG&gt;, thereby avoiding black holes and improving convergence times. In doing so, you ensure that your stretched network is not only functional but also resilient and performant.&lt;/P&gt;
&lt;P&gt;By following these guidelines – &lt;STRONG&gt;leveraging VXLAN for flexible L2 overlays, minding the MTU, and bolstering with BFD – network engineers can build a robust, wide-area Layer-2 extension&lt;/STRONG&gt; that behaves nearly indistinguishably from a local LAN, yet rides on the efficiency and reliability of a Layer-3 IP backbone. Enjoy the best of both worlds: VLANs without borders, and an IP network without unnecessary constraints.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;References:&lt;/STRONG&gt; VXLAN technical overview and best practices from vendor documentation and industry sources have been used to ensure accuracy in the above explanations. This ensures the blog is grounded in real-world proven knowledge while remaining vendor-neutral and applicable to a broad audience of cloud and network professionals.&lt;/P&gt;</description>
      <pubDate>Mon, 10 Nov 2025 17:29:05 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/extending-layer-2-vxlan-networks-over-layer-3-ip-network/ba-p/4466406</guid>
      <dc:creator>SaravSubramanian</dc:creator>
      <dc:date>2025-11-10T17:29:05Z</dc:date>
    </item>
    <item>
      <title>Simplify container network metrics filtering in Azure Container Networking Services for AKS</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/simplify-container-network-metrics-filtering-in-azure-container/ba-p/4468221</link>
      <description>&lt;P&gt;We’re excited to introduce &lt;STRONG&gt;container network metrics filtering&lt;/STRONG&gt; in &lt;STRONG&gt;Azure Container Networking Services&lt;/STRONG&gt;&amp;nbsp;for &lt;STRONG&gt;Azure Kubernetes Service (AKS) &lt;/STRONG&gt;is&lt;STRONG&gt; &lt;/STRONG&gt;now in &lt;STRONG&gt;public preview&lt;/STRONG&gt;! This capability transforms how you manage network observability in Kubernetes clusters by giving you control over what metrics matter most.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Why excessive metrics are a problem (and how we’re fixing it)&lt;/H2&gt;
&lt;P&gt;In today’s large-scale, microservices-driven environments, teams often face &lt;STRONG&gt;metrics bloat&lt;/STRONG&gt;,&lt;STRONG&gt; &lt;/STRONG&gt;collecting far more data than they need. The result?&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;High storage &amp;amp; ingestion costs:&lt;/STRONG&gt; Paying for data you’ll never use.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Cluttered dashboards:&lt;/STRONG&gt; Hunting for critical latency spikes in a sea of irrelevant pod restarts.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Operational overhead:&lt;/STRONG&gt; Slower queries, higher maintenance, and fatigue.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Our new filtering capability solves this by letting you define &lt;STRONG&gt;precise filters at the pod level&lt;/STRONG&gt; using standard Kubernetes custom resources. You collect only what matters, before it ever reaches your monitoring stack.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Key Benefits: Signal Over Noise&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;th&gt;Benefit&lt;/th&gt;&lt;th&gt;Your Gain&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Fine-grained control&lt;/td&gt;&lt;td&gt;Filter by namespace or pod label. Target critical services and ignore noise.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Cost optimization&lt;/td&gt;&lt;td&gt;Reduce ingestion costs for Prometheus, Grafana, and other tools.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Improved observability&lt;/td&gt;&lt;td&gt;Cleaner dashboards and faster troubleshooting with relevant metrics only.&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Dynamic &amp;amp; zero-downtime&lt;/td&gt;&lt;td&gt;Apply or update filters without restarting Cilium agents or Prometheus.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;HR /&gt;
&lt;H2&gt;How it works: Filtering at the source&lt;/H2&gt;
&lt;P&gt;Unlike traditional sampling or post-processing, filtering happens &lt;STRONG&gt;at the Cilium agent level—inside the kernel’s data plane&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;You define filters using the &lt;STRONG&gt;ContainerNetworkMetric&lt;/STRONG&gt; custom resource to include or exclude metrics such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;DNS lookups&lt;/LI&gt;
&lt;LI&gt;TCP connection metrics&lt;/LI&gt;
&lt;LI&gt;Flow metrics&lt;/LI&gt;
&lt;LI&gt;Drop (error) metrics&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This reduces data volume &lt;STRONG&gt;before metrics leave the host&lt;/STRONG&gt;, ensuring your observability tools receive only curated, high-value data.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Example: Filtering flow metrics to reduce noise&lt;/H2&gt;
&lt;P&gt;Here’s a sample &lt;CODE&gt;ContainerNetworkMetric&lt;/CODE&gt; CRD that filters &lt;STRONG&gt;only dropped flows&lt;/STRONG&gt; from the &lt;CODE&gt;traffic/http&lt;/CODE&gt; namespace and excludes flows from &lt;CODE&gt;traffic/fortio&lt;/CODE&gt; pods:&lt;/P&gt;
&lt;PRE&gt;apiVersion: acn.azure.com/v1alpha1
kind: ContainerNetworkMetric
metadata:
  name: container-network-metric
spec:
  filters:
    - metric: flow
      includeFilters:
        # Include only DROPPED flows from traffic namespace
        verdict:
          - "dropped"
        from:
          namespacedPod:
            - "traffic/http"
      excludeFilters:
        # Exclude traffic/fortio flows to reduce noise
        from:
          namespacedPod:
            - "traffic/fortio"
&lt;/PRE&gt;
&lt;DIV class="image-container"&gt;
&lt;H3&gt;Before filtering:&lt;/H3&gt;
&lt;img /&gt;&lt;/DIV&gt;
&lt;DIV class="image-container"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="image-container"&gt;
&lt;H3&gt;After applying filters:&lt;/H3&gt;
&lt;img /&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="image-container"&gt;&lt;HR /&gt;
&lt;H2&gt;Getting started today&lt;/H2&gt;
&lt;P&gt;Ready to simplify your network observability?&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Enable Advanced Container Networking Services:&lt;/STRONG&gt; Make sure &lt;A href="https://aka.ms/acns" target="_blank" rel="noopener"&gt;Advanced Container Networking Services is enabled&lt;/A&gt; on your AKS cluster.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Define Your Filter:&lt;/STRONG&gt; Apply the &lt;A href="https://aka.ms/acns/filteringhowto" target="_blank" rel="noopener"&gt;ContainerNetworkMetric CRD&lt;/A&gt; with your include/exclude rules.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Validate:&lt;/STRONG&gt; Check your settings via ConfigMap and Cilium agent logs.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;See the Impact:&lt;/STRONG&gt; Watch ingestion costs drop and dashboards become clearer!&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;👉 Learn more in the &lt;A href="https://aka.ms/acns/container-network-metrics-filtering" target="_blank" rel="noopener"&gt;Metrics Filtering Guide&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Try the public preview today and take control of your container network metrics.&lt;/STRONG&gt;&lt;/P&gt;
&lt;/DIV&gt;</description>
      <pubDate>Mon, 24 Nov 2025 18:49:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/simplify-container-network-metrics-filtering-in-azure-container/ba-p/4468221</guid>
      <dc:creator>KhushbuP</dc:creator>
      <dc:date>2025-11-24T18:49:33Z</dc:date>
    </item>
    <item>
      <title>Layer 7 Network Policies for AKS: Now Generally Available for Production Security and Observability!</title>
      <link>https://techcommunity.microsoft.com/t5/azure-networking-blog/layer-7-network-policies-for-aks-now-generally-available-for/ba-p/4467598</link>
      <description>&lt;P&gt;We are thrilled to announce that &lt;STRONG&gt;Layer 7 (L7) Network Policies&lt;/STRONG&gt; for Azure Kubernetes Service (AKS), powered by Cilium and Advanced Container Networking Services (ACNS), has reached &lt;STRONG&gt;General Availability (GA)!&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The journey from public preview to GA signifies a critical step: L7 Network Policies are now fully supported, highly optimized, and ready for your most demanding, mission-critical production workloads.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;A Practical Example: Securing a Multi-Tier Retail Application&lt;/H2&gt;
&lt;P&gt;Let's walk through a common production scenario. Imagine a standard retail application running on AKS with three core microservices:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;frontend-app: Handles user traffic and displays product information.&lt;/LI&gt;
&lt;LI&gt;inventory-api: A backend service that provides product stock levels. It should be read-only for the frontend.&lt;/LI&gt;
&lt;LI&gt;payment-gateway: A highly sensitive service that processes transactions. It should only accept POST requests from the frontend to a specific endpoint.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Security Challenge: &lt;/STRONG&gt;A traditional L4 policy would allow the frontend-app to talk to the inventory-api on its port, but it couldn't prevent a compromised frontend pod from trying to exploit a potential vulnerability by sending a DELETE or POST request to modify inventory data.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The L7 Policy Solution:&lt;/STRONG&gt; With GA L7 policies, you can enforce the Principle of Least Privilege at the application layer. Here's how you would protect the inventory-api:&lt;/P&gt;
&lt;PRE style="background: #f4f4f4; padding: 10px; border-radius: 5px;"&gt;apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: protect-inventory-api
spec:
  endpointSelector:
    matchLabels:
      app: inventory-api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend-app
    toPorts:
    - ports:
      - port: "8080" # The application port
        protocol: TCP
      rules:
        http:
        - method: "GET"            # ONLY allow the GET method
          path: "/api/inventory/.*"  # For paths under /api/inventory/
  &lt;/PRE&gt;
&lt;H3&gt;The Outcome:&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Allowed: A legitimate request from the frontend-app (GET /api/inventory/item123) is seamlessly forwarded.&lt;/LI&gt;
&lt;LI&gt;Blocked: If the frontend-app is compromised, any malicious request (such as DELETE /api/inventory/item123) originating from it is blocked at the network layer. This Zero Trust approach protects the inventory-api service from the threat, regardless of the security state of the source service.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This same principle can be applied to protect the payment-gateway, ensuring it only accepts POST requests to the /process-payment endpoint, and nothing else.&lt;/P&gt;
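&lt;P&gt;As a sketch, that payment-gateway policy could look like the following (this assumes the service listens on port 8080 and carries the label app: payment-gateway; adjust both to match your deployment):&lt;/P&gt;
&lt;PRE style="background: #f4f4f4; padding: 10px; border-radius: 5px;"&gt;apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: protect-payment-gateway
spec:
  endpointSelector:
    matchLabels:
      app: payment-gateway    # Assumed label for the payment service
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend-app
    toPorts:
    - ports:
      - port: "8080"          # Assumed application port
        protocol: TCP
      rules:
        http:
        - method: "POST"            # ONLY allow the POST method
          path: "/process-payment"  # For the payment endpoint only
&lt;/PRE&gt;
&lt;P&gt;Any other verb or path reaching the payment-gateway, even from the trusted frontend, is dropped before it touches the application.&lt;/P&gt;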
&lt;H2&gt;Beyond L7: Supporting Zero Trust with Enhanced Security&lt;/H2&gt;
&lt;P&gt;In addition to L7 application-level policies for Zero Trust, we support Layer 3/4 network security and advanced egress controls such as &lt;STRONG&gt;Fully Qualified Domain Name (FQDN) filtering&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This comprehensive approach allows administrators to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Restrict Outbound Connections (L3/L4 &amp;amp; FQDN): &lt;/STRONG&gt;Implement strict egress control by ensuring that workloads can only communicate with approved external services. FQDN filtering is crucial here, allowing pods to connect exclusively to trusted external domains (e.g., www.trusted-partner.com), significantly reducing the risk of data exfiltration and maintaining compliance. To learn more, visit the&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/aks/container-network-security-fqdn-filtering-concepts" target="_blank" rel="noopener"&gt;FQDN Filtering Overview.&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enforce Uniform Policy Across the Cluster (CCNP):&lt;/STRONG&gt; Extend protections beyond individual namespaces. With Cilium Clusterwide Network Policy (CCNP), now generally available, administrators can ensure uniform policy enforcement across multiple namespaces or the entire Kubernetes cluster, simplifying management and strengthening the overall security posture of all workloads.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;CCNP Example: L4 Egress Policy with FQDN Filtering&lt;/H2&gt;
&lt;P&gt;This policy ensures that &lt;STRONG&gt;all pods&lt;/STRONG&gt; across the cluster (CiliumClusterwideNetworkPolicy) are &lt;STRONG&gt;only allowed&lt;/STRONG&gt; to establish outbound connections to the domain *.example.com on the standard web ports (80 and 443).&lt;/P&gt;
&lt;PRE style="background: #f4f4f4; padding: 10px; border-radius: 5px;"&gt;apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-egress-to-example-com
spec:
  endpointSelector: {} # Applies to all pods in the cluster
  egress:
  - toFQDNs:
    - matchPattern: "*.example.com" # Allows access to any subdomain of example.com
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
      - port: "80"
        protocol: TCP
&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;Operational Excellence: Observability You Can Trust&lt;/H2&gt;
&lt;P&gt;A secure system must be observable. With GA, the integrated visibility of your L7 traffic is production ready.&lt;/P&gt;
&lt;P&gt;In our example above, the blocked DELETE request isn't silent. It is immediately visible in your Azure Managed Grafana dashboards as a&lt;STRONG&gt; "Dropped"&lt;/STRONG&gt; flow, attributed directly to the protect-inventory-api policy. This makes security incidents auditable and easy to diagnose, enabling operations teams to detect misconfigurations or threats in real time. Below is a sample dashboard layout screenshot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;HR /&gt;
&lt;H2&gt;Next Steps: Upgrade and Secure Your Production!&lt;/H2&gt;
&lt;P&gt;We encourage you to enable L7 Network Policies on your AKS clusters and level up your network security controls for containerized workloads. We value your feedback as we continue to develop and improve this feature. Please refer to the&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/acns/l7policy" target="_blank" rel="noopener"&gt;Layer 7 Policy Overview&lt;/A&gt;&amp;nbsp;for more information and visit&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/acns/l7policy-how-to" target="_blank" rel="noopener"&gt;How to Apply L7 Policy&lt;/A&gt; for an example scenario.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 08 Nov 2025 03:12:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-networking-blog/layer-7-network-policies-for-aks-now-generally-available-for/ba-p/4467598</guid>
      <dc:creator>KhushbuP</dc:creator>
      <dc:date>2025-11-08T03:12:57Z</dc:date>
    </item>
  </channel>
</rss>

