Announcing the general availability of Route-Maps for Azure Virtual WAN
Route-maps is a feature that gives you the ability to control route advertisements and routing for Virtual WAN virtual hubs. Route-maps lets you have more control of the routing that enters and leaves Azure Virtual WAN site-to-site (S2S) VPN connections, User VPN point-to-site (P2S) connections, ExpressRoute connections, and virtual network (VNet) connections.

Why use Route-maps?

- Route-maps can be used to summarize routes when you have on-premises networks connected to Virtual WAN via ExpressRoute or VPN and are limited by the number of routes that can be advertised from/to the virtual hub.
- You can use Route-maps to control routes entering and leaving your Virtual WAN deployment among on-premises and virtual networks.
- You can control routing decisions in your Virtual WAN deployment by modifying a BGP attribute such as AS-PATH to make a route more, or less, preferable. This is helpful when there are destination prefixes reachable via multiple paths and customers want to use AS-PATH to control best path selection.
- You can easily tag routes using the BGP community attribute in order to manage routes.

What is a Route-map?

A Route-map is an ordered sequence of one or more rules that are applied to routes received or sent by the virtual hub. Each Route-map rule comprises three sections:

1. Match conditions

Route-maps allows you to match routes using Route-prefix, BGP community, and AS-Path. These are the set of conditions that a processed route must meet in order to be considered a match for the rule. Below are the supported match conditions.

| Property | Criterion | Value (example) | Interpretation |
|---|---|---|---|
| Route-prefix | equals | 10.1.0.0/16, 10.2.0.0/16, 10.3.0.0/16, 10.4.0.0/16 | Matches these 4 routes only. Specific prefixes under these routes won't be matched. |
| Route-prefix | contains | 10.1.0.0/16, 10.2.0.0/16, 192.168.16.0/24, 192.168.17.0/24 | Matches all the specified routes and all prefixes underneath. (Example: 10.2.1.0/24 is underneath 10.2.0.0/16.) |
| Community | equals | 65001:100, 65001:200 | The Community property of the route must have both of the communities. Order isn't relevant. |
| Community | contains | 65001:100, 65001:200 | The Community property of the route can have one or more of the specified communities. |
| AS-Path | equals | 65001, 65002, 65003 | The AS-PATH of the routes must have the ASNs listed in the specified order. |
| AS-Path | contains | 65001, 65002, 65003 | The AS-PATH in the routes can contain one or more of the ASNs listed. Order isn't relevant. |

2. Actions to be performed

Route-maps allows you to Drop or Modify routes. Below are the supported actions.

| Property | Action | Value | Interpretation |
|---|---|---|---|
| Route-prefix | Drop | 10.3.0.0/8, 10.4.0.0/8 | The routes specified in the rule are dropped. |
| Route-prefix | Replace | 10.0.0.0/8, 192.168.0.0/16 | Replace all the matched routes with the routes specified in the rule. |
| AS-Path | Add | 64580, 64581 | Prepend the AS-PATH with the list of ASNs specified in the rule. These ASNs are applied in the same order for the matched routes. |
| AS-Path | Replace | 65004, 65005 | The AS-PATH will be set to this list, in the same order, for every matched route. See key considerations for reserved AS numbers. |
| AS-Path | Replace | No value specified | Remove all ASNs in the AS-PATH of the matched routes. |
| Community | Add | 64580:300, 64581:300 | Add all the listed communities to the Community attribute of all the matched routes. |
| Community | Replace | 64580:300, 64581:300 | Replace the Community attribute of all the matched routes with the list provided. |
| Community | Replace | No value specified | Remove the Community attribute from all the matched routes. |
| Community | Remove | 65001:100, 65001:200 | Remove any of the listed communities that are present in the matched routes' Community attribute. |

3. Apply to a connection

You can apply Route-maps on each connection for the inbound, outbound, or both inbound and outbound directions.

Where can I find Route-maps?

Route-maps can be found in the routing section of your Virtual WAN hub.

How do I troubleshoot Route-maps?

The Route-Map Dashboard lets you view the routes for a connection. You can view the routes before or after a Route-Map is applied.

The Deployment of Hollow Core Fiber (HCF) in Azure's Network
Co-authors: Jamie Gaudette, Frank Rey, Tony Pearson, Russell Ellis, Chris Badgley and Arsalan Saljoghei

In the evolving Cloud and AI landscape, Microsoft is deploying state-of-the-art Hollow Core Fiber (HCF) technology in Azure's network to optimize infrastructure and enhance performance for customers. By deploying cabled HCF technology together with HCF-supportable datacenter (DC) equipment, this solution creates ultra-low latency traffic routes with faster data transmission to meet the demands of Cloud and AI workloads.

The successful adoption of HCF technology in Azure's network relies on developing a new ecosystem to take full advantage of the solution, including new cables, field splicing, installation and testing... and Microsoft has done exactly that. Azure has collaborated with industry leaders to deliver components and equipment, cable manufacturing and installation. These efforts, along with advancements in HCF technology, have paved the way for its deployment in-field. HCF is now operational and carrying live customer traffic in multiple Microsoft Azure regions, proving it is as reliable as conventional fiber with no field failures or outages.

This article will explore the installation activities, testing, and link performance of a recent HCF deployment, showcasing the benefits that Azure customers can leverage from HCF technology.

HCF connected Azure DCs are ready for service

The latest HCF cable deployment connects two Azure DCs in a major city, with two metro routes each over 20 km long. The hybrid cables both include 32 HCF and 48 single mode fiber (SMF) strands, with HCFs delivering high-capacity Dense Wavelength Division Multiplexing (DWDM) transmission comparable to SMF. The cables are installed over two diverse paths (the red and blue lines shown in image 1), each with different entry points into the DC. Route diversity at the physical layer enhances network resilience and reliability by allowing traffic to be rerouted through alternate paths, minimizing the risk of network outage should there be a disruption. It also allows for increased capacity by distributing network traffic more evenly, improving overall network performance and operational efficiency.

Image 1: Satellite image of two Azure DC sites (A & Z) within a metro region interconnected with new ultra-low latency HCF technology, using two diverse paths (blue & red)

Image 2 shows the optical routing that the deployed HCF cables take through both Inside Plant (ISP) and Outside Plant (OSP), for interconnecting terminal equipment within key sites in the region (comprised of DCs, Network Gateways and PoPs).

Image 2: Optical connectivity at the physical layer between DCA and DCZ

The HCF OSP cables have been developed for outdoor use in harsh environments without degrading the propagation properties of the fiber. The cable technology is smaller, faster, and easier to install (using a blown installation method). Alongside cables, various other technologies have been developed and integrated to provide a reliable end-to-end HCF network solution. This includes dedicated HCF-compatible equipment (shown in image 3), such as custom cable joint enclosures, fusion splicing technology, HCF patch tails for cable termination in the DC, and a HCF custom-designed Optical Time Domain Reflectometer (OTDR) to locate faults in the link. These solutions work with commercially available transponders and DWDM technologies to deliver multi-Tb/s capacities for Azure customers.
Looking more closely at a HCF cable installation, in image 4 the cable is installed by passing it through a blowing-head (1) and inserting it into pre-installed conduit in segments underground along the route. As with traditional installations with conventional cable, the conduit, cable entry/exit, and cable joints are accessible through pre-installed access chambers, typically a few hundred meters apart. The blowing head uses high-pressure air from a compressor to push the cable into the conduit. A single drum-length of cable can be re-fleeted above ground (2) at multiple access points and re-jetted (3) over several kilometers. After the cables are installed inside the conduit, they are jointed at pre-designated access chamber locations. These house the purpose designed cable joint enclosures.

Image 4: Cable preparation and installation during in-field deployment

Image 5 shows a custom HCF cable joint enclosure in the field, tailored to protect HCFs for reliable data transmission. These enclosures organize the many HCF splices inside and are placed in underground chambers across the link.

Image 5: 1) HCF joint enclosure in a chamber in-field 2) Open enclosure showing fiber loop storage protected by colored tubes at the rear-side of the joint 3) Open enclosure showing HCF spliced on multiple splice tray layers

Inside the DC, connectorized 'plug-and-play' HCF-specific patch tails have been developed and installed for use with existing DWDM solutions. The patch tails interface between the HCF transmission and SMF active and passive equipment, each containing two SMF compatible connectors, coupled to the ISP HCF cable. In image 6, this has been terminated to a patch panel and mated with existing DWDM equipment inside the DC.

Image 6: HCF patch tail solution connected to DWDM equipment

Testing

To validate the end-to-end quality of the installed HCF links (post deployment and during its operation), field deployable solutions have been developed and integrated to ensure all required transmission metrics are met and to identify and restore any faults before the link is ready for customer traffic. One such solution is Microsoft's custom-designed HCF-specific OTDR, which helps measure individual splice losses and verify attenuation in all cable sections. This is checked against rigorous Azure HCF specification requirements. The OTDR tool is invaluable for locating high splice losses or faults that need to be reworked before the link can be brought into service. The diagram below shows an OTDR trace detecting splice locations and splice loss levels (dB) across a single strand of installed HCF. The OTDR can also continuously monitor HCF links and quickly locate faults, such as cable cuts, for quick recovery and remediation. For this deployment, a mean splice loss of 0.16 dB was achieved, with some splices as low as 0.04 dB, comparable to conventional fiber. Low attenuation and splice loss helps to maintain higher signal integrity, supporting longer transmission reach and higher traffic capacity. There are ongoing Azure HCF roadmap programs to continually improve this.

Performance

Before running customer traffic on the link, the fiber is tested to ensure reliable, error-free data transmission across the operating spectrum by counting lost or error bits. Once confirmed, the link is moved into production, allowing customer traffic to flow on the route. These optical tests, tailored to HCF, are carried out by the installer to meet Azure's acceptance requirements.
Image 8 illustrates the flow of traffic across a HCF link, dictated by changing demand on capacity and routing protocols in the region, which fluctuate throughout the day. The HCF span supports varying levels of customer traffic from the point the link was made live, without incurring any outages or link flaps.

A critical metric for measuring transmission performance over each HCF path is the instantaneous Pre-Forward Error Correction (FEC) Bit Error Rate (BER) level. Pre-FEC BER measures errors in a digital data stream at the receiver before any error correction is applied. This is crucial for transmission quality when the link carries data traffic; lower levels mean fewer errors and higher signal quality, essential for reliable data transmission.

The following graph (image 9) shows the evolution of the Pre-FEC BER level on a HCF span once the link is live. A single strand of HCF is represented by a color, with all showing minimal fluctuation. This demonstrates very stable Pre-FEC BER levels, well below -3.4 (log10), across all 400G optical transponders, operating over all channels during a 24-day period. This indicates the network can handle high-data transmission efficiently with no Post-FEC errors, leading to high customer traffic performance and reliability.

Image 9: Very stable Pre-FEC BER levels across the HCF span over 20 days

The graph below demonstrates the optical loss stability over one entire span, which is comprised of two HCF strands. It was monitored continuously over 20 days using the inbuilt line system and measured in both directions to assess the optical health of the HCF link.

The new HCF cable paths are live and carrying customer traffic across multiple Azure regions. Having demonstrated the end-to-end deployment capabilities and network compatibility of the HCF solution, it is possible to take full advantage of the ultra-stable, high performance and reliable connectivity HCF delivers to Azure customers.

What's next?

Unlocking the full potential of HCF requires compatible, end-to-end solutions. This blog outlines the holistic and deployable HCF systems we have developed to better serve our customers. While we further integrate HCF into more Azure regions, our development roadmap continues: smaller cables with more fibers and enhanced system components to further increase the capacity of our solutions, standardized and simplified deployment and operations, and extended deployable distances for HCF long haul transmission solutions. Creating a more stable, higher capacity, faster network will allow Azure to better serve all its customers.

Learn more about how hollow core fiber is accelerating AI.

Recently published HCF research papers:
- Ultra high resolution and long range OFDRs for characterizing and monitoring HCF DNANFs
- Unrepeated HCF transmission over spans up to 301.7km

Inspection Patterns in Hub-and-Spoke and vWAN Architectures
By shruthi_nair and Mays_Algebary

Inspection plays a vital role in network architecture, and each customer may have unique inspection requirements. This article explores common inspection scenarios in both Hub-and-Spoke and Virtual WAN (vWAN) topologies. We'll walk through design approaches assuming a setup with two Hubs or Virtual Hubs (VHubs) connected to on-premises environments via ExpressRoute. The specific regions of the Hubs or VHubs are not critical, as the same design principles can be applied across regions.

Scenario 1: Hub-and-Spoke Inspection Patterns

In the Hub-and-Spoke scenarios, the baseline architecture assumes the presence of two Hub VNets. Each Hub VNet is peered with its local spoke VNets as well as with the other Hub VNet (Hub2-VNet). Additionally, both Hub VNets are connected to both local and remote ExpressRoute circuits to ensure redundancy.

Note: In Hub-and-Spoke scenarios, connectivity between virtual networks over ExpressRoute circuits across Hubs is intentionally disabled. This ensures that inter-Hub traffic uses VNet peering, which provides a more optimized path, rather than traversing the ExpressRoute circuit.

In Scenario 1, we present two implementation approaches: a traditional method and an alternative leveraging Azure Virtual Network Manager (AVNM).

Option 1: Full Inspection

A widely adopted design pattern is to inspect all traffic, both east-west and north-south, to meet security and compliance requirements. This can be implemented using a traditional Hub-and-Spoke topology with VNet Peering and User-Defined Routes (UDRs), or by leveraging AVNM with Connectivity Configurations and centralized UDR management.

In the traditional approach:
- VNet Peering is used to connect each spoke to its local Hub, and to establish connectivity between the two Hubs.
- UDRs direct traffic to the firewall as the next hop, ensuring inspection before reaching its destination. These UDRs are applied at the Spoke VNets, the Gateway Subnet, and the Firewall Subnet (especially for inter-region scenarios), as shown in the diagram below; a minimal CLI sketch of a spoke UDR follows the connectivity table at the end of this option.

As your environment grows, managing individual UDRs and VNet Peerings manually can become complex. To simplify deployment and ongoing management at scale, you can use AVNM. With AVNM:
- Use the Hub-and-Spoke connectivity configuration to manage routing within a single Hub.
- Use the Mesh connectivity configuration to establish inter-Hub connectivity between the two Hubs.

AVNM also enables centralized creation, assignment, and management of UDRs, streamlining network configuration at scale.

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Spoke | ✅ |
| Spoke ↔ Internet | ✅ |
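To make the UDR mechanics concrete, here is a minimal Azure CLI sketch for the traditional approach. The resource group rg-inspection, the spoke VNet vnet-spoke1 with subnet snet-workload, and the firewall private IP 10.0.1.4 are illustrative assumptions, not values taken from the article.

# Route table for the spoke; BGP propagation disabled so on-premises routes cannot bypass the firewall
az network route-table create -g rg-inspection -n rt-spoke1 --disable-bgp-route-propagation true

# Send all traffic (spoke-to-spoke, on-premises, and internet-bound) to the hub firewall
az network route-table route create -g rg-inspection --route-table-name rt-spoke1 \
  -n default-to-firewall --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Associate the route table with the spoke workload subnet
az network vnet subnet update -g rg-inspection --vnet-name vnet-spoke1 \
  -n snet-workload --route-table rt-spoke1

The "Propagate Gateway Routes = true" setting referenced later in Options 3 and 4 maps to --disable-bgp-route-propagation false on the spoke route table.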
Option 2: Selective Inspection Between Azure VNets

In some scenarios, full traffic inspection is not required or desirable. This may be due to network segmentation based on trust zones; for example, traffic between trusted VNets may not require inspection. Other reasons include high-volume data replication, latency-sensitive applications, or the need to reduce inspection overhead and cost.

In this design, VNets are grouped into trusted and untrusted zones. Trusted VNets can exist within the same Hub or across different Hubs. To bypass inspection between trusted VNets, you can connect them directly using VNet Peering or AVNM Mesh connectivity topology. It's important to note that UDRs are still used and configured as described in the full inspection model (Option 1). However, when trusted VNets are directly connected, system routes (created by VNet Peering or Mesh connectivity) take precedence over custom UDRs. As a result, traffic between trusted VNets bypasses the firewall and flows directly. In contrast, traffic to or from untrusted zones follows the UDRs, ensuring it is routed through the firewall for inspection.

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Internet | ✅ |
| Spoke ↔ Spoke (Same Zones) | ❌ |
| Spoke ↔ Spoke (Across Zones) | ✅ |

Option 3: No Inspection to On-premises

In cases where a firewall at the on-premises or colocation site already inspects traffic from Azure, customers typically aim to avoid double inspection. To support this in the above design, traffic destined for on-premises is not routed through the firewall deployed in Azure. For the UDRs applied to the spoke VNets, ensure that "Propagate Gateway Routes" is set to true, allowing traffic to follow the ExpressRoute path directly without additional inspection in Azure.

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ❌ |
| Spoke ↔ Spoke | ✅ |
| Spoke ↔ Internet | ✅ |

Option 4: Internet Inspection Only

While not generally recommended, some customers choose to inspect only internet-bound traffic and allow private traffic to flow without inspection. In such cases, spoke VNets can be directly connected using VNet Peering or AVNM Mesh connectivity. To ensure on-premises traffic avoids inspection, set "Propagate Gateway Routes" to true in the UDRs applied to spoke VNets. This allows traffic to follow the ExpressRoute path directly without being inspected in Azure.

Scenario 2: vWAN Inspection Options

Now we will explore inspection options using a vWAN topology. Across all scenarios, the base architecture assumes two Virtual Hubs (VHubs), each connected to its respective local spoke VNets. vWAN provides default connectivity between the two VHubs, and each VHub is also connected to both local and remote ExpressRoute circuits for redundancy. It's important to note that this discussion focuses on inspection in vWAN using Routing Intent. As a result, bypassing inspection for traffic to on-premises is not supported in this model.

Option 1: Full Inspection

As noted earlier, inspecting all traffic, both east-west and north-south, is a common practice to fulfill compliance and security needs. In this design, enabling Routing Intent provides the capability to inspect both private and internet-bound traffic. Unlike the Hub-and-Spoke topology, this approach does not require any UDR configuration.

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Spoke | ✅ |
| Spoke ↔ Internet | ✅ |

Option 2: Using Different Firewall Flavors for Traffic Inspection

Using different firewall flavors inside the VHub for traffic inspection

Some customers require specific firewalls for different traffic flows, for example, using Azure Firewall for East-West traffic while relying on a third-party firewall for North-South inspection. In vWAN, it's possible to deploy both Azure Firewall and a third-party network virtual appliance (NVA) within the same VHub. However, as of this writing, deploying two different third-party NVAs in the same VHub is not supported. This behavior may change in the future, so it's recommended to monitor the known limitations section for updates. With this design, you can easily control which firewall handles East-West versus North-South traffic using Routing Intent, eliminating the need for UDRs.
Using different firewall flavors inside VHub for traffic inspection

Deploying third-party firewalls in spoke VNets when VHub limitations apply

If the third-party firewall you want to use is not supported within the VHub, or if the managed firewall available in the VHub lacks certain required features compared to the version deployable in a regular VNet, you can deploy the third-party firewall in a spoke VNet instead, while using Azure Firewall in the VHub. In this design, the third-party firewall (deployed in a spoke VNet) handles internet-bound traffic, and Azure Firewall (in the VHub) inspects East-West traffic. This setup is achieved by peering the third-party firewall VNet to the VHub, as well as directly peering it with the spoke VNets. These spoke VNets are also connected to the VHub, as illustrated in the diagram below. UDRs are required in the spoke VNets to forward internet-bound traffic to the third-party firewall VNet. East-West traffic routing, however, is handled using the Routing Intent feature, directing traffic through Azure Firewall without the need for UDRs.

Deploying third-party firewalls in spoke VNets when VHub limitations apply

Note: Although it is not required to connect the third-party firewall VNet to the VHub for traffic flow, doing so is recommended for ease of management and on-premises reachability.

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ Inspected using Azure Firewall |
| Spoke ↔ Spoke | ✅ Inspected using Azure Firewall |
| Spoke ↔ Internet | ✅ Inspected using Third-Party Firewall |

Option 3: Selective Inspection Between Azure VNets

Similar to the Hub-and-Spoke topology, there are scenarios where full traffic inspection is not ideal. This may be due to Azure VNets being segmented into trusted and untrusted zones, where inspection is unnecessary between trusted VNets. Other reasons include large data replication between specific VNets or latency-sensitive applications that require minimizing inspection delays and associated costs.

In this design, trusted and untrusted VNets can reside within the same VHub or across different VHubs. Routing Intent remains enabled to inspect traffic between trusted and untrusted VNets, as well as internet-bound traffic. To bypass inspection between trusted VNets, you can connect them directly using VNet Peering or AVNM Mesh connectivity. Unlike the Hub-and-Spoke model, this design does not require UDR configuration. Because trusted VNets are directly connected, system routes from VNet peering take precedence over routes learned through the VHub. Traffic destined for untrusted zones will continue to follow the Routing Intent and be inspected accordingly.

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Internet | ✅ |
| Spoke ↔ Spoke (Same Zones) | ❌ |
| Spoke ↔ Spoke (Across Zones) | ✅ |

Option 4: Internet Inspection Only

While not generally recommended, some customers choose to inspect only internet-bound traffic and bypass inspection of private traffic. In this design, you only enable the Internet Inspection option within Routing Intent, so private traffic bypasses the firewall entirely. The VHub manages both intra- and inter-VHub routing directly.

Internet Inspection Only

Connectivity Inspection Table

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ❌ |
| Spoke ↔ Internet | ✅ |
| Spoke ↔ Spoke | ❌ |

Subnet Peering
The Basics: VNET Peering

Virtual Networks in Azure can be connected through VNET Peering. Peered VNETs become one routing domain, meaning that the entire IP space of each VNET is visible to and reachable from the other VNET. This is great for many applications: wire speed connectivity without gateways or other complexities. In this diagram, vnet-left and vnet-right are peered:

This is what that looks like in the portal for vnet-left (with the righthand vnet similar):

Effective routes of vm's in either vnet show the entire ip space of the peered vnet. For vm left-1 in vnet-left:

az network nic show-effective-route-table -g sn-rg -n left-1701 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   10.1.0.0/16       VNetPeering
Default   Active   0.0.0.0/0         Internet

And for vm right-1 in vnet-right:

az network nic show-effective-route-table -g sn-rg -n right-159 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   10.0.0.0/16       VNetPeering
Default   Active   0.0.0.0/0         Internet

The Problem

There are situations where completely merging the address spaces of VNETs is not desirable. Think of micro-segmentation: only the front-end of a multi-tier application must be exposed and accessible from outside the VNET, with the application and database tiers ideally remaining isolated. Network Security Groups are now used to achieve isolation; they can block access to the internal tiers from sources outside of the vnet's ip range, or the front-end subnet.

Another scenario is IP address space exhaustion: private ip space is a scarce resource in many companies. There may not be enough free space available to assign each vnet a unique, routable segment large enough to accommodate all resources hosted in each vnet. Again, there may not be a need for all resources to have a routable ip address as they do not need to be accessible from outside the vnet.

The Solution: Subnet Peering

The above scenarios could be solved if it were possible to selectively share a vnet's address range across a peering. Enter Subnet Peering: this new capability allows selective sharing of IP address space across a peering at the subnet level.

Subnet Peering is not yet available through the Azure portal, but can be configured through the Azure CLI. A few new parameters have been added to the existing az network vnet peering create command:

--peer-complete-vnets {0, 1 (default), f, false, n, no, t, true, y, yes}: when set to 0, false or no, configures peering at the subnet level instead of at the vnet level.
--local-subnet-names: list of subnets to be peered in the current vnet (i.e. the vnet called out in the --vnet-name parameter).
--remote-subnet-names: list of subnets to be peered in the remote vnet (i.e. the vnet called out in the --remote-vnet parameter).
--enable-only-ipv6 {0 (default), 1, f, false, n, no, t, true, y, yes}: if set to true, peers only ipv6 space in dual stack vnets.

NB: Although Subnet Peering is available in all Azure regions, subscription allow-listing through this form is still required. Please read and be aware of the caveats under point 11 on the form.
Segmentation

This command peers subnet1 in vnet-left to subnet0 and subnet2 in vnet-right:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --local-subnet-names subnet1 --remote-subnet-names subnet0 subnet2 --remote-vnet vnet-right --peer-complete-vnets 0 --allow-vnet-access 1

Then establish the peering in the other direction:

az network vnet peering create -g sn-rg -n right0-left0 --vnet-name vnet-right --local-subnet-names subnet0 subnet2 --remote-subnet-names subnet1 --remote-vnet vnet-left --peer-complete-vnets 0 --allow-vnet-access 1

This leaves subnet0 and subnet2 in vnet-left, and subnet1 in vnet-right, disconnected, effectively creating segmentation between the peered vnets without the use of NSGs.

Details of the peering from left to right are below. Notice the localAddressSpace and remoteAddressSpace prefixes are those of the peered local and remote subnets. The other direction is similar - with localAddressSpace and remoteAddressSpace prefixes swapped of course.

az network vnet peering show -g sn-rg -n left0-right0 --vnet-name vnet-left
{
  "allowForwardedTraffic": false,
  "allowGatewayTransit": false,
  "allowVirtualNetworkAccess": true,
  "doNotVerifyRemoteGateways": false,
  "etag": "W/\"6a80a7da-36f4-404c-8bd8-d23e1b2bdd9a\"",
  "id": "/subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left/virtualNetworkPeerings/left0-right0",
  "localAddressSpace": {
    "addressPrefixes": [
      "10.0.1.0/24"
    ]
  },
  "localSubnetNames": [
    "subnet1"
  ],
  "localVirtualNetworkAddressSpace": {
    "addressPrefixes": [
      "10.0.1.0/24"
    ]
  },
  "name": "left0-right0",
  "peerCompleteVnets": false,
  "peeringState": "Connected",
  "peeringSyncLevel": "FullyInSync",
  "provisioningState": "Succeeded",
  "remoteAddressSpace": {
    "addressPrefixes": [
      "10.1.0.0/24",
      "10.1.2.0/24"
    ]
  },
  "remoteSubnetNames": [
    "subnet0",
    "subnet2"
  ],
  "remoteVirtualNetwork": {
    "id": "/subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-right",
    "resourceGroup": "sn-rg"
  },
  "remoteVirtualNetworkAddressSpace": {
    "addressPrefixes": [
      "10.1.0.0/24",
      "10.1.2.0/24"
    ]
  },
  "remoteVirtualNetworkEncryption": {
    "enabled": false,
    "enforcement": "AllowUnencrypted"
  },
  "resourceGroup": "sn-rg",
  "resourceGuid": "eb63ec9e-aa48-023e-1514-152f8ab39ae7",
  "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
  "useRemoteGateways": false
}

The peerings show in the portal as "normal" vnet peerings, as subnet peering is not yet supported by the portal. The only indication that this is not a full vnet peering is the peered IP address space - this is the subnet IP range of the remote subnet(s).
Inspecting the effective routes on vm left-1 shows it has routes for subnet0 and subnet2 but not for subnet1 in `vnet-right`:

az network nic show-effective-route-table -g sn-rg -n left-1701 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.1.0.0/24       VNetPeering
Default   Active   10.1.2.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

Note that vm's left-0 and left-2 also have routes for the same subnets in vnet-right, even though their subnets were not listed in the --local-subnet-names parameter:

az network nic show-effective-route-table -g sn-rg -n left-0681 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.1.0.0/24       VNetPeering
Default   Active   10.1.2.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

az network nic show-effective-route-table -g sn-rg -n left-2809 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.1.0.0/24       VNetPeering
Default   Active   10.1.2.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

In the current release of the feature, the ip space of the subnets listed in --remote-subnet-names is propagated to all subnets in the local vnet, not just to the subnets listed in --local-subnet-names. However, the subnets in vnet-right only have the address space of subnet1 in vnet-left propagated, as shown in the effective routes of the vm's on the right:

az network nic show-effective-route-table -g sn-rg -n right-0814 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.0.1.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

az network nic show-effective-route-table -g sn-rg -n right-159 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.0.1.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

az network nic show-effective-route-table -g sn-rg -n right-2283 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.0.1.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

Bidirectional routing therefore only exists between subnet1 in vnet-left and subnet0 and subnet2 in vnet-right, as intended:

This is demonstrated by testing with ping from `left-1` to the vm's on the right:

to right-0:
PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data.
64 bytes from 10.1.0.4: icmp_seq=1 ttl=64 time=1.20 ms
64 bytes from 10.1.0.4: icmp_seq=2 ttl=64 time=1.24 ms
64 bytes from 10.1.0.4: icmp_seq=3 ttl=64 time=1.06 ms
64 bytes from 10.1.0.4: icmp_seq=4 ttl=64 time=1.56 ms
^C
--- 10.1.0.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms

to right-1:
PING 10.1.1.4 (10.1.1.4) 56(84) bytes of data.
^C
--- 10.1.1.4 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4131ms

to right-2:
PING 10.1.2.4 (10.1.2.4) 56(84) bytes of data.
64 bytes from 10.1.2.4: icmp_seq=1 ttl=64 time=3.39 ms
64 bytes from 10.1.2.4: icmp_seq=2 ttl=64 time=0.968 ms
64 bytes from 10.1.2.4: icmp_seq=3 ttl=64 time=3.00 ms
64 bytes from 10.1.2.4: icmp_seq=4 ttl=64 time=2.47 ms
^C
--- 10.1.2.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms

Overlapping IP Space

Now let's add overlapping ip space to both vnets. With the peerings removed, we add 172.16.0.0/16 to each vnet, create subnet3 with 172.16.3.0/24 and insert a vm.
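A quick sketch of that setup step with the Azure CLI. Note that az network vnet update --address-prefixes replaces the full address space list, so the existing ranges must be repeated along with the new one; the current prefixes shown below are assumptions about the lab vnets, and the vm creation itself is omitted:

# Add 172.16.0.0/16 to each vnet (repeat the existing ranges, adjusted to your actual list)
az network vnet update -g sn-rg -n vnet-left --address-prefixes 10.0.0.0/16 172.16.0.0/16
az network vnet update -g sn-rg -n vnet-right --address-prefixes 10.1.0.0/16 172.16.0.0/16

# Create the overlapping subnet3 in both vnets
az network vnet subnet create -g sn-rg --vnet-name vnet-left -n subnet3 --address-prefixes 172.16.3.0/24
az network vnet subnet create -g sn-rg --vnet-name vnet-right -n subnet3 --address-prefixes 172.16.3.0/24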
When we try to peer the full vnets, an error results because of the address space overlap:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --remote-vnet vnet-right --peer-complete-vnets 1 --allow-vnet-access 1
(VnetAddressSpacesOverlap) Cannot create or update peering /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left/virtualNetworkPeerings/left0-right0. Virtual networks /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left and /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-right cannot be peered because their address spaces overlap. Overlapping address prefixes: 172.16.0.0/16, 172.16.0.0/16
Code: VnetAddressSpacesOverlap

Now we will again establish subnet-level peering as in the previous section:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --local-subnet-names subnet1 --remote-subnet-names subnet0 subnet2 --remote-vnet vnet-right --peer-complete-vnets 0 --allow-vnet-access 1

az network vnet peering create -g sn-rg -n right0-left0 --vnet-name vnet-right --local-subnet-names subnet0 subnet2 --remote-subnet-names subnet1 --remote-vnet vnet-left --peer-complete-vnets 0 --allow-vnet-access 1

This completes successfully despite the presence of overlapping address space in both vnets.

az network vnet peering show -g sn-rg -n left0-right0 --vnet-name vnet-left --query "[provisioningState, remoteAddressSpace]"
[
  "Succeeded",
  {
    "addressPrefixes": [
      "10.1.0.0/24",
      "10.1.2.0/24"
    ]
  }
]

az network vnet peering show -g sn-rg -n right0-left0 --vnet-name vnet-right --query "[provisioningState, remoteAddressSpace]"
[
  "Succeeded",
  {
    "addressPrefixes": [
      "10.0.1.0/24"
    ]
  }
]

Now let's try to include vnet-left subnet3, which overlaps with subnet3 on the right, in the peering:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --local-subnet-names subnet1 subnet3 --remote-subnet-names subnet0 subnet2 --remote-vnet vnet-right --peer-complete-vnets 0 --allow-vnet-access 1
(VnetAddressSpacesOverlap) Cannot create or update peering /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left/virtualNetworkPeerings/left0-right0. Virtual networks /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left and /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-right cannot be peered because their address spaces overlap.
Overlapping address prefixes: 172.16.3.0/24
Code: VnetAddressSpacesOverlap

This demonstrates that subnet peering allows for the partial peering of vnets that contain overlapping ip space. As discussed, this can be very helpful in scenarios where private ip space is in short supply.

Looking ahead

Subnet peering is now available in all Azure regions: feel free to test, experiment and use in production. The feature is currently only available through the latest versions of the Azure CLI, Bicep, ARM Template, Terraform and PowerShell. Portal support should be added soon. Meaningful next steps will be to integrate Subnet Peering in both Azure Virtual Network Manager (AVNM) and Virtual WAN, so that the advantages it brings can be leveraged at enterprise scale in network foundations. I will continue to track developments and update this post as appropriate.

Network Redundancy Between AVS, On-Premises, and Virtual Networks in a Multi-Region Design
By Mays_Algebary and shruthi_nair

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This article focuses on network redundancy in a multi-region architecture. For details on single-region design, refer to this blog.

The diagram below illustrates a common network design pattern for multi-region deployments, using either a Hub-and-Spoke or Azure Virtual WAN (vWAN) topology, and serves as the baseline for establishing redundant connectivity throughout this article. In each region, the Hub or Virtual Hub (VHub) extends Azure connectivity to Azure VMware Solution (AVS) via an ExpressRoute circuit. The regional Hub/VHub is connected to on-premises environments by cross-connecting (bowtie) both local and remote ExpressRoute circuits, ensuring redundancy. The concept of weight, used to influence traffic routing preferences, will be discussed in the next section. The diagram below illustrates the traffic flow when both circuits are up and running.

Design Considerations

If a region loses its local ExpressRoute connection, AVS in that region will lose connectivity to the on-premises environment. However, VNets will still retain connectivity to on-premises via the remote region's ExpressRoute circuit. The solutions discussed in this article aim to ensure redundancy for both AVS and VNets.

Looking at the diagram above, you might wonder: why do we need to set weights at all, and why do the AVS-ER connections (1b/2b) use the same weight as the primary on-premises connections (1a/2a)? Weight is used to influence routing decisions and ensure optimal traffic flow. In this scenario, both ExpressRoute circuits, ER1-EastUS and ER2-WestUS, advertise the same prefixes to the Azure ExpressRoute gateway. As a result, traffic from the VNet to on-premises would be ECMPed across both circuits. To avoid suboptimal routing and ensure that traffic from the VNets prefers the local ExpressRoute circuit, a higher weight is assigned to the local path.

It's also critical that the ExpressRoute gateway connections to on-premises (1a/2a) and to AVS (1b/2b) are assigned the same weight. Otherwise, traffic from the VNet to AVS will follow a less efficient route, as AVS routes are also learned over ER1-EastUS via Global Reach. For instance, VNets in EastUS will connect to AVS EUS through the ER1-EastUS circuit via Global Reach (as shown by the blue dotted line), instead of using the direct local path (orange line). This suboptimal routing is illustrated in the diagram below.
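For reference, connection weight is set on the ExpressRoute connection objects created from the virtual network gateway. A minimal Azure CLI sketch is below; the gateway name, resource group, circuit ID placeholders, and the weight values are illustrative assumptions, not values from the article's environment.

# Local on-premises circuit (1a) and local AVS circuit (1b) get the same, higher weight
az network vpn-connection create -g rg-hub-eus -n conn-er1-eastus \
  --vnet-gateway1 ergw-eastus --express-route-circuit2 <ER1-EastUS-circuit-id> \
  --routing-weight 100

az network vpn-connection create -g rg-hub-eus -n conn-avs-er1 \
  --vnet-gateway1 ergw-eastus --express-route-circuit2 <AVS-ER1-circuit-id> \
  --routing-weight 100

# Remote (cross-connected) circuit gets a lower weight so it is only used on failover
az network vpn-connection create -g rg-hub-eus -n conn-er2-westus \
  --vnet-gateway1 ergw-eastus --express-route-circuit2 <ER2-WestUS-circuit-id> \
  --routing-weight 0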
Now let us see what solutions we can have to achieve redundant connectivity. The following solutions apply to both Hub-and-Spoke and vWAN topologies unless noted otherwise.

Note: The diagrams in the upcoming solutions will focus only on illustrating the failover traffic flow.

Solution 1: Network Redundancy via ExpressRoute in a Different Peering Location

In this solution, deploy an additional ExpressRoute circuit in a different peering location within the same metro area (e.g., ER2–PeeringLocation2), and enable Global Reach between this new circuit and the existing AVS ExpressRoute (e.g., AVS-ER1). If you intend to use this second circuit as a failover path, apply prepends to the on-premises prefixes advertised over it. Alternatively, if you want to use it as an active-active redundant path, do not prepend routes; in this case, both AVS and Azure VNets will ECMP to distribute traffic across both circuits (e.g., ER1–EastUS and ER–PeeringLocation2) when both are available.

Note: Compared to the Standard Topology, this design removes both the ExpressRoute cross-connect (bowtie) and the weight settings. When adding a second circuit in the same metro, there's no benefit in keeping them; otherwise traffic from the Azure VNet will prefer the local AVS circuit (AVS-ER1/AVS-ER2) to reach on-premises due to the higher weight, as on-premises routes are also learned over the AVS circuit (AVS-ER1/AVS-ER2) via Global Reach. Also, when connecting the new circuit (e.g., ER–PeeringLocation2), remove all weight settings across the connections. Traffic will follow the optimal path based on BGP prepending on the new circuit, or load-balance (ECMP) if no prepend is applied.

Note: Use a public ASN to prepend the on-premises prefixes, as the AVS circuit (e.g., AVS-ER) will strip the private ASN toward AVS.

Solution Insights
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
- It could be cost prohibitive depending on the bandwidth of the second circuit.

Solution 2: Network Redundancy via ExpressRoute Direct

In this solution, ExpressRoute Direct is used to provision multiple circuits from a single port pair in each region; for example, ER2-WestUS and ER4-WestUS are created from the same port pair. This allows you to dedicate one circuit for local traffic and another for failover to a remote region. To ensure optimal routing, prepend the on-premises prefixes using a public ASN on the newly created circuits (e.g., ER3-EastUS and ER4-WestUS). Remove all weight settings across the connections; traffic will follow the optimal path based on BGP prepending on the new circuit. For instance, if ER1-EastUS becomes unavailable, traffic from AVS and VNets in the EastUS region will automatically route through the ER4-WestUS circuit, ensuring continuity.

Note: Compared to the Standard Topology, this design connects the newly created ExpressRoute circuits (e.g., ER3-EastUS/ER4-WestUS) to the remote region's ExpressRoute gateway (black dotted lines) instead of having the bowtie to the primary circuits (e.g., ER1-EastUS/ER2-WestUS).

Solution Insights
- Easy to implement if you have ExpressRoute Direct.
- ExpressRoute Direct supports over-provisioning, where you can create logical ExpressRoute circuits on top of your existing ExpressRoute Direct resource of 10-Gbps or 100-Gbps, up to the subscribed bandwidth of 20 Gbps or 200 Gbps. For example, you can create two 10-Gbps ExpressRoute circuits within a single 10-Gbps ExpressRoute Direct resource (port pair).
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
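As a rough sketch of the over-provisioning step, an additional logical circuit can be created against the existing ExpressRoute Direct port pair with the Azure CLI. The port name, resource group, and SKU choices below are assumptions for illustration; the AS-path prepend itself is applied on the on-premises routers, not in Azure.

# Look up the ExpressRoute Direct port pair already provisioned in the region
ER_PORT_ID=$(az network express-route port show -g rg-connectivity -n erdirect-westus --query id -o tsv)

# Provision an additional logical circuit (e.g., ER4-WestUS) on the same port pair
az network express-route create -g rg-connectivity -n ER4-WestUS \
  --express-route-port "$ER_PORT_ID" --bandwidth 10 Gbps \
  --sku-tier Premium --sku-family MeteredData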
Solution 3: Network Redundancy via ExpressRoute Metro

Metro ExpressRoute is a new configuration that enables dual-homed connectivity to two different peering locations within the same city. This setup enhances resiliency by allowing traffic to continue flowing even if one peering location goes down, using the same circuit.

Solution Insights
- Higher resiliency: Provides increased reliability with a single circuit.
- Limited regional availability: Currently available in select regions, with more being added over time.
- Cost-effective: Offers redundancy without significantly increasing costs.

Solution 4: Deploy VPN as a Backup to ExpressRoute

This solution mirrors Solution 1 of the single-region design but extends it to multiple regions. In this approach, a VPN serves as the backup path for each region in the event of an ExpressRoute failure. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet. ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required; the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

In this design, you should not cross-connect ExpressRoute circuits (e.g., ER1-EastUS and ER2-WestUS) to the ExpressRoute gateways in the Hub VNets (e.g., Hub-EUS or Hub-WUS). Doing so will lead to routing issues, where the Hub VNet only programs the on-premises routes learned via ExpressRoute. For instance, in the EastUS region, if the primary circuit (ER1-EastUS) goes down, Hub-EUS will receive on-premises routes from both the VPN tunnel and the remote ER2-WestUS circuit. However, it will prefer and program only the ExpressRoute-learned routes from the ER2-WestUS circuit. Since ExpressRoute gateways do not support route transitivity between circuits, AVS connected via AVS-ER will not receive the on-premises prefixes, resulting in routing failures.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, you should prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights
- Cost-effective and straightforward to deploy.
- Increased latency: The VPN tunnel over the internet adds latency due to encryption overhead.
- Bandwidth considerations: Multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 1G). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
- As you can't cross-connect ExpressRoute circuits, VNets will utilize the VPN for failover instead of leveraging the remote region's ExpressRoute circuit.
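For the Hub-and-Spoke variant, the Azure Route Server piece can be sketched roughly as follows with the Azure CLI. The subnet must be named RouteServerSubnet; the resource names and the /27 prefix here are illustrative assumptions, and branch-to-branch must be enabled so that routes transit between the ExpressRoute and VPN gateways.

# Dedicated subnet for Azure Route Server in the hub VNet
az network vnet subnet create -g rg-hub-eus --vnet-name hub-eus-vnet \
  -n RouteServerSubnet --address-prefixes 10.10.1.0/27

SUBNET_ID=$(az network vnet subnet show -g rg-hub-eus --vnet-name hub-eus-vnet \
  -n RouteServerSubnet --query id -o tsv)

az network public-ip create -g rg-hub-eus -n ars-pip --sku Standard

# Deploy Route Server and allow branch-to-branch so VPN and ExpressRoute exchange routes
az network routeserver create -g rg-hub-eus -n hub-eus-ars \
  --hosted-subnet "$SUBNET_ID" --public-ip-address ars-pip
az network routeserver update -g rg-hub-eus -n hub-eus-ars --allow-b2b-traffic true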
Solution 5: Network Redundancy - Multiple On-Premises (Split-Prefix)

In many scenarios, customers advertise the same prefix from multiple on-premises locations to Azure. However, if the customer can split prefixes across different on-premises sites, it simplifies the implementation of a failover strategy using the existing ExpressRoute circuits. In this design, each on-premises site advertises region-specific prefixes (e.g., 10.10.0.0/16 for EastUS and 10.70.0.0/16 for WestUS), along with a common supernet (e.g., 10.0.0.0/8). Under normal conditions, AVS and VNets in each region use longest prefix match to route traffic efficiently to the appropriate on-premises location. For instance, if ER1-EastUS becomes unavailable, AVS and VNets in EastUS will automatically fail over to ER2-WestUS, routing traffic via the supernet prefix to maintain connectivity.

Solution Insights
- Cost-effective: no additional deployment, using existing ExpressRoute circuits.
- Advertising specific prefixes over each region might need additional planning.
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.

Solution 6: Prioritize Network Redundancy for One Region Over Another

If you're operating under budget constraints and can prioritize one region (such as hosting critical workloads in a single location) and want to continue using your existing ExpressRoute setup, this solution could be an ideal fit.

In this design, assume AVS in EastUS (AVS-EUS) hosts the critical workloads. To ensure high availability, AVS-ER1 is configured with Global Reach connections to both the local ExpressRoute circuit (ER1-EastUS) and the remote circuit (ER2-WestUS). Make sure to prepend the on-premises prefixes advertised to ER2-WestUS using a public ASN to ensure optimal routing (no ECMP) from AVS-EUS over both circuits (ER1-EastUS and ER2-WestUS).

On the other hand, AVS in WestUS (AVS-WUS) is connected via Global Reach only to its local region ExpressRoute circuit (ER2-WestUS). If that circuit becomes unavailable, you can establish an on-demand Global Reach connection to ER1-EastUS, either manually or through automation (e.g., a triggered script; a CLI sketch follows this solution's insights). This approach introduces temporary downtime until the Global Reach link is established.

You might be thinking: why not set up Global Reach between the AVS-WUS circuit and the remote region circuits (like connecting AVS-ER2 to ER1-EastUS), just like we did for AVS-EUS? Because it would lead to suboptimal routing. Due to AS path prepending on ER2-WestUS, if both ER1-EastUS and ER2-WestUS are linked to AVS-ER2, traffic would favor the remote ER1-EastUS circuit since it presents a shorter AS path. As a result, traffic would bypass the local ER2-WestUS circuit, causing inefficient routing. That is why for AVS-WUS, it's better to use on-demand Global Reach to ER1-EastUS as a backup path, enabled manually or via automation, only when ER2-WestUS becomes unavailable.

Note: VNets will fail over via the local AVS circuit. E.g., Hub-EUS will route to on-premises through AVS-ER1 and ER2-WestUS via Global Reach Secondary (purple line).

Solution Insights
- Cost-effective.
- Workloads hosted in AVS within the non-critical region will experience downtime if the local region ExpressRoute circuit becomes unavailable, until the on-demand Global Reach connection is established.
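The "on-demand" Global Reach link in Solution 6 can be pre-scripted so it is only executed when ER2-WestUS fails. The sketch below uses the generic ExpressRoute Global Reach CLI between two circuits; the resource groups, circuit names, and the /29 subnet are placeholders. Because the AVS circuit is Microsoft-managed, many environments instead generate an authorization key on the on-premises circuit and create the connection from the AVS private cloud side; the commands here only outline the general flow.

# Authorization key on the on-premises circuit, used if the link is created from the AVS side
az network express-route auth create -g rg-connectivity-eus --circuit-name ER1-EastUS -n avs-wus-gr-auth

# Generic Global Reach form: connect the private peerings of two circuits over an unused /29
ER1_ID=$(az network express-route show -g rg-connectivity-eus -n ER1-EastUS --query id -o tsv)
az network express-route peering connection create -g rg-avs-wus \
  --circuit-name avs-er2-circuit --peering-name AzurePrivatePeering \
  -n gr-avswus-to-er1eastus --peer-circuit "$ER1_ID" --address-prefix 192.168.48.0/29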
Conclusion

Each solution has its own advantages and considerations, such as cost-effectiveness, ease of implementation, and increased resiliency. By carefully planning and implementing these solutions, organizations can ensure operational continuity and optimal traffic routing in multi-region deployments.

DNS best practices for implementation in Azure Landing Zones

Why DNS architecture matters in a Landing Zone

A well-designed DNS layer is the glue that lets workloads in disparate subscriptions discover one another quickly and securely. Getting it right during your Azure Landing Zone rollout avoids painful refactoring later, especially once you start enforcing Zero-Trust and hub-and-spoke network patterns.

Typical Landing-Zone topology

| Subscription | Typical Role | Key Resources |
|---|---|---|
| Connectivity (Hub) | Transit, routing, shared security | Hub VNet, Azure Firewall / NVA, VPN/ER gateways, Private DNS Resolver |
| Security | Security tooling & SOC | Sentinel, Defender, Key Vault (HSM) |
| Shared Services | Org-wide shared apps | ADO and Agents, Automation |
| Management | Ops & governance | Log Analytics, backup, etc. |
| Identity | Directory and auth services | Extended domain controllers, Azure AD DS |

All five subscriptions contain a single VNet. Spokes (Security, Shared, Management, Identity) are peered to the Connectivity VNet, forming the classic hub-and-spoke.

Centralized DNS with mandatory firewall inspection

Objective: All network communication from a spoke must cross the firewall in the hub, including DNS communication.

| Design Element | Best-Practice Configuration |
|---|---|
| Private DNS Zones | Link only to the Connectivity VNet. Spokes have no direct zone links. |
| Private DNS Resolver | Deploy inbound + outbound endpoints in the Connectivity VNet. Link the connectivity virtual network to the outbound resolver endpoint. |
| Spoke DNS Settings | Set custom DNS servers on each spoke VNet equal to the inbound endpoint's IPs. |
| Forwarding Ruleset | Create a ruleset, associate it with the outbound endpoint, and add forwarders: specific domains → on-prem / external servers; wildcard "." → on-prem DNS (for compliance scenarios). |
| Firewall Rules | Allow UDP/TCP 53 from spokes to the resolver inbound endpoint, and from the resolver outbound endpoint to target DNS servers. |

Note: An Azure private DNS zone is a global resource, meaning a single private DNS zone can be used to resolve DNS queries for resources deployed in multiple regions. DNS Private Resolver is a regional resource, meaning it can only link to virtual networks within the same region.

Traffic flow

1. Spoke VM → inbound endpoint (hub).
2. The firewall receives the packet based on the spoke UDR configuration and processes the packet before it is sent to the inbound endpoint IP.
3. The resolver applies forwarding rules to unresolved DNS queries; unresolved queries leave via the outbound endpoint.
4. DNS forwarding rulesets provide a way to route queries for specific DNS namespaces to designated custom DNS servers.

Fallback to internet and NXDOMAIN redirect

Azure Private DNS now supports two powerful features to enhance name resolution flexibility in hybrid and multi-tenant environments.

Fallback to internet
- Purpose: Allows Azure to resolve DNS queries using public DNS if no matching record is found in the private DNS zone.
- Use case: Ideal when your private DNS zone doesn't contain all possible hostnames (e.g., partial zone coverage or phased migrations).
- How to enable: Go to Azure private DNS zones -> Select zone -> Virtual network link -> Edit option.

Ref article: https://learn.microsoft.com/en-us/azure/dns/private-dns-fallback
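A minimal Azure CLI sketch of the centralized pattern: the private DNS zone is linked only to the hub, and each spoke points at the resolver's inbound endpoint. The resource group names, VNet names, and the inbound endpoint IP 10.10.0.68 are assumptions, and the DNS Private Resolver itself is assumed to already be deployed in the Connectivity VNet.

# Create the Private Link DNS zone and link it only to the Connectivity (hub) VNet
az network private-dns zone create -g rg-dns -n privatelink.blob.core.windows.net

az network private-dns link vnet create -g rg-dns \
  -z privatelink.blob.core.windows.net -n link-connectivity \
  --virtual-network vnet-connectivity --registration-enabled false
# (use the VNet's full resource ID when it lives in a different resource group)

# Point each spoke VNet at the resolver's inbound endpoint instead of linking zones to spokes
az network vnet update -g rg-spokes -n vnet-spoke1 --dns-servers 10.10.0.68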
Centralized DNS - when firewall inspection isn't required

Objective: DNS queries are not monitored by the firewall and can bypass it.

- Link every spoke virtual network directly to the required Private DNS Zones so that spokes can resolve PaaS resources directly.
- Keep a single Private DNS Resolver (optional) for on-prem name resolution; spokes can reach its inbound endpoint privately or via VNet peering.
- Spoke-level custom DNS: this can point to extended domain controllers placed within the identity virtual network.

This pattern reduces latency and cost but still centralizes zone management.

Integrating on-premises Active Directory DNS

Create conditional forwarders on each Domain Controller for every Private DNS Zone, pointing them to the DNS Private Resolver inbound endpoint IP address (e.g., blob.core.windows.net, database.windows.net). Do not include the literal privatelink label.

Ref article: https://github.com/dmauser/PrivateLink/tree/master/DNS-Integration-Scenarios#43-on-premises-dns-server-conditional-forwarder-considerations

Note: Avoid selecting the option “Store this conditional forwarder in Active Directory and replicate as follows” in environments with multiple Azure subscriptions and domain controllers deployed across different Azure environments.

Key takeaways
- Linking zones exclusively to the connectivity subscription's virtual network keeps firewall inspection and egress control simple.
- Private DNS Resolver plus forwarding rulesets let you shape hybrid name resolution without custom appliances.
- When no inspection is needed, direct zone links to spokes cut hops and complexity.
- For on-prem AD DNS, a conditional forwarder pointing to the inbound endpoint IP is required; exclude the privatelink name when creating the conditional forwarder, and do not replicate the conditional forwarder zone with AD replication if the customer has a footprint in multiple Azure tenants.

Plan your DNS early, bake it into your infrastructure-as-code, and your landing zone will scale cleanly no matter how many spokes join the hub tomorrow.

Network Redundancy from On-Premises to Azure VMware and VNETs in a Single-Region Design
By shruthi_nair and Mays_Algebary

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This guide presents common architectural patterns for building redundant connectivity between on-premises datacenters, Azure Virtual Networks (VNets), and Azure VMware Solution (AVS) within a single-region deployment. AVS allows organizations to run VMware-based workloads directly on Azure infrastructure, offering a streamlined path for migrating existing VMware environments to the cloud without the need for significant re-architecture or modification.

Connectivity Between AVS, On-Premises, and Virtual Networks

The diagram below illustrates a common network design pattern using either a Hub-and-Spoke or Virtual WAN (vWAN) topology, deployed within a single Azure region. ExpressRoute is used to establish connectivity between on-premises environments and VNets. The same ExpressRoute circuit is extended to connect AVS to the on-premises infrastructure through ExpressRoute Global Reach.

Consideration: This design presents a single point of failure. If the ExpressRoute circuit (ER1-EastUS) experiences an outage, connectivity between the on-premises environment, VNets, and AVS will be disrupted.

Let's examine some solutions to establish redundant connectivity in case ER1-EastUS experiences an outage.

Solution 1: Network Redundancy via VPN

In this solution, one or more VPN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the VPN provides an alternative connectivity path from the on-premises environment to VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet. ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required; the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, you should prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights
- Cost-effective and straightforward to deploy.
- Latency: The VPN tunnel introduces additional latency due to its reliance on the public internet and the overhead associated with encryption.
- Bandwidth considerations: Multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 10G). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
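For the Hub-and-Spoke variant, the VPN leg of this design has to run BGP so that Azure Route Server can exchange routes between the VPN and ExpressRoute gateways. A minimal Azure CLI sketch follows; it assumes an existing route-based VPN gateway named vpngw-hub, an on-premises VPN device at 203.0.113.10 peering with ASN 65010, and a pre-shared key placeholder.

# Represent the on-premises VPN device, including its BGP peering address and ASN
az network local-gateway create -g rg-hub -n lgw-onprem \
  --gateway-ip-address 203.0.113.10 --asn 65010 --bgp-peering-address 10.255.255.1

# Build the tunnel with BGP enabled so on-premises prefixes are learned dynamically
az network vpn-connection create -g rg-hub -n onprem-backup-tunnel \
  --vnet-gateway1 vpngw-hub --local-gateway2 lgw-onprem \
  --shared-key "<pre-shared-key>" --enable-bgp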
The vHub's built-in routing service automatically provides transitive routing between the SD-WAN and ExpressRoute.
Note: In a vWAN topology, to ensure optimal convergence from the VNets when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the SD-WAN so that their AS path is longer than that of the ExpressRoute-learned routes. If you don't prepend, VNets will continue to use the SD-WAN path to on-premises as the primary path.
In this design using Azure vWAN, the SD-WAN can be deployed either within the vHub or in a spoke VNet connected to the vHub; the same principle applies in both cases.
Solution Insights:
If you have an existing SD-WAN, no additional deployment is needed.
Bandwidth Considerations: Vendor specific.
Management Considerations: Third-party dependency and the need to manage the HA deployment, except for SD-WAN SaaS solutions.
Solution 3: Network Redundancy via ExpressRoute in Different Peering Locations
Deploy an additional ExpressRoute circuit at a different peering location and enable Global Reach between ER2-peeringlocation2 and AVS-ER. To use this circuit as a backup path, prepend the on-premises prefixes on the second circuit; otherwise, AVS or the VNets will perform Equal-Cost Multi-Path (ECMP) routing across both circuits to on-premises.
Note: Use a public ASN to prepend the on-premises prefixes, as AVS-ER will strip private ASNs toward AVS. Refer to this link for more details.
Solution Insights:
Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
Could be cost prohibitive depending on the bandwidth of the second circuit.
Solution 4: Network Redundancy via Metro ExpressRoute
Metro ExpressRoute is a new ExpressRoute configuration. This configuration allows you to benefit from a dual-homed setup that facilitates diverse connections to two distinct ExpressRoute peering locations within a city. This configuration offers enhanced resiliency if one peering location experiences an outage.
Solution Insights:
Higher Resiliency: Provides increased reliability with a single circuit.
Limited regional availability: Availability is restricted to specific regions within the metro area.
Cost-effective.
Conclusion: The choice of failover connectivity should be guided by the specific latency and bandwidth requirements of your workloads. Ultimately, achieving high availability and ensuring continuous operations depend on careful planning and effective implementation of network redundancy strategies.
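During failover testing of the solutions above, it helps to confirm which path is actually in use. The following sketch (resource and circuit names are assumptions) lists the BGP routes learned by the hub VPN gateway, which acts as the backup path, and the route table of the ExpressRoute circuit's private peering, which acts as the primary path:

# Minimal verification sketch (assumed resource names).
# Routes learned over BGP by the hub VPN gateway (backup path):
Get-AzVirtualNetworkGatewayLearnedRoute -ResourceGroupName "rg-hub" -VirtualNetworkGatewayName "vpngw-hub"
# Route table on the ExpressRoute circuit's private peering (primary path):
Get-AzExpressRouteCircuitRouteTable -ResourceGroupName "rg-connectivity" -ExpressRouteCircuitName "ER1-EastUS" -PeeringType AzurePrivatePeering -DevicePath Primary

Comparing the prefixes and AS paths returned by these two commands shows whether the prepending described above has taken effect and which path VNets will prefer.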
Wired for Hybrid - What's New in Azure Networking December 2023 edition
Hello Folks, Azure Networking is the foundation of your infrastructure in Azure. Each month we bring you an update on what’s new in Azure Networking. In this blog post, we’ll cover what's new with Azure Networking in December 2023, including the following announcements and how they can help you. Enjoy!
Unlock enterprise AI/ML with confidence: Azure Application Gateway as your scalable AI access layer
As enterprises accelerate their adoption of generative AI and machine learning to transform operations, enhance productivity, and deliver smarter customer experiences, Microsoft Azure has emerged as a leading platform for hosting and scaling intelligent applications. With offerings like Azure OpenAI, Azure Machine Learning, and Cognitive Services, organizations are building copilots, virtual agents, recommendation engines, and advanced analytics platforms that push the boundaries of what is possible. However, scaling these applications to serve global users introduces new complexities: latency, traffic bursts, backend rate limits, quota distribution, and regional failovers must all be managed effectively to ensure seamless user experiences and resilient architectures.
Azure Application Gateway: The AI access layer
Azure Application Gateway plays a foundational role in enabling AI/ML at scale by acting as a high-performance Layer 7 reverse proxy, built to intelligently route, protect, and optimize traffic between clients and AI services. Hundreds of enterprise customers are already using Azure Application Gateway to efficiently manage traffic across diverse Azure-hosted AI/ML models, ensuring uptime, performance, and security at global scale.
The AI delivery challenge
Inferencing against AI/ML backends is more than connecting to a service. It is about doing so:
Reliably: across regions, regardless of load conditions
Securely: protecting access from bad actors and abusive patterns
Efficiently: minimizing latency and request cost
At scale: handling bursts and high concurrency without errors
Observably: with real-time insights, diagnostics, and feedback loops for proactive tuning
Key features of Azure Application Gateway for AI traffic
Smart request distribution: Path-based and round-robin routing across OpenAI and ML endpoints.
Built-in health probes: Automatically bypass unhealthy endpoints.
Security enforcement: WAF, TLS offload, and mTLS to protect sensitive AI/ML workloads.
Unified endpoint: Expose a single endpoint for clients; manage complexity internally.
Observability: Full diagnostics, logs, and metrics for traffic and routing visibility.
Smart rewrite rules: Append paths or rewrite headers per policy.
Horizontal scalability: Easily scale to handle surges in demand by distributing load across multiple regions, instances, or models.
SSE and real-time streaming: Optimize connection handling and buffering to enable seamless AI response streaming.
Azure Web Application Firewall (WAF) Protections for AI/ML Workloads
When deploying AI/ML workloads, especially those exposed via APIs, model endpoints, or interactive web apps, security is as critical as performance. A modern WAF helps protect not just the application, but also the sensitive models, training data, and inference pipelines behind it.
Core Protections:
SQL injection – Prevents malicious database queries targeting training datasets, metadata stores, or experiment tracking systems.
Cross-site scripting (XSS) – Blocks injected scripts that could compromise AI dashboards, model monitoring tools, or annotation platforms.
Malformed payloads – Stops corrupted or adversarially crafted inputs designed to break parsing logic or exploit model pre/post-processing pipelines.
Bot protections – The Bot Protection rule set detects and blocks known malicious bot patterns (credential stuffing, password spraying).
Block traffic based on request body size, HTTP headers, IP addresses, or geolocation to prevent oversized payloads or region-specific attacks on model APIs.
Enforce header requirements to ensure only authorized clients can access model inference or fine-tuning endpoints.
Rate limiting based on IP, headers, or user agent to prevent inference overloads, cost spikes, or denial of service against AI models.
By integrating these WAF protections, AI/ML workloads can be shielded from both conventional web threats and emerging AI-specific attack vectors, ensuring models remain accurate, reliable, and secure.
Architecture
Real-world architectures with Azure Application Gateway
Industries across sectors rely on Azure Application Gateway to securely expose AI and ML workloads:
Healthcare → Protecting patient-facing copilots and clinical decision support tools with HIPAA-compliant routing, private inference endpoints, and strict access control.
Finance → Safeguarding trading assistants, fraud-detection APIs, and customer chatbots with enterprise WAF rules, rate limiting, and region-specific compliance.
Retail & eCommerce → Defending product recommendation engines, conversational shopping copilots, and personalization APIs from scraping and automated abuse.
Manufacturing & industrial IoT → Securing AI-driven quality control, predictive maintenance APIs, and digital twin interfaces with private routing and bot protection.
Education → Hosting learning copilots and tutoring assistants safely behind WAF, preventing misuse while scaling access for students and researchers.
Public sector & government → Enforcing FIPS-compliant TLS, private routing, and zero-trust controls for citizen services and AI-powered case management.
Telecommunications & media → Protecting inference endpoints powering real-time translation, content moderation, and media recommendations at scale.
Energy & utilities → Safeguarding smart grid analytics, sustainability dashboards, and AI-powered forecasting models through secure gateway routing.
Advanced integrations
Position Azure Application Gateway as the secure, scalable network entry point to your AI infrastructure:
Private-only Azure Application Gateway: Host AI endpoints entirely within virtual networks for secure internal access.
SSE support: Configure HTTP settings for streaming completions via Server-Sent Events.
Azure Application Gateway + Azure Functions: Build adaptive policies that reroute traffic based on usage, cost, or time of day.
Azure Application Gateway + API Management to protect OpenAI workloads.
What’s next: Adaptive AI gateways
Microsoft is evolving Azure Application Gateway into a more intelligent, AI-aware platform with capabilities such as:
Auto rerouting to healthy endpoints or more cost-efficient models.
Dynamic token management directly within Azure Application Gateway to optimize AI inference usage.
Integrated feedback loops with Azure Monitor and Log Analytics for real-time performance tuning.
The goal is to transform Azure Application Gateway from a traditional traffic manager into an adaptive inference orchestrator, one that predicts failures, optimizes operational costs, and safeguards AI workloads from misuse.
Conclusion
Azure Application Gateway is not just a load balancer; it is becoming a critical enabler for enterprise-grade AI delivery. Today, it delivers smart routing, security enforcement, adaptive observability, and a compliance-ready architecture, enabling organizations to scale AI confidently while safeguarding performance and cost.
Looking ahead, Microsoft’s vision includes future capabilities such as quota resiliency to intelligently manage and balance AI usage limits, auto-rerouting to healthy endpoints or more cost-efficient models, dynamic token management within Azure Application Gateway to optimize inference usage, and integrated feedback loops with Azure Monitor and Log Analytics for real-time performance tuning. Together, these advancements will transform Azure Application Gateway from a traditional traffic manager into an adaptive inference orchestrator capable of anticipating failures, optimizing costs, and protecting AI workloads from misuse.
If you’re building with Azure OpenAI, Machine Learning, or Cognitive Services, let Azure Application Gateway be your intelligent command center, anticipating needs, adapting in real time, and orchestrating every interaction so your AI can deliver with precision, security, and limitless scale.
For more information, please visit:
What is Azure Application Gateway v2? | Microsoft Learn
What Is Azure Web Application Firewall on Azure Application Gateway? | Microsoft Learn
Azure Application Gateway URL-based content routing overview | Microsoft Learn
Using Server-sent events with Application Gateway (Preview) | Microsoft Learn
AI Architecture Design - Azure Architecture Center | Microsoft Learn
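To make the backend routing and health-probe behavior described above concrete, here is a minimal Azure PowerShell sketch of a health-probed HTTPS backend for an AI endpoint. The endpoint FQDN, probe path, and resource names are assumptions for illustration; the resulting configuration objects would be attached to an existing Application Gateway.

# Minimal sketch (assumed names and values): a probed HTTPS backend for an AI endpoint.
$probe = New-AzApplicationGatewayProbeConfig -Name "ai-probe" -Protocol Https `
    -Path "/status" -Interval 30 -Timeout 30 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings
$pool = New-AzApplicationGatewayBackendAddressPool -Name "ai-backend-pool" `
    -BackendFqdns "contoso-openai.openai.azure.com"
$settings = New-AzApplicationGatewayBackendHttpSetting -Name "ai-https-settings" `
    -Port 443 -Protocol Https -CookieBasedAffinity Disabled `
    -PickHostNameFromBackendAddress -Probe $probe -RequestTimeout 120

With these objects in place, a path-based routing rule can send AI traffic to the pool while the probe automatically bypasses unhealthy endpoints.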
ExpressRoute Gateway Migration Playbook
Objective
The objective of this document is to help with transitioning the ExpressRoute gateway from a non-zone-redundant SKU to a zone-redundant SKU. This upgrade enhances the reliability and availability of the gateway by ensuring that it is resilient to zone failures. Additionally, the public IP associated with the gateway will be upgraded from a Basic SKU to a Standard SKU. This upgrade provides improved performance, security features, and availability guarantees.
The entire migration should be conducted in accordance with IT Service Management (ITSM) guidelines, ensuring that all best practices and standards are followed. Change management protocols should be strictly adhered to, including obtaining necessary approvals, documenting the change, and communicating with stakeholders. Pre-migration and post-migration testing should be performed to validate the success of the migration and to ensure that there are no disruptions to services. The migration should be scheduled within a planned maintenance window to minimize impact on users and services. This window should be carefully selected to ensure that it aligns with business requirements and minimizes downtime. Throughout the process, detailed monitoring and logging should be in place to track progress and quickly address any issues that may arise.
Single-zone ExpressRoute Gateway:
Zone-redundant ExpressRoute Gateway:
Background
The ExpressRoute Gateway Standard SKU is not zone redundant, which lowers the resiliency of the service.
The Basic SKU public IP is retiring at the end of September 2025. After this date, support for this SKU will cease, which can potentially impact ExpressRoute Gateway support.
The ExpressRoute Gateway public IP is used internally for control-plane communication.
Migration Scenarios
This document is equally relevant to all of the following scenarios:
ExpressRoute Gateway Standard/High/Ultraperformance to ErGw1Az/ErGw2Az/ErGw3Az SKU
ExpressRoute Gateway Standard/High/Ultraperformance to Standard/High/Ultraperformance (Multi-Zone) SKU
Single-zone and multi-zone regions
Zone-redundant SKUs (ErGw1Az/ErGw2Az/ErGw3Az) deployed in a single zone
Prerequisites
Stakeholder Approvals: Ensure ITSM approvals are in place. This is to ensure that changes to IT systems are properly reviewed and authorized before implementation.
Change Request (CR): Submit and secure approval for a Change Request to guarantee that all modifications to IT systems are thoroughly reviewed, authorized, and implemented in a controlled manner.
Maintenance Window: When scheduling a maintenance window for production work, consider the following to minimize disruption and ensure efficiency:
Key Considerations
Minimizing Disruption: Schedule during low-activity periods, often outside standard business hours or on weekends.
Ensuring Adequate Staffing: Ensure necessary staff and resources are available, including technical support.
Aligning with Production Cycles: Coordinate with departments to align with production cycles.
Best Practices
Preventive and Predictive Maintenance: Focus on regular inspections, part replacements, and system upgrades.
Effective Communication: Inform stakeholders in advance about the maintenance schedule.
Proper Planning: Use historical data and insights to identify the best time slots for maintenance.
Backup Plan: Document rollback or roll-forward procedures in case of failure.
Following are some important considerations:
Minimizing Disruption: A backup plan minimizes disruptions during planned maintenance, especially for VMs that may shut down or reboot.
Ensuring Data Integrity: It protects against data loss or corruption by backing up critical data beforehand.
Facilitating Quick Recovery: It allows for quick recovery if issues arise, maintaining business continuity and minimizing downtime.
Current Configuration Backup: Back up the configuration for the ExpressRoute Gateway, the ExpressRoute Gateway connection, and the routing table associated with the gateway (if any). PowerShell can be used to back up the ExpressRoute Gateway configuration; a minimal sketch is shown at the end of this section.
Review the Gateway migration article: About migrating to an availability zone-enabled ExpressRoute virtual network gateway - Azure ExpressRoute | Microsoft Learn
Be ready to open a Microsoft Support Ticket (Optional/Proactive): In certain corner-case scenarios where migration encounters a blocker, be ready with the necessary details to open a Microsoft support ticket. In the ticket, provide the maintenance plan to the support engineer and ensure they are fully informed about your environment-specific configuration.
Pre-Migration Testing
Connectivity Tests: Run network reachability tests to validate the current state. Some sample tests could be as follows:
ICMP test from an on-premises virtual machine to an Azure virtual machine to test basic connectivity. Ping from an on-premises virtual machine to an Azure virtual machine.
$ ping <Azure-Virtual-Machine-IP>
Application access test: Access your workload application from on-premises to a service running in Azure. This depends on the customer application. For example, if it is a web application, access the web server from a browser on a laptop or an on-premises machine.
Latency and throughput tests: You can use ACT (the Azure Connectivity Toolkit) to test latency and throughput. Please refer to this link for installation details: Troubleshoot network link performance: Azure ExpressRoute | Microsoft Learn
$ Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 10
To test jitter and packet loss, you can use the following tools.
PSPing: psping -l 1024 -n 100 <Azure_VM_IP>:443
PathPing: pathping <Azure VM IP>
Capture the results from the above tests to compare them after the migration. “iperf” is another tool widely used for throughput and latency testing. A web-based latency tool works fine as well: https://www.azurespeed.com/
Test the whole ExpressRoute Gateway migration process in a lower environment (Optional): In other words, migrate an ExpressRoute Gateway in a non-production environment first.
Advanced Notification
Send an email to the relevant stakeholders and impacted users/teams a few weeks in advance. Send a final notification to the same group a day before.
Stop IOs on hybrid private endpoints
Using private endpoints in Azure over a hybrid connection with ExpressRoute provides a secure, reliable, and high-performance connection to Azure services. By leveraging ExpressRoute's private peering and connectivity models, you can ensure that your traffic remains within the Microsoft global network, avoiding public internet exposure. This setup is ideal for scenarios requiring high security, consistent performance, and seamless integration between on-premises and Azure environments. Private endpoints (PEs) in a virtual network connected over ExpressRoute private peering might experience a connectivity outage during migration. To avoid this, stop all IOs over hybrid private endpoints.
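The following is a minimal sketch of the configuration backup mentioned above. Resource names are assumptions; it simply exports the current gateway, connection, and route table properties to JSON files so they can be compared after migration or used during a rollback.

# Minimal backup sketch (assumed resource names): export current configuration to JSON.
$rg = "rg-connectivity"
Get-AzVirtualNetworkGateway -ResourceGroupName $rg -Name "ergw-prod" |
    ConvertTo-Json -Depth 10 | Out-File "ergw-prod-backup.json"
Get-AzVirtualNetworkGatewayConnection -ResourceGroupName $rg -Name "ergw-prod-connection" |
    ConvertTo-Json -Depth 10 | Out-File "ergw-prod-connection-backup.json"
# Route table associated with the GatewaySubnet, if any:
Get-AzRouteTable -ResourceGroupName $rg -Name "rt-gatewaysubnet" |
    ConvertTo-Json -Depth 10 | Out-File "rt-gatewaysubnet-backup.json"

These JSON exports can be diffed against the post-migration state or handed to the support engineer during a rollback.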
Validate you have enough IP addresses for migration
To proceed with migration, a /27 or larger prefix is required in the GatewaySubnet. The migration feature checks for enough address space during the validation phase. In a scenario where there aren't enough IP addresses available to create the zone-redundant ExpressRoute Gateway, the gateway migration script will add an additional prefix to the subnet. As a user, you don't have to take any action. The migration feature will tell you if it needs more IPs.
Migration Steps
Migration using the Azure portal
Step 1: Test connectivity from on-premises to Azure via the ExpressRoute Gateway. Refer to Step 7.
Step 2: Verify that the Microsoft Azure support engineer is on standby.
Step 3: Send an email to notify users about the start of the planned connectivity outage.
Step 4: Stop or minimize IOs over the ExpressRoute circuit (downtime). Minimizing the IOs will reduce the impact.
Step 5: Follow the document below to migrate the ExpressRoute gateway using the Azure portal:
Migrate to an availability zone-enabled ExpressRoute virtual network gateway in Azure portal - Azure ExpressRoute | Microsoft Learn
Step 6: Restart IOs over the ExpressRoute circuit.
Step 7: Validate and test post-migration connectivity.
Verify BGP peering:
$circuit = Get-AzExpressRouteCircuit -ResourceGroupName <RG> -Name <CircuitName>
Get-AzExpressRouteCircuitPeeringConfig -ExpressRouteCircuit $circuit
Route propagation check:
Get-AzExpressRouteCircuitRouteTable -ResourceGroupName <RG> -ExpressRouteCircuitName <CircuitName> -PeeringType AzurePrivatePeering -DevicePath Primary
Connectivity Tests: Run network reachability tests to validate the current state. Some sample tests could be as follows:
ICMP test from an on-premises virtual machine to an Azure virtual machine to test basic connectivity. Ping from an on-premises virtual machine to an Azure virtual machine.
$ ping <Azure-Virtual-Machine-IP>
Application access test: Access your workload application from on-premises to a service running in Azure. This depends on the customer application. For example, if it is a web application, access the web server from a browser on a laptop or an on-premises machine.
Latency and throughput tests: You can use ACT (the Azure Connectivity Toolkit) to test latency and throughput. Please refer to this link for installation details: Troubleshoot network link performance: Azure ExpressRoute | Microsoft Learn
$ Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 10
To test jitter and packet loss, you can use the following tools.
PSPing: psping -l 1024 -n 100 <Azure_VM_IP>:443
PathPing: pathping <Azure VM IP>
Compare the new results with the ones captured before the outage. Validate that the migration is successful. The ExpressRoute Gateway is migrated to the new SKU.
Migration using PowerShell
Step 1: Test connectivity from on-premises to Azure via the ExpressRoute Gateway. Refer to Step 7.
Step 2: Verify that the Microsoft Azure support engineer is on standby.
Step 3: Send an email to notify users about the start of the planned connectivity outage.
Step 4: Stop or minimize IOs over the ExpressRoute circuit (downtime). Minimizing the IOs will reduce the impact.
Step 5: Follow the document below to migrate the ExpressRoute gateway using PowerShell:
Migrate to an availability zone-enabled ExpressRoute virtual network gateway using PowerShell - Azure ExpressRoute | Microsoft Learn
Step 6: Restart IOs over the ExpressRoute circuit.
Step 7: Validate and test post-migration connectivity.
Verify BGP peering:
$circuit = Get-AzExpressRouteCircuit -ResourceGroupName <RG> -Name <CircuitName>
Get-AzExpressRouteCircuitPeeringConfig -ExpressRouteCircuit $circuit
Route propagation check:
Get-AzExpressRouteCircuitRouteTable -ResourceGroupName <RG> -ExpressRouteCircuitName <CircuitName> -PeeringType AzurePrivatePeering -DevicePath Primary
Connectivity Tests: Run network reachability tests to validate the current state. Some sample tests could be as follows:
ICMP test from an on-premises virtual machine to an Azure virtual machine to test basic connectivity. Ping from an on-premises virtual machine to an Azure virtual machine.
$ ping <Azure-Virtual-Machine-IP>
Application access test: Access your workload application from on-premises to a service running in Azure. This depends on the customer application. For example, if it is a web application, access the web server from a browser on a laptop or an on-premises machine.
Latency and throughput tests: You can use ACT (the Azure Connectivity Toolkit) to test latency and throughput. Please refer to this link for installation details: Troubleshoot network link performance: Azure ExpressRoute | Microsoft Learn
$ Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 10
To test jitter and packet loss, you can use the following tools.
PSPing: psping -l 1024 -n 100 <Azure_VM_IP>:443
PathPing: pathping <Azure VM IP>
Compare the new results with the ones captured before the outage. Validate that the migration is successful. The ExpressRoute Gateway is migrated to the new SKU.
Rollback Plan
If any issue arises during migration, engage the Microsoft support engineer to:
Restore the previous gateway: Use the backed-up configuration to either restore the original gateway or create a new one, based on guidance from the support engineer.
Validate connectivity: Perform on-premises-to-Azure connectivity testing as described in Step 7 above.
Post-Migration Steps
Update Change Request: Document and close the CR.
Update CMDB: Reflect the new gateway details in the Configuration Management Database.
Stakeholder Sign-off: Ensure all teams validate and approve the changes.
Contact Information
Network Team:
Azure Support: Azure Support Portal
References
Azure ExpressRoute Gateway Migration Documentation
Install Azure PowerShell with PowerShellGet | Microsoft Learn