A Guide to Azure Data Transfer Pricing
Understanding Azure networking charges is essential for businesses aiming to manage their budgets effectively. Given the complexity of Azure networking pricing, which involves various influencing factors, the goal here is to bring a clearer understanding of the associated data transfer costs by breaking down the pricing models into the following use cases:
- VM to VM
- VM to Private Endpoint
- VM to Internal Standard Load Balancer (ILB)
- VM to Internet
- Hybrid connectivity

Please note this is a first version; a second version will follow that includes additional scenarios.

Disclaimer: Pricing may change over time; check the public Azure pricing calculator for up-to-date pricing information. Actual pricing may vary depending on agreements, purchase dates, and currency exchange rates. Sign in to the Azure pricing calculator to see pricing based on your current program/offer with Microsoft.

1. VM to VM

1.1. VM to VM, same VNet
Data transfer within the same virtual network (VNet) is free of charge, so traffic between VMs within the same VNet will not incur any additional costs. Doc. Data transfer across Availability Zones (AZ) is also free. Doc.

1.2. VM to VM, across VNet peering
Azure VNet peering enables seamless connectivity between two virtual networks, allowing resources in different VNets to communicate with each other as if they were within the same network. When data is transferred between peered VNets, charges apply for both ingress and egress data. Doc:
- VM to VM, across VNet peering, same region
- VM to VM, across Global VNet peering
Azure regions are grouped into 3 Zones (distinct from Availability Zones within a specific Azure region). The pricing for Global VNet Peering is based on that geographic structure. Data transfer between VNets in different zones incurs outbound and inbound data transfer rates for the respective zones. For example, when data is transferred from a VNet in Zone 1 to a VNet in Zone 2, outbound data transfer rates for Zone 1 and inbound data transfer rates for Zone 2 apply. Doc.

1.3. VM to VM, through a Network Virtual Appliance (NVA)
Data transfer through an NVA involves charges for both ingress and egress data, depending on the volume of data processed. When an NVA is in the path, such as for spoke-VNet-to-spoke-VNet connectivity via an NVA (firewall, etc.) in the hub VNet, VM-to-VM pricing is incurred twice. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

2. VM to Private Endpoint (PE)
Private Endpoint pricing includes charges for the provisioned resource and data transfer costs based on traffic direction. For instance, writing to a Storage Account through a Private Endpoint incurs outbound data charges, while reading incurs inbound data charges. Doc:

2.1. VM to PE, same VNet
Since data transfer within a VNet is free, charges apply only for data processing through the Private Endpoint. Cross-region traffic will incur additional costs if the Storage Account and the Private Endpoint are located in different regions.

2.2. VM to PE, across VNet peering
Accessing Private Endpoints from a peered network incurs only Private Link Premium charges, with no peering fees. Doc. A minimal CLI sketch follows the scenario list below.
- VM to PE, across VNet peering, same region
- VM to PE, across VNet peering, PE region != SA region
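To make the PE scenarios concrete, here is a minimal sketch of creating a Private Endpoint for a storage account from a spoke VNet. The resource names (rg-demo, vnet-spoke, snet-pe, stdemoaccount) are hypothetical placeholders, and the exact flag names may differ slightly by CLI version, so treat this as an illustration rather than a definitive recipe.

# Hypothetical names; a minimal sketch of creating a Private Endpoint for a storage account.
STORAGE_ID=$(az storage account show -g rg-demo -n stdemoaccount --query id -o tsv)

az network private-endpoint create \
  -g rg-demo -n pe-storage \
  --vnet-name vnet-spoke --subnet snet-pe \
  --private-connection-resource-id "$STORAGE_ID" \
  --group-id blob \
  --connection-name pe-storage-conn

Data written to or read from the storage account through this endpoint is what the Private Link data processing meters in section 2 apply to.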
2.3. VM to PE, through NVA
When an NVA is in the path, such as for spoke-VNet-to-spoke-VNet connectivity via a firewall in the hub VNet, VM-to-VM charges apply between the VM and the NVA. However, as per the PE pricing model, there are no charges between the NVA and the PE. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

3. VM to Internal Load Balancer (ILB)
Azure Standard Load Balancer pricing is based on the number of load balancing rules as well as the volume of data processed. Doc:

3.1. VM to ILB, same VNet
Data transfer within the same virtual network (VNet) is free. However, the data processed by the ILB is charged based on its volume and on the number of load balancing rules implemented. Only the inbound traffic is processed by the ILB (and charged); the return traffic goes directly from the backend to the source VM (free of charge).

3.2. VM to ILB, across VNet peering
In addition to the Load Balancer costs, data transfer charges between VNets apply for both ingress and egress.

3.3. VM to ILB, through NVA
When an NVA is in the path, such as for spoke-VNet-to-spoke-VNet connectivity via a firewall in the hub VNet, VM-to-VM charges apply between the VM and the NVA, and VM-to-ILB charges apply between the NVA and the ILB/backend resource. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

4. VM to internet

4.1. Data transfer and inter-region pricing model
Bandwidth refers to data moving in and out of Azure data centers, as well as data moving between Azure data centers; other transfers are explicitly covered by the Content Delivery Network, ExpressRoute pricing, or Peering. Doc:

4.2. Routing Preference in Azure and internet egress pricing model
When creating a public IP in Azure, Azure Routing Preference allows you to choose how your traffic routes between Azure and the Internet. You can select either the Microsoft Global Network or the public internet for routing your traffic. Doc:
See how this choice can impact the performance and reliability of network traffic:
- With the Routing Preference set to Microsoft network, ingress traffic enters the Microsoft network closest to the user, and egress traffic exits the network closest to the user, minimizing travel on the public internet (“Cold Potato” routing).
- With the Routing Preference set to Internet, ingress traffic enters the Microsoft network closest to the hosted service region. Transit ISP networks are used to route traffic, and travel on the Microsoft Global Network is minimized (“Hot Potato” routing).
Bandwidth pricing for internet egress, Doc:

4.3. VM to internet, direct
Data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. Doc. Note that default outbound access for VMs in Azure will be retired on September 30, 2025; migration to an explicit outbound internet connectivity method is recommended. Doc.

4.4. VM to internet, with a public IP
Here, a Standard public IP is explicitly associated with the VM NIC, which incurs additional costs. As in the previous scenario, data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. Doc.

4.5. VM to internet, with NAT Gateway
In addition to the previous costs, data transfer through a NAT Gateway involves charges for both the data processed and the NAT Gateway itself. Doc:
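As a minimal sketch of the explicit outbound method mentioned above, the commands below provision a NAT Gateway and attach it to a subnet. The resource, VNet, and subnet names (rg-demo, vnet-spoke, snet-app, natgw-demo) are hypothetical placeholders.

# Hypothetical names; a minimal sketch of explicit outbound connectivity via NAT Gateway.
az network public-ip create -g rg-demo -n pip-natgw --sku Standard --allocation-method Static

az network nat gateway create -g rg-demo -n natgw-demo \
  --public-ip-addresses pip-natgw --idle-timeout 4

# Associate the NAT Gateway with the subnet so outbound flows use it;
# both the gateway resource hours and the data it processes are billed.
az network vnet subnet update -g rg-demo --vnet-name vnet-spoke -n snet-app --nat-gateway natgw-demo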
5. Hybrid connectivity
Hybrid connectivity involves connecting on-premises networks to Azure VNets. The pricing model includes charges for data transfer between the on-premises network and Azure, as well as any additional costs for using Network Virtual Appliances (NVAs) or Azure Firewalls in the hub VNet.

5.1. H&S Hybrid connectivity without firewall inspection in the hub
For an inbound flow, from the ExpressRoute Gateway to a spoke VNet, VNet peering charges are applied once, on the spoke inbound. There are no charges on the hub outbound. For an outbound flow, from a spoke VNet to an ER branch, VNet peering charges are applied once, outbound of the spoke only. There are no charges on the hub inbound. Doc. The table above does not include ExpressRoute connectivity related costs.

5.2. H&S Hybrid connectivity with firewall inspection in the hub
Since traffic transits and is inspected via a firewall in the hub VNet (Azure Firewall or 3P firewall NVA), the previous concepts do not apply. "Standard" inter-VNet VM-to-VM charges apply between the firewall and the destination VM, inbound and outbound in both directions: once outbound from the source VNet (Hub or Spoke), once inbound on the destination VNet (Spoke or Hub). The table above reflects only data transfer charges within Azure and does not include NVA/Azure Firewall processing costs nor the costs related to ExpressRoute connectivity.

5.3. H&S Hybrid connectivity via a 3rd party connectivity NVA (SDWAN or IPSec)
Standard inter-VNet VM-to-VM charges apply between the NVA and the destination VM: inbound and outbound in both directions, both in the Hub VNet and in the Spoke VNet.

5.4. vWAN scenarios
VNet peering is charged only from the point of view of the spoke; see examples and vWAN pricing components.

Next steps with cost management
To optimize cost management, Azure offers tools for monitoring and analyzing network charges. Azure Cost Management and Billing allows you to track and allocate costs across various services and resources, ensuring transparency and control over your expenses. By leveraging these tools, businesses can gain a deeper understanding of their network costs and make informed decisions to optimize their Azure spending.
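As one possible starting point from the CLI (a minimal sketch, not the full Cost Management experience), the command below lists raw consumption records and keeps meters whose product name mentions bandwidth or peering. The date range and the filter keywords are assumptions to adapt, and the exact fields returned depend on your offer type.

# A minimal sketch: list consumption records and filter on bandwidth/peering meters.
# Dates and filter strings are placeholders; field names can differ by offer type.
az consumption usage list \
  --start-date 2024-06-01 --end-date 2024-06-30 \
  --query "[?contains(to_string(product), 'Bandwidth') || contains(to_string(product), 'Peering')].[instanceName, product, usageQuantity, pretaxCost]" \
  -o table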
Custom DHCP support in Azure
Discover the intricacies of Dynamic Host Configuration Protocol (DHCP), a network protocol used for assigning IP addresses and network parameters. Learn about the DORA process, lease renewals, and the role of DHCP Relay in large enterprises. Gain insight into how DHCP operates natively in Azure and how its support in Azure has evolved over time, including the removal of rate limiting for relayed traffic. This comprehensive guide also covers the limitations and potential workarounds of DHCP in Azure. Ideal for network administrators and IT professionals.
Inspection Patterns in Hub-and-Spoke and vWAN Architectures
By shruthi_nair and Mays_Algebary

Inspection plays a vital role in network architecture, and each customer may have unique inspection requirements. This article explores common inspection scenarios in both Hub-and-Spoke and Virtual WAN (vWAN) topologies. We’ll walk through design approaches assuming a setup with two Hubs or Virtual Hubs (VHubs) connected to on-premises environments via ExpressRoute. The specific regions of the Hubs or VHubs are not critical, as the same design principles can be applied across regions.

Scenario 1: Hub-and-Spoke Inspection Patterns
In the Hub-and-Spoke scenarios, the baseline architecture assumes the presence of two Hub VNets. Each Hub VNet is peered with its local spoke VNets as well as with the other Hub VNet (Hub2-VNet). Additionally, both Hub VNets are connected to both local and remote ExpressRoute circuits to ensure redundancy.

Note: In Hub-and-Spoke scenarios, connectivity between virtual networks over ExpressRoute circuits across Hubs is intentionally disabled. This ensures that inter-Hub traffic uses VNet peering, which provides a more optimized path, rather than traversing the ExpressRoute circuit.

In Scenario 1, we present two implementation approaches: a traditional method and an alternative leveraging Azure Virtual Network Manager (AVNM).

Option 1: Full Inspection
A widely adopted design pattern is to inspect all traffic, both east-west and north-south, to meet security and compliance requirements. This can be implemented using a traditional Hub-and-Spoke topology with VNet Peering and User-Defined Routes (UDRs), or by leveraging AVNM with Connectivity Configurations and centralized UDR management.

In the traditional approach:
- VNet Peering is used to connect each spoke to its local Hub, and to establish connectivity between the two Hubs.
- UDRs direct traffic to the firewall as the next hop, ensuring inspection before reaching its destination. These UDRs are applied at the Spoke VNets, the Gateway Subnet, and the Firewall Subnet (especially for inter-region scenarios), as shown in the diagram below; a minimal CLI sketch follows the table below.

As your environment grows, managing individual UDRs and VNet Peerings manually can become complex. To simplify deployment and ongoing management at scale, you can use AVNM. With AVNM:
- Use the Hub-and-Spoke connectivity configuration to manage routing within a single Hub.
- Use the Mesh connectivity configuration to establish inter-Hub connectivity between the two Hubs.
AVNM also enables centralized creation, assignment, and management of UDRs, streamlining network configuration at scale.

Connectivity Inspection Table:
- On-premises ↔ Azure: Inspected ✅
- Spoke ↔ Spoke: Inspected ✅
- Spoke ↔ Internet: Inspected ✅
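As a minimal sketch of the traditional UDR approach, the commands below create a route table that sends all spoke traffic to the hub firewall and associate it with a spoke subnet. The names (rg-net, rt-spoke1, vnet-spoke1, snet-app) and the firewall private IP 10.0.0.4 are hypothetical placeholders.

# Hypothetical names and firewall IP; a sketch of forcing spoke traffic through the hub firewall.
az network route-table create -g rg-net -n rt-spoke1

# 0.0.0.0/0 covers both east-west and internet-bound traffic when the firewall is the next hop.
az network route-table route create -g rg-net --route-table-name rt-spoke1 -n default-via-fw \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4

# Associate the route table with the spoke workload subnet.
az network vnet subnet update -g rg-net --vnet-name vnet-spoke1 -n snet-app --route-table rt-spoke1

Equivalent route tables would also be applied to the Gateway Subnet and, for inter-region designs, the Firewall Subnet, as described above.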
Option 2: Selective Inspection Between Azure VNets
In some scenarios, full traffic inspection is not required or desirable. This may be due to network segmentation based on trust zones; for example, traffic between trusted VNets may not require inspection. Other reasons include high-volume data replication, latency-sensitive applications, or the need to reduce inspection overhead and cost.

In this design, VNets are grouped into trusted and untrusted zones. Trusted VNets can exist within the same Hub or across different Hubs. To bypass inspection between trusted VNets, you can connect them directly using VNet Peering or the AVNM Mesh connectivity topology. It’s important to note that UDRs are still used and configured as described in the full inspection model (Option 1). However, when trusted VNets are directly connected, system routes (created by VNet Peering or Mesh connectivity) take precedence over custom UDRs. As a result, traffic between trusted VNets bypasses the firewall and flows directly. In contrast, traffic to or from untrusted zones follows the UDRs, ensuring it is routed through the firewall for inspection.

Connectivity Inspection Table:
- On-premises ↔ Azure: Inspected ✅
- Spoke ↔ Internet: Inspected ✅
- Spoke ↔ Spoke (Same Zones): Not inspected ❌
- Spoke ↔ Spoke (Across Zones): Inspected ✅

Option 3: No Inspection to On-premises
In cases where a firewall at the on-premises or colocation site already inspects traffic from Azure, customers typically aim to avoid double inspection. To support this in the above design, traffic destined for on-premises is not routed through the firewall deployed in Azure. For the UDRs applied to the spoke VNets, ensure that "Propagate Gateway Routes" is set to true, allowing traffic to follow the ExpressRoute path directly without additional inspection in Azure.

Connectivity Inspection Table:
- On-premises ↔ Azure: Not inspected ❌
- Spoke ↔ Spoke: Inspected ✅
- Spoke ↔ Internet: Inspected ✅

Option 4: Internet Inspection Only
While not generally recommended, some customers choose to inspect only internet-bound traffic and allow private traffic to flow without inspection. In such cases, spoke VNets can be directly connected using VNet Peering or AVNM Mesh connectivity. To ensure on-premises traffic avoids inspection, set "Propagate Gateway Routes" to true in the UDRs applied to spoke VNets. This allows traffic to follow the ExpressRoute path directly without being inspected in Azure.

Scenario 2: vWAN Inspection Options
Now we will explore inspection options using a vWAN topology. Across all scenarios, the base architecture assumes two Virtual Hubs (VHubs), each connected to its respective local spoke VNets. vWAN provides default connectivity between the two VHubs, and each VHub is also connected to both local and remote ExpressRoute circuits for redundancy. It's important to note that this discussion focuses on inspection in vWAN using Routing Intent. As a result, bypassing inspection for traffic to on-premises is not supported in this model.

Option 1: Full Inspection
As noted earlier, inspecting all traffic, both east-west and north-south, is a common practice to fulfill compliance and security needs. In this design, enabling Routing Intent provides the capability to inspect both private and internet-bound traffic. Unlike the Hub-and-Spoke topology, this approach does not require any UDR configuration.

Connectivity Inspection Table:
- On-premises ↔ Azure: Inspected ✅
- Spoke ↔ Spoke: Inspected ✅
- Spoke ↔ Internet: Inspected ✅

Option 2: Using Different Firewall Flavors for Traffic Inspection

Using different firewall flavors inside the VHub for traffic inspection
Some customers require specific firewalls for different traffic flows, for example, using Azure Firewall for east-west traffic while relying on a third-party firewall for north-south inspection. In vWAN, it’s possible to deploy both Azure Firewall and a third-party network virtual appliance (NVA) within the same VHub. However, as of this writing, deploying two different third-party NVAs in the same VHub is not supported. This behavior may change in the future, so it’s recommended to monitor the known limitations section for updates. With this design, you can easily control which firewall handles east-west versus north-south traffic using Routing Intent, eliminating the need for UDRs.
Deploying third-party firewalls in spoke VNets when VHub limitations apply
If the third-party firewall you want to use is not supported within the VHub, or if the managed firewall available in the VHub lacks certain required features compared to the version deployable in a regular VNet, you can deploy the third-party firewall in a spoke VNet instead, while using Azure Firewall in the VHub. In this design, the third-party firewall (deployed in a spoke VNet) handles internet-bound traffic, and Azure Firewall (in the VHub) inspects east-west traffic. This setup is achieved by peering the third-party firewall VNet to the VHub, as well as directly peering it with the spoke VNets. These spoke VNets are also connected to the VHub, as illustrated in the diagram below. UDRs are required in the spoke VNets to forward internet-bound traffic to the third-party firewall VNet. East-west traffic routing, however, is handled using the Routing Intent feature, directing traffic through Azure Firewall without the need for UDRs.

Note: Although it is not required to connect the third-party firewall VNet to the VHub for traffic flow, doing so is recommended for ease of management and on-premises reachability.

Connectivity Inspection Table:
- On-premises ↔ Azure: Inspected ✅ (using Azure Firewall)
- Spoke ↔ Spoke: Inspected ✅ (using Azure Firewall)
- Spoke ↔ Internet: Inspected ✅ (using the third-party firewall)

Option 3: Selective Inspection Between Azure VNets
Similar to the Hub-and-Spoke topology, there are scenarios where full traffic inspection is not ideal. This may be due to Azure VNets being segmented into trusted and untrusted zones, where inspection is unnecessary between trusted VNets. Other reasons include large data replication between specific VNets or latency-sensitive applications that require minimizing inspection delays and associated costs.

In this design, trusted and untrusted VNets can reside within the same VHub or across different VHubs. Routing Intent remains enabled to inspect traffic between trusted and untrusted VNets, as well as internet-bound traffic. To bypass inspection between trusted VNets, you can connect them directly using VNet Peering or AVNM Mesh connectivity, as sketched below. Unlike the Hub-and-Spoke model, this design does not require UDR configuration. Because trusted VNets are directly connected, system routes from VNet peering take precedence over routes learned through the VHub. Traffic destined for untrusted zones will continue to follow the Routing Intent and be inspected accordingly.

Connectivity Inspection Table:
- On-premises ↔ Azure: Inspected ✅
- Spoke ↔ Internet: Inspected ✅
- Spoke ↔ Spoke (Same Zones): Not inspected ❌
- Spoke ↔ Spoke (Across Zones): Inspected ✅
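A minimal sketch of directly peering two trusted spoke VNets so that their traffic bypasses Routing Intent inspection; the names (rg-net, vnet-trusted-a, vnet-trusted-b) are hypothetical, and the peering must be created in both directions.

# Hypothetical names; direct peering between two trusted spokes so their traffic flows directly.
az network vnet peering create -g rg-net -n trusted-a-to-b \
  --vnet-name vnet-trusted-a --remote-vnet vnet-trusted-b --allow-vnet-access

az network vnet peering create -g rg-net -n trusted-b-to-a \
  --vnet-name vnet-trusted-b --remote-vnet vnet-trusted-a --allow-vnet-access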
Option 4: Internet Inspection Only
While not generally recommended, some customers choose to inspect only internet-bound traffic and bypass inspection of private traffic. In this design, you enable only the Internet inspection option within Routing Intent, so private traffic bypasses the firewall entirely. The VHub manages both intra- and inter-VHub routing directly.

Connectivity Inspection Table:
- On-premises ↔ Azure: Not inspected ❌
- Spoke ↔ Internet: Inspected ✅
- Spoke ↔ Spoke: Not inspected ❌

Azure Virtual Network now supports updates without subnet property
An Azure API HTTP PUT operation on an existing VNet resource preserves existing child resources (such as subnets) only if they are also supplied in the request JSON along with any new resources being added. If any existing resources are omitted from the JSON in the PUT operation, those resources are removed from the deployment today. This behavior causes problems for customers, and to make updates easier, we have implemented a change in the PUT API behavior for VNet updates. This change allows you to skip the subnet specification in a PUT call without deleting the existing subnets. This capability is now available with API version 2023-09-01.
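A minimal sketch of such a PUT using the Azure CLI's generic az rest command; the subscription ID, resource group, VNet name, location, and address space are hypothetical placeholders. Note that the body omits the subnets property entirely, which with api-version 2023-09-01 leaves the existing subnets in place instead of deleting them.

# Hypothetical IDs and names; a PUT that omits "subnets" while updating the VNet.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Network/virtualNetworks/vnet-demo?api-version=2023-09-01" \
  --body '{
    "location": "westeurope",
    "properties": {
      "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }
    }
  }'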
Subnet Peering

The Basics: VNET Peering
Virtual Networks in Azure can be connected through VNET Peering. Peered VNETs become one routing domain, meaning that the entire IP space of each VNET is visible to and reachable from the other VNET. This is great for many applications: wire-speed connectivity without gateways or other complexities. In this diagram, vnet-left and vnet-right are peered:

This is what that looks like in the portal for vnet-left (with the right-hand vnet similar):

Effective routes of VMs in either vnet show the entire IP space of the peered vnet. For VM left-1 in vnet-left:

az network nic show-effective-route-table -g sn-rg -n left-1701 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   10.1.0.0/16       VNetPeering
Default   Active   0.0.0.0/0         Internet

And for VM right-1 in vnet-right:

az network nic show-effective-route-table -g sn-rg -n right-159 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   10.0.0.0/16       VNetPeering
Default   Active   0.0.0.0/0         Internet

The Problem
There are situations where completely merging the address spaces of VNETs is not desirable. Think of micro-segmentation: only the front-end of a multi-tier application must be exposed and accessible from outside the VNET, with the application and database tiers ideally remaining isolated. Network Security Groups are now used to achieve isolation; they can block access to the internal tiers from sources outside of the vnet's IP range, or the front-end subnet.

Another scenario is IP address space exhaustion: private IP space is a scarce resource in many companies. There may not be enough free space available to assign each vnet a unique, routable segment large enough to accommodate all resources hosted in each vnet. Again, there may not be a need for all resources to have a routable IP address, as they do not need to be accessible from outside the vnet.

The Solution: Subnet Peering
The above scenarios could be solved if it were possible to selectively share a vnet's address range across a peering. Enter Subnet Peering: this new capability allows selective sharing of IP address space across a peering at the subnet level. Subnet Peering is not yet available through the Azure portal, but can be configured through the Azure CLI. A few new parameters have been added to the existing az network vnet peering create command:
- --peer-complete-vnets {0, 1 (default), f, false, n, no, t, true, y, yes}: when set to 0, false or no, configures peering at the subnet level instead of at the vnet level.
- --local-subnet-names: list of subnets to be peered in the current vnet (i.e. the vnet called out in the --vnet-name parameter).
- --remote-subnet-names: list of subnets to be peered in the remote vnet (i.e. the vnet called out in the --remote-vnet parameter).
- --enable-only-ipv6 {0 (default), 1, f, false, n, no, t, true, y, yes}: if set to true, peers only the IPv6 space in dual-stack vnets.

NB: Although Subnet Peering is available in all Azure regions, subscription allow-listing through this form is still required. Please read and be aware of the caveats under point 11 on the form.
Segmentation
This command peers subnet1 in vnet-left to subnet0 and subnet2 in vnet-right:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --local-subnet-names subnet1 --remote-subnet-names subnet0 subnet2 --remote-vnet vnet-right --peer-complete-vnets 0 --allow-vnet-access 1

Then establish the peering in the other direction:

az network vnet peering create -g sn-rg -n right0-left0 --vnet-name vnet-right --local-subnet-names subnet0 subnet2 --remote-subnet-names subnet1 --remote-vnet vnet-left --peer-complete-vnets 0 --allow-vnet-access 1

This leaves subnet0 and subnet2 in vnet-left, and subnet1 in vnet-right, disconnected, effectively creating segmentation between the peered vnets without the use of NSGs.

Details of the peering from left to right are below. Notice the localAddressSpace and remoteAddressSpace prefixes are those of the peered local and remote subnets. The other direction is similar, with localAddressSpace and remoteAddressSpace prefixes swapped of course.

az network vnet peering show -g sn-rg -n left0-right0 --vnet-name vnet-left
{
  "allowForwardedTraffic": false,
  "allowGatewayTransit": false,
  "allowVirtualNetworkAccess": true,
  "doNotVerifyRemoteGateways": false,
  "etag": "W/\"6a80a7da-36f4-404c-8bd8-d23e1b2bdd9a\"",
  "id": "/subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left/virtualNetworkPeerings/left0-right0",
  "localAddressSpace": { "addressPrefixes": [ "10.0.1.0/24" ] },
  "localSubnetNames": [ "subnet1" ],
  "localVirtualNetworkAddressSpace": { "addressPrefixes": [ "10.0.1.0/24" ] },
  "name": "left0-right0",
  "peerCompleteVnets": false,
  "peeringState": "Connected",
  "peeringSyncLevel": "FullyInSync",
  "provisioningState": "Succeeded",
  "remoteAddressSpace": { "addressPrefixes": [ "10.1.0.0/24", "10.1.2.0/24" ] },
  "remoteSubnetNames": [ "subnet0", "subnet2" ],
  "remoteVirtualNetwork": {
    "id": "/subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-right",
    "resourceGroup": "sn-rg"
  },
  "remoteVirtualNetworkAddressSpace": { "addressPrefixes": [ "10.1.0.0/24", "10.1.2.0/24" ] },
  "remoteVirtualNetworkEncryption": { "enabled": false, "enforcement": "AllowUnencrypted" },
  "resourceGroup": "sn-rg",
  "resourceGuid": "eb63ec9e-aa48-023e-1514-152f8ab39ae7",
  "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
  "useRemoteGateways": false
}

The peerings show in the portal as "normal" vnet peerings, as subnet peering is not yet supported by the portal. The only indication that this is not a full vnet peering is the peered IP address space: this is the subnet IP range of the remote subnet(s).
Inspecting the effective routes on VM left-1 shows it has routes for subnet0 and subnet2 but not for subnet1 in `vnet-right`:

az network nic show-effective-route-table -g sn-rg -n left-1701 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.1.0.0/24       VNetPeering
Default   Active   10.1.2.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

Note that VMs left-0 and left-2 also have routes for the same subnets in vnet-right, even though their subnets were not listed in the --local-subnet-names parameter:

az network nic show-effective-route-table -g sn-rg -n left-0681 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.1.0.0/24       VNetPeering
Default   Active   10.1.2.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

az network nic show-effective-route-table -g sn-rg -n left-2809 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.0.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.1.0.0/24       VNetPeering
Default   Active   10.1.2.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

In the current release of the feature, the IP space of the subnets listed in --remote-subnet-names is propagated to all subnets in the local vnet, not just to the subnets listed in --local-subnet-names. However, the subnets in vnet-right only have the address space of subnet1 in vnet-left propagated, as shown in the effective routes of the VMs on the right:

az network nic show-effective-route-table -g sn-rg -n right-0814 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.0.1.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

az network nic show-effective-route-table -g sn-rg -n right-159 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.0.1.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

az network nic show-effective-route-table -g sn-rg -n right-2283 -o table
Source    State    Address Prefix    Next Hop Type    Next Hop IP
--------  -------  ----------------  ---------------  -------------
Default   Active   10.1.0.0/16       VnetLocal
Default   Active   172.16.0.0/24     VnetLocal
Default   Active   10.0.1.0/24       VNetPeering
Default   Active   0.0.0.0/0         Internet

Bidirectional routing therefore only exists between subnet1 in vnet-left and subnet0 and subnet2 in vnet-right, as intended. This is demonstrated by testing with ping from `left-1` to the VMs on the right.

To right-0:

PING 10.1.0.4 (10.1.0.4) 56(84) bytes of data.
64 bytes from 10.1.0.4: icmp_seq=1 ttl=64 time=1.20 ms
64 bytes from 10.1.0.4: icmp_seq=2 ttl=64 time=1.24 ms
64 bytes from 10.1.0.4: icmp_seq=3 ttl=64 time=1.06 ms
64 bytes from 10.1.0.4: icmp_seq=4 ttl=64 time=1.56 ms
^C
--- 10.1.0.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms

To right-1:

PING 10.1.1.4 (10.1.1.4) 56(84) bytes of data.
^C
--- 10.1.1.4 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4131ms

To right-2:

PING 10.1.2.4 (10.1.2.4) 56(84) bytes of data.
64 bytes from 10.1.2.4: icmp_seq=1 ttl=64 time=3.39 ms
64 bytes from 10.1.2.4: icmp_seq=2 ttl=64 time=0.968 ms
64 bytes from 10.1.2.4: icmp_seq=3 ttl=64 time=3.00 ms
64 bytes from 10.1.2.4: icmp_seq=4 ttl=64 time=2.47 ms
^C
--- 10.1.2.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms

Overlapping IP Space
Now let's add overlapping IP space to both vnets. With the peerings removed, we add 172.16.0.0/16 to each vnet, create subnet3 with 172.16.3.0/24, and insert a VM. When we try to peer the full vnets, an error results because of the address space overlap:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --remote-vnet vnet-right --peer-complete-vnets 1 --allow-vnet-access 1

(VnetAddressSpacesOverlap) Cannot create or update peering /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left/virtualNetworkPeerings/left0-right0. Virtual networks /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left and /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-right cannot be peered because their address spaces overlap. Overlapping address prefixes: 172.16.0.0/16, 172.16.0.0/16
Code: VnetAddressSpacesOverlap

Now we will again establish subnet-level peering as in the previous section:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --local-subnet-names subnet1 --remote-subnet-names subnet0 subnet2 --remote-vnet vnet-right --peer-complete-vnets 0 --allow-vnet-access 1
az network vnet peering create -g sn-rg -n right0-left0 --vnet-name vnet-right --local-subnet-names subnet0 subnet2 --remote-subnet-names subnet1 --remote-vnet vnet-left --peer-complete-vnets 0 --allow-vnet-access 1

This completes successfully despite the presence of overlapping address space in both vnets.

az network vnet peering show -g sn-rg -n left0-right0 --vnet-name vnet-left --query "[provisioningState, remoteAddressSpace]"
[
  "Succeeded",
  { "addressPrefixes": [ "10.1.0.0/24", "10.1.2.0/24" ] }
]

az network vnet peering show -g sn-rg -n right0-left0 --vnet-name vnet-right --query "[provisioningState, remoteAddressSpace]"
[
  "Succeeded",
  { "addressPrefixes": [ "10.0.1.0/24" ] }
]

Now let's try to include vnet-left subnet3, which overlaps with subnet3 on the right, in the peering:

az network vnet peering create -g sn-rg -n left0-right0 --vnet-name vnet-left --local-subnet-names subnet1 subnet3 --remote-subnet-names subnet0 subnet2 --remote-vnet vnet-right --peer-complete-vnets 0 --allow-vnet-access 1

(VnetAddressSpacesOverlap) Cannot create or update peering /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left/virtualNetworkPeerings/left0-right0. Virtual networks /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-left and /subscriptions/7cb39d93-f8a1-48a7-af6d-c8e12136f0ad/resourceGroups/sn-rg/providers/Microsoft.Network/virtualNetworks/vnet-right cannot be peered because their address spaces overlap.
Overlapping address prefixes: 172.16.3.0/24
Code: VnetAddressSpacesOverlap

This demonstrates that subnet peering allows for the partial peering of vnets that contain overlapping IP space. As discussed, this can be very helpful in scenarios where private IP space is in short supply.

Looking ahead
Subnet peering is now available in all Azure regions: feel free to test, experiment, and use it in production. The feature is currently only available through the latest versions of the Azure CLI, Bicep, ARM templates, Terraform, and PowerShell. Portal support should be added soon. Meaningful next steps will be to integrate Subnet Peering into both Azure Virtual Network Manager (AVNM) and Virtual WAN, so that the advantages it brings can be leveraged at enterprise scale in network foundations. I will continue to track developments and update this post as appropriate.
Understanding ExpressRoute private peering to address ExpressRoute resiliency
This article provides an overview of Microsoft ExpressRoute, including its various components, such as the Circuit, the Gateway, and the Connection, and different connectivity models like ExpressRoute Service Provider and ExpressRoute Direct. It also covers the resilience and failure scenarios related to ExpressRoute, including geo-redundancy, Availability Zones, and route advertisement limits. If you're looking to learn more about ExpressRoute and its implementation, this article is a great resource.