A Guide to Azure Data Transfer Pricing
Understanding Azure networking charges is essential for businesses aiming to manage their budgets effectively. Given the complexity of Azure networking pricing, which involves various influencing factors, the goal here is to bring a clearer understanding of the associated data transfer costs by breaking down the pricing models into the following use cases:

- VM to VM
- VM to Private Endpoint
- VM to Internal Standard Load Balancer (ILB)
- VM to Internet
- Hybrid connectivity

Please note this is a first version; a second version will follow with additional scenarios.

Disclaimer: Pricing may change over time; check the public Azure pricing calculator for up-to-date pricing information. Actual pricing may vary depending on agreements, purchase dates, and currency exchange rates. Sign in to the Azure pricing calculator to see pricing based on your current program/offer with Microsoft.

1. VM to VM

1.1. VM to VM, same VNet

Data transfer within the same virtual network (VNet) is free of charge. This means that traffic between VMs within the same VNet will not incur any additional costs. Doc. Data transfer across Availability Zones (AZ) is also free. Doc.

1.2. VM to VM, across VNet peering

Azure VNet peering enables seamless connectivity between two virtual networks, allowing resources in different VNets to communicate with each other as if they were within the same network. When data is transferred between VNets, charges apply for both ingress and egress data. Doc:

- VM to VM, across VNet peering, same region
- VM to VM, across Global VNet peering

Azure regions are grouped into 3 Zones (distinct from Availability Zones within a specific Azure region). The pricing for Global VNet Peering is based on that geographic structure. Data transfer between VNets in different zones incurs outbound and inbound data transfer rates for the respective zones.
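As a sketch of the zone-based model, the snippet below estimates a Global VNet Peering transfer cost by adding the source zone's outbound rate and the destination zone's inbound rate for the same volume. The per-GB rates are placeholder assumptions, not real Azure prices; always take actual rates from the Azure pricing calculator.

```python
# Sketch: Global VNet Peering cost across zones.
# The per-GB rates below are PLACEHOLDERS for illustration only --
# real rates come from the Azure pricing calculator.
OUTBOUND_RATE = {"zone1": 0.05, "zone2": 0.08, "zone3": 0.16}  # assumed USD/GB
INBOUND_RATE = {"zone1": 0.05, "zone2": 0.08, "zone3": 0.16}   # assumed USD/GB

def global_peering_cost(gb: float, src_zone: str, dst_zone: str) -> float:
    """Outbound is billed at the source zone's rate and inbound at the
    destination zone's rate, each applied to the same data volume."""
    return gb * (OUTBOUND_RATE[src_zone] + INBOUND_RATE[dst_zone])

# 100 GB from a Zone 1 VNet to a Zone 2 VNet: 100 GB of Zone 1 outbound
# plus 100 GB of Zone 2 inbound.
print(f"${global_peering_cost(100, 'zone1', 'zone2'):.2f}")
```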
When data is transferred from a VNet in Zone 1 to a VNet in Zone 2, outbound data transfer rates for Zone 1 and inbound data transfer rates for Zone 2 apply. Doc.

1.3. VM to VM, through Network Virtual Appliance (NVA)

Data transfer through an NVA involves charges for both ingress and egress data, depending on the volume of data processed. When an NVA is in the path, such as for spoke VNet to spoke VNet connectivity via an NVA (e.g. a firewall) in the hub VNet, it incurs VM to VM pricing twice. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

2. VM to Private Endpoint (PE)

Private Endpoint pricing includes charges for the provisioned resource and data transfer costs based on traffic direction. For instance, writing to a Storage Account through a Private Endpoint incurs outbound data charges, while reading incurs inbound data charges. Doc:

2.1. VM to PE, same VNet

Since data transfer within a VNet is free, charges are only applied for data processing through the Private Endpoint. Cross-region traffic will incur additional costs if the Storage Account and the Private Endpoint are located in different regions.

2.2. VM to PE, across VNet peering

Accessing Private Endpoints from a peered network incurs only Private Link Premium charges, with no peering fees. Doc.

- VM to PE, across VNet peering, same region
- VM to PE, across VNet peering, PE region != SA region

2.3. VM to PE, through NVA

When an NVA is in the path, such as for spoke VNet to spoke VNet connectivity via a firewall in the hub VNet, it incurs VM to VM charges between the VM and the NVA. However, as per the PE pricing model, there are no charges between the NVA and the PE. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

3. VM to Internal Load Balancer (ILB)

Azure Standard Load Balancer pricing is based on the number of load balancing rules as well as the volume of data processed.
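The rules-plus-data model can be sketched as follows. The shape (an hourly charge covering the first few rules, an hourly charge per additional rule, and a per-GB charge on processed data) follows the description above, but every rate and the included-rule count are placeholder assumptions; verify them against the Azure pricing calculator.

```python
# Sketch of a Standard Load Balancer monthly cost estimate.
# All rates and the included-rule count are ASSUMED values for
# illustration -- consult the Azure pricing calculator for real ones.
BASE_HOURLY = 0.025        # assumed hourly rate covering the first 5 rules
EXTRA_RULE_HOURLY = 0.01   # assumed hourly rate per rule beyond 5
PER_GB_PROCESSED = 0.005   # assumed per-GB rate on processed data
INCLUDED_RULES = 5

def ilb_monthly_cost(rules: int, gb_processed: float, hours: int = 730) -> float:
    extra_rules = max(0, rules - INCLUDED_RULES)
    hourly = BASE_HOURLY + extra_rules * EXTRA_RULE_HOURLY
    # Only inbound traffic traverses the ILB (return traffic goes direct
    # from the backend), so gb_processed should count inbound volume only.
    return hours * hourly + gb_processed * PER_GB_PROCESSED

print(f"${ilb_monthly_cost(rules=8, gb_processed=1000):.2f}")
```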
Doc:

3.1. VM to ILB, same VNet

Data transfer within the same virtual network (VNet) is free. However, the data processed by the ILB is charged based on its volume and on the number of load balancing rules implemented. Only the inbound traffic is processed by the ILB (and charged); the return traffic goes directly from the backend to the source VM (free of charge).

3.2. VM to ILB, across VNet peering

In addition to the Load Balancer costs, data transfer charges between VNets apply for both ingress and egress.

3.3. VM to ILB, through NVA

When an NVA is in the path, such as for spoke VNet to spoke VNet connectivity via a firewall in the hub VNet, it incurs VM to VM charges between the VM and the NVA, and VM to ILB charges between the NVA and the ILB/backend resource. The table above reflects only data transfer charges and does not include NVA/Azure Firewall processing costs.

4. VM to internet

4.1. Data transfer and inter-region pricing model

Bandwidth refers to data moving in and out of Azure data centers, as well as data moving between Azure data centers; other transfers are explicitly covered by the Content Delivery Network, ExpressRoute pricing, or Peering. Doc:

4.2. Routing Preference in Azure and internet egress pricing model

When creating a public IP in Azure, Azure Routing Preference allows you to choose how your traffic routes between Azure and the Internet. You can select either the Microsoft Global Network or the public internet for routing your traffic. Doc:

See how this choice can impact the performance and reliability of network traffic: by selecting a Routing Preference set to Microsoft network, ingress traffic enters the Microsoft network closest to the user, and egress traffic exits the network closest to the user, minimizing travel on the public internet (“Cold Potato” routing). On the contrary, when setting the Routing Preference to internet, ingress traffic enters the Microsoft network closest to the hosted service region.
Transit ISP networks are used to route traffic, and travel on the Microsoft Global Network is minimized (“Hot Potato” routing). Bandwidth pricing for internet egress, Doc:

4.3. VM to internet, direct

Data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. Doc. It is important to note that default outbound access for VMs in Azure will be retired on September 30, 2025; migration to an explicit outbound internet connectivity method is recommended. Doc.

4.4. VM to internet, with a public IP

Here a standard public IP is explicitly associated with a VM NIC, which incurs additional costs. As in the previous scenario, data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. Doc.

4.5. VM to internet, with NAT Gateway

In addition to the previous costs, data transfer through a NAT Gateway involves charges for both the data processed and the NAT Gateway itself. Doc:

5. Hybrid connectivity

Hybrid connectivity involves connecting on-premises networks to Azure VNets. The pricing model includes charges for data transfer between the on-premises network and Azure, as well as any additional costs for using Network Virtual Appliances (NVAs) or Azure Firewalls in the hub VNet.

5.1. H&S Hybrid connectivity without firewall inspection in the hub

For an inbound flow, from the ExpressRoute Gateway to a spoke VNet, VNet peering charges are applied once, on the spoke inbound. There are no charges on the hub outbound. For an outbound flow, from a spoke VNet to an ER branch, VNet peering charges are applied once, outbound of the spoke only. There are no charges on the hub inbound. Doc. The table above does not include ExpressRoute connectivity related costs.

5.2. H&S Hybrid connectivity with firewall inspection in the hub

Since traffic transits and is inspected via a firewall in the hub VNet (Azure Firewall or 3rd-party firewall NVA), the previous concepts do not apply.
“Standard” inter-VNet VM-to-VM charges apply between the firewall and the destination VM, inbound and outbound, in both directions: once outbound from the source VNet (Hub or Spoke), once inbound on the destination VNet (Spoke or Hub). The table above reflects only data transfer charges within Azure and does not include NVA/Azure Firewall processing costs nor the costs related to ExpressRoute connectivity.

5.3. H&S Hybrid connectivity via a 3rd-party connectivity NVA (SD-WAN or IPSec)

Standard inter-VNet VM-to-VM charges apply between the NVA and the destination VM, inbound and outbound, in both directions, both in the Hub VNet and in the Spoke VNet.

5.4. vWAN scenarios

VNet peering is charged only from the point of view of the spoke – see examples and vWAN pricing components.

Next steps with cost management

To optimize cost management, Azure offers tools for monitoring and analyzing network charges. Azure Cost Management and Billing allows you to track and allocate costs across various services and resources, ensuring transparency and control over your expenses. By leveraging these tools, businesses can gain a deeper understanding of their network costs and make informed decisions to optimize their Azure spending.

Understanding ExpressRoute private peering to address ExpressRoute resiliency
This article provides an overview of Microsoft ExpressRoute, including its various components such as the Circuit, the Gateway and the Connection, and different connectivity models like ExpressRoute Service Provider and ExpressRoute Direct. It also covers the resilience and failure scenarios related to ExpressRoute, including geo-redundancy, Availability Zones, and route advertisement limits. If you're looking to learn more about ExpressRoute and its implementation, this article is a great resource.

Connectivity options between Hub-and-Spoke and Azure Virtual WAN
Contents

- Overview
- Scenario 1 – Traffic hair-pinning using ExpressRoute
- Scenario 2 – Build a virtual tunnel (SD-WAN or IPSec)
- Scenario 3 – vNet Peering and vHub connection coexistence
- Scenario 4 – Transit virtual network for decentralized vNets
- Conclusion
- Bonus

Overview

This article discusses different options for interconnecting Hub and Spoke networking with Virtual WAN in migration scenarios. The goal is to expand on additional options that can help customers migrate their existing Hub and Spoke topology to Azure Virtual WAN. You can find a comprehensive article, Migrate to Azure Virtual WAN, that goes over several considerations during the migration process; the focus here is only on the connectivity needed to facilitate the migration. Therefore, it is important to note that the interconnectivity options listed here are intended to be used in the short term, ensuring a temporary coexistence between both topologies while the workloads on the Spoke vNets are migrated, with the end goal of disconnecting both environments after the migration is completed. This article mainly discusses scenarios with a Virtual WAN Secured Virtual Hub; exceptions are noted where applicable. The setup assumes the use of routing intent and route policies, replacing the previous approach of using route tables to secure Virtual Hubs. For more information, please consult: How to configure Virtual WAN Hub routing intent and routing policies.

Scenario 1 – Traffic hair-pinning using ExpressRoute circuits

To begin the migration, ensure that the target Virtual WAN Hub (vHub) includes all necessary components. For existing Hub vNets equipped with Firewalls, SD-WAN, or VPN (Point-to-Site or Site-to-Site), confirm that these elements are also present and correctly configured on the target Virtual WAN.
Additionally, for any migrated Spoke, an optional vNet peering can be maintained to the original Hub vNet if there are dependencies, such as shared services (DNS, Active Directory, and other services). Make sure that the peering configuration has the option for using remote gateway disabled, because once connected to the vHub, the vNet connection requires the use remote gateway option to be enabled. In this scenario, traffic between the Hub and Spoke topology and the Virtual WAN Hub is carried over an ExpressRoute circuit that is connected to both environments. When a single circuit is connected to both environments, routes will be exchanged between them, and traffic will hairpin at the MSEE (Microsoft Enterprise Edge) routers. This is similar to the approach described in the article: Migrate to Azure Virtual WAN.

Connectivity flow:

- Spoke vNet → Migrated Spokes vNets: 1. vNet Hub Firewall, 2. vNet ExpressRoute Gateway, 3. MSEE via hairpin, 4. vHub ExpressRoute Gateway, 5. vHub Firewall
- Spoke vNet → Branches (VPN/SD-WAN): 1. vNet Hub Firewall, 2. vNet SD-WAN NVA or VPN Gateway
- Spoke vNet → On-premises DC: 1. vNet Hub Firewall, 2. ExpressRoute Gateway, 3. ExpressRoute Circuit (MSEE), 4. Provider/Customer
- Migrated vNet → Branches (VPN/SD-WAN): 1. vHub Firewall, 2. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet → On-premises DC: 1. vNet Hub Firewall, 2. vNet ExpressRoute Gateway, 3. ExpressRoute Circuit (MSEE), 4. Provider/Customer

Note: Return traffic follows the same path and components.

Pros

- Traffic stays in the Microsoft Backbone and does not go over the Provider or Customer CPE.
- Built-in route provided by the Azure Platform (this is configurable, see considerations).

Cons

- Expect high latency. Traffic between the vNet Hub and vHubs crosses MSEE routers outside the Azure Region in a Cloud Exchange facility, increasing latency due to the distance to the region.
- Single point of failure.
Because the MSEE is located at the Edge location, an outage at that site can impact communication. For redundancy, you can utilize a second MSEE at a different Edge location within the same metro area, which also keeps latency low. A second MSEE in a different metro area can likewise provide redundancy, although this might result in increased latency.

Considerations

A new feature has been introduced to block MSEE hairpinning. To enable this scenario, you need to activate Allow Traffic from remote Virtual WAN Networks (on the vNet Hub side) and Allow Traffic from non-Virtual WAN Networks (on the Virtual WAN Hub side). For more details, refer to this article: Customisation controls for connectivity between Virtual Networks over ExpressRoute.

Scenario 2 – Build a virtual tunnel (SD-WAN or IPSec)

The same prerequisites for the target vHub apply to this option before beginning the migration. However, instead of utilizing ExpressRoute transit, in this scenario you establish a direct virtual tunnel between the existing vNet Hub and the vHub to facilitate communication. There are several options for achieving this, including:

- Use the Azure native VPN Gateway on both the vNet Hub and the vHub for IPSec tunnels. Up to four tunnels can be created when the vNet Hub VPN Gateway is configured as Active/Active (by default, vHub VPN Gateways are already Active/Active). It is important to consider that customers can use either BGP or static routing when implementing this option. However, BGP is restricted: if the vNet VPN Gateway is the only gateway present, you can use a custom ASN other than 65515; if there is another gateway, such as ExpressRoute or Azure Route Server, the ASN must be set to 65515. Since the vHub VPN Gateway does not allow a custom ASN at this moment (65515 is the default ASN), static routes will be required for this setup.
- Use a 3rd-party NVA to establish SD-WAN connectivity or an IPSec tunnel between both sides.
Using this option, you can leverage either static or BGP routing; BGP offers better integration with the vHub and less administrative effort.

Connectivity flow:

- Spoke vNet → Migrated Spokes vNets: 1. vNet Hub Firewall, 2. vNet Hub SD-WAN NVA or VPN Gateway, 3. vHub SD-WAN NVA or VPN Gateway, 4. vHub Firewall
- Spoke vNet → Branches (VPN/SD-WAN): 1. vNet Hub Firewall, 2. vNet Hub SD-WAN NVA or VPN Gateway
- Spoke vNet → On-premises DC: 1. vNet Hub Firewall, 2. ExpressRoute Gateway, 3. MSEE hairpin, 4. Provider/Customer
- Migrated vNet → Branches (VPN/SD-WAN): 1. vHub Firewall, 2. SD-WAN NVA or VPN Gateway
- Migrated vNet → On-premises DC: 1. vNet Hub Firewall, 2. ExpressRoute Gateway, 3. MSEE hairpin, 4. Provider/Customer

Note: Return traffic follows the same path and components.

Pros

- Traffic remains within the Microsoft Backbone in the region, resulting in lower latency compared to Scenario 1.

Cons

- Administrative overhead when using static routes and managing extra network components.
- Cost of adding a new VPN Gateway or 3rd-party NVA to build the virtual tunnel.
- Throughput may be limited based on the type of virtual tunnel technology used. This limitation can be mitigated by adding multiple tunnels, which requires BGP + ECMP to balance traffic between them. Note that Azure allows up to eight tunnels, which is the maximum number of programmed routes for the same networks with different next hops (one per tunnel).

Scenario 3 – vNet Peering and vHub connection coexistence

In this scenario, spoke vNets originally connected to the vNet Hub are migrated to the vHub while maintaining the existing peering with the vNet Hub, but with the Use Remote Gateway configuration disabled. This allows the migrated vNets to retain connectivity with the source vNet Hub while also connecting to the vHub.
The connection to the vHub necessitates Use Remote Gateway, which directs all traffic towards on-premises through the vHub. To connect with other spokes via the vHub, the migrated vNet needs a UDR with routes to the vNet spoke prefixes using the vNet Hub Firewall as the next hop. Use route summarization for contiguous prefixes, or enter specific prefixes if they are not contiguous. Additionally, enable Gateway Propagation in the UDR so migrated Spoke vNets can learn routes from the vHub (RFC 1918, default route, or both).

Connectivity flow:

- Spoke vNet → Migrated Spokes vNet: 1. vNet Hub Firewall
- Spoke vNet → Branches (VPN/SD-WAN): 1. vNet Hub Firewall, 2. vNet SD-WAN NVA or VPN Gateway
- Spoke vNet → On-premises DC: 1. vNet Hub Firewall, 2. ExpressRoute Gateway, 3. ExpressRoute Circuit (MSEE), 4. Provider/Customer
- Migrated vNet → Branches (VPN/SD-WAN): 1. vHub Firewall, 2. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet → On-premises DC: 1. vHub Firewall, 2. vHub ExpressRoute Gateway, 3. ExpressRoute Circuit (MSEE), 4. Provider/Customer

Note: Return traffic follows the same path and components.

Pros

- Traffic remains within the Microsoft Backbone in the region, resulting in lower latency compared to Scenario 1.
- No throughput limitation imposed by virtual tunnels compared to Scenario 2; throughput will be limited by the VM size.

Cons

- Administrative overhead to adjust the UDR to reach the Spoke vNets connected over the vNet Hub.

Scenario 4 – Transit virtual network for decentralized vNets

This use case involves a decentralized virtual network model where each customer has an ExpressRoute Gateway for connectivity to on-premises systems. Traffic between virtual networks is managed using virtual network peering, based on the specific connectivity requirements of the customer. Each virtual network has its own gateway, which prevents connecting them directly to the virtual hub, because the remote gateway option needs to be enabled.
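The remote-gateway constraints above can be captured as a simple pre-flight check. The sketch below uses a hypothetical dict model of a spoke's configuration (not the Azure SDK) to decide whether a spoke can be connected directly to the vHub: its remaining peering to the legacy hub must have use remote gateways disabled, and a spoke with its own gateway cannot connect directly at all.

```python
# Sketch: pre-flight check for direct vHub connection, over a
# HYPOTHETICAL dict model of a spoke (field names are assumptions,
# not Azure SDK objects).
def can_connect_to_vhub(spoke: dict) -> tuple[bool, str]:
    # A spoke with its own ExpressRoute/VPN gateway cannot enable the
    # remote gateway that a vHub connection requires (see Scenario 4).
    if spoke.get("has_own_gateway"):
        return False, "spoke has its own gateway; consider a transit vNet"
    # Any remaining peering to the legacy hub must not use remote gateways,
    # since the vHub connection will claim that role.
    for peering in spoke.get("peerings", []):
        if peering.get("use_remote_gateways"):
            return False, f"disable use_remote_gateways on peering to {peering['remote_vnet']}"
    return True, "ok"

spoke = {
    "name": "spoke-01",
    "has_own_gateway": False,
    "peerings": [{"remote_vnet": "legacy-hub", "use_remote_gateways": True}],
}
print(can_connect_to_vhub(spoke))
```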
If the customer can tolerate the downtime associated with removing the ExpressRoute Gateway from the migrated vNet, they have the option to establish a direct vNet connection to the vHub, thereby simplifying the solution. This process typically takes approximately 45 minutes, excluding the rollback procedure, which would require an additional 45 minutes, potentially making this approach prohibitive for most customers. However, customers with existing Azure workloads often aim to minimize downtime. As illustrated in the diagram below, they can create a transit vNet equipped with a firewall or a Network Virtual Appliance (NVA) with routing capabilities. This configuration allows the migrated vNet to establish regular peering, thereby maintaining connectivity without significant disruption.

The solution illustrated in this section uses static route propagation at the vNet connection level towards the Transit vNet, which currently requires non-Secured Virtual WAN hubs (note that support for static route propagation is on the Virtual WAN roadmap). Alternatively, you can use BGP peering from the Firewall or NVA to program the migrated vNets' summary prefixes. For Firewall implementations with BGP, it is recommended to leverage Next hop IP support for Virtual WAN, where traffic flows over a load-balancing feature to ensure traffic symmetry. In that scenario you can also leverage Secured vHubs.

The migration process also necessitates adjustments to the routes for the migrated vNet to facilitate traffic flow to on-premises systems using the vHub. This includes utilizing static routes at the connection from the Transit vNet to the vHub to advertise a summary route via the Firewall in the transit vNet, ensuring return traffic and proper advertisement to the on-premises environment. Once the route configurations are established, the ExpressRoute connection can be removed.
The customer can then proceed to Step 2, which will allow them to make the final adjustments and complete the full integration with the vHub following the outlined steps:

1. Remove the ExpressRoute Gateway.
2. Create the vNet connection to the vHub, which will allow the specific Migrated vNet prefix to be advertised to the vHub as well as leak down to ExpressRoute.
3. Once step 2 is completed, traffic should start to flow over the vNet connection to the vHub.
4. Remove the vNet peering to the Transit vNet.

Connectivity flow:

- vNet1/vNet2 → Migrated Spokes vNet: 1. Direct vNet peering
- vNet1/vNet2 → Branches (VPN/SD-WAN): 1. ExpressRoute Gateway, 2. ExpressRoute Circuit (MSEE), 3. Provider/Customer, 4. VPN/SD-WAN
- vNet1/vNet2 → On-premises DC: 1. ExpressRoute Gateway, 2. ExpressRoute Circuit (MSEE), 3. Provider/Customer
- Migrated vNet (Step 1) → Branches (VPN/SD-WAN): 1. Transit Firewall, 2. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet (Step 1) → On-premises DC: 1. Transit Firewall, 2. vHub ExpressRoute Gateway, 3. ExpressRoute Circuit (MSEE), 4. Provider/Customer
- Migrated vNet (Step 2) → Branches (VPN/SD-WAN): 1. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet (Step 2) → On-premises DC: 1. vHub ExpressRoute Gateway, 2. ExpressRoute Circuit (MSEE), 3. Provider/Customer

Note: Return traffic follows the same path and components.

Pros

- Traffic remains on the Microsoft backbone, ensuring minimal latency.
- Not subject to the throughput limits associated with the Scenario 2 solution (virtual tunnels).

Cons

- Administrative overhead associated with maintaining the additional transit virtual network, including user-defined route management and vHub vNet connection static route configuration.
- Costs incurred from operating any supplementary firewalls (FWs) or network virtual appliances (NVAs) in the transit vNet.
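The route summarization used in Scenarios 3 and 4 (a UDR or static route covering contiguous migrated-spoke prefixes) can be computed with the Python standard library. The prefixes below are illustrative examples, not values from this article.

```python
# Sketch: collapsing migrated-spoke prefixes into summary routes,
# e.g. for a UDR (Scenario 3) or a static route on the vHub
# connection (Scenario 4). Prefixes are illustrative.
import ipaddress

migrated_spokes = [
    "10.1.0.0/24",
    "10.1.1.0/24",
    "10.1.2.0/23",   # contiguous with the two above -> merges into one /22
    "10.9.0.0/24",   # not contiguous -> stays a separate route
]

summary = list(ipaddress.collapse_addresses(
    ipaddress.ip_network(p) for p in migrated_spokes))
print([str(n) for n in summary])  # → ['10.1.0.0/22', '10.9.0.0/24']
```

Fewer, wider routes keep the UDR and static-route configuration manageable; non-contiguous spokes simply remain individual entries.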
Conclusion

This article outlined four strategies for migrating from Hub and Spoke networking to Azure Virtual WAN — ExpressRoute hairpinning, VPN or SD-WAN virtual tunnels, vNet peering with vHub connections, and transit virtual networks for decentralized vNets — highlighting their pros, cons, and administrative considerations. It is important to assess which approach best fits your needs by weighing each scenario's advantages and drawbacks.

Bonus

The diagrams in Excalidraw format related to this blog post are available in the following GitHub repository.