Announcing the general availability of Route-Maps for Azure Virtual WAN
Route-maps is a feature that gives you the ability to control route advertisements and routing for Virtual WAN virtual hubs. Route-maps gives you finer control over the routes that enter and leave Azure Virtual WAN site-to-site (S2S) VPN connections, User VPN point-to-site (P2S) connections, ExpressRoute connections, and virtual network (VNet) connections.

Why use Route-maps?

- Summarize routes when you have on-premises networks connected to Virtual WAN via ExpressRoute or VPN and are limited by the number of routes that can be advertised from/to the virtual hub.
- Control the routes entering and leaving your Virtual WAN deployment, between on-premises networks and virtual networks.
- Control routing decisions in your Virtual WAN deployment by modifying a BGP attribute such as AS-PATH to make a route more, or less, preferable. This is helpful when destination prefixes are reachable via multiple paths and you want AS-PATH to control best-path selection.
- Easily tag routes using the BGP community attribute in order to manage routes.

What is a Route-map?

A Route-map is an ordered sequence of one or more rules that are applied to routes received or sent by the virtual hub. Each Route-map rule consists of three sections:

1. Match conditions

Route-maps lets you match routes using route prefix, BGP community, and AS-path. These are the conditions a processed route must meet to be considered a match for the rule. The supported match conditions are:

| Property | Criterion | Value (example) | Interpretation |
|---|---|---|---|
| Route-prefix | equals | 10.1.0.0/16, 10.2.0.0/16, 10.3.0.0/16, 10.4.0.0/16 | Matches these four routes only. More specific prefixes under these routes won't be matched. |
| Route-prefix | contains | 10.1.0.0/16, 10.2.0.0/16, 192.168.16.0/24, 192.168.17.0/24 | Matches all the specified routes and all prefixes underneath (for example, 10.2.1.0/24 is underneath 10.2.0.0/16). |
| Community | equals | 65001:100, 65001:200 | The community attribute of the route must have both communities. Order isn't relevant. |
| Community | contains | 65001:100, 65001:200 | The community attribute of the route can have one or more of the specified communities. |
| AS-Path | equals | 65001, 65002, 65003 | The AS-path of the route must have the listed ASNs in the specified order. |
| AS-Path | contains | 65001, 65002, 65003 | The AS-path of the route can contain one or more of the listed ASNs. Order isn't relevant. |

2. Actions to be performed

Route-maps lets you drop or modify routes. The supported actions are:

| Property | Action | Value | Interpretation |
|---|---|---|---|
| Route-prefix | Drop | 10.3.0.0/8, 10.4.0.0/8 | The routes specified in the rule are dropped. |
| Route-prefix | Replace | 10.0.0.0/8, 192.168.0.0/16 | Replace all the matched routes with the routes specified in the rule. |
| AS-Path | Add | 64580, 64581 | Prepend the AS-path with the list of ASNs specified in the rule. These ASNs are applied in the same order to the matched routes. |
| AS-Path | Replace | 65004, 65005 | The AS-path is set to this list, in the same order, for every matched route. See key considerations for reserved AS numbers. |
| AS-Path | Replace | No value specified | Remove all ASNs from the AS-path of the matched routes. |
| Community | Add | 64580:300, 64581:300 | Add all the listed communities to the community attribute of every matched route. |
| Community | Replace | 64580:300, 64581:300 | Replace the community attribute of every matched route with the list provided. |
| Community | Replace | No value specified | Remove the community attribute from all the matched routes. |
| Community | Remove | 65001:100, 65001:200 | Remove any of the listed communities that are present in the matched routes' community attribute. |
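To make the prefix match criteria concrete, here is a minimal, self-contained Python sketch that mimics the "equals" versus "contains" semantics from the table above. It reflects one plausible reading of the documented behavior, not Azure's actual implementation; the prefixes are the examples from the table.

```python
import ipaddress

def prefix_matches(route: str, rule_values: list[str], criterion: str) -> bool:
    """Mimic the Route-prefix match criteria from the table above."""
    r = ipaddress.ip_network(route)
    rules = [ipaddress.ip_network(v) for v in rule_values]
    if criterion == "equals":
        # Exact prefixes only; more-specific routes underneath don't match.
        return r in rules
    # "contains": the listed prefixes and every prefix underneath them.
    return any(r.subnet_of(n) for n in rules)

values = ["10.1.0.0/16", "10.2.0.0/16"]
print(prefix_matches("10.2.0.0/16", values, "equals"))    # True
print(prefix_matches("10.2.1.0/24", values, "equals"))    # False: more specific
print(prefix_matches("10.2.1.0/24", values, "contains"))  # True: under 10.2.0.0/16
```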
3. Apply to a Connection

You can apply a Route-map on each connection, in the inbound direction, the outbound direction, or both.

Where can I find Route-maps?

Route-maps can be found in the routing section of your Virtual WAN hub.

How do I troubleshoot Route-maps?

The Route-Map Dashboard lets you view the routes for a connection. You can view the routes before or after a Route-map is applied.
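As a closing illustration of why the AS-path actions matter: other attributes being equal, BGP prefers the route with the shortest AS-path, so prepending via a Route-map makes a path less preferable. Below is a toy Python sketch of that tie-breaker, not a model of the full BGP decision process; the connection names and ASNs are made up.

```python
def preferred_path(candidates: dict[str, list[int]]) -> str:
    """Return the connection whose AS-path is shortest for a given prefix."""
    return min(candidates, key=lambda name: len(candidates[name]))

paths = {"primary": [65001], "backup": [65001]}
# Route-map action "AS-Path Add 64580,64580" applied on the backup connection:
paths["backup"] = [64580, 64580] + paths["backup"]
print(preferred_path(paths))  # "primary": the prepended backup path now loses
```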
A Guide to Azure Data Transfer Pricing

Understanding Azure networking charges is essential for businesses aiming to manage their budgets effectively. Given the complexity of Azure networking pricing, which involves various influencing factors, the goal here is to bring a clearer understanding of the associated data transfer costs by breaking down the pricing models into the following use cases:

- VM to VM
- VM to Private Endpoint
- VM to Internal Standard Load Balancer (ILB)
- VM to Internet
- Hybrid connectivity

Please note this is a first version; a second version will follow with additional scenarios.

Disclaimer: Pricing may change over time. Check the public Azure pricing calculator for up-to-date pricing information. Actual pricing may vary depending on agreements, purchase dates, and currency exchange rates. Sign in to the Azure pricing calculator to see pricing based on your current program/offer with Microsoft.

1. VM to VM

1.1. VM to VM, same VNet

Data transfer within the same virtual network (VNet) is free of charge, so traffic between VMs within the same VNet does not incur any additional costs. Data transfer across Availability Zones (AZ) is also free.

1.2. VM to VM, across VNet peering

Azure VNet peering enables seamless connectivity between two virtual networks, allowing resources in different VNets to communicate with each other as if they were within the same network. When data is transferred between peered VNets, charges apply for both ingress and egress data:

- VM to VM, across VNet peering, same region
- VM to VM, across Global VNet peering

Azure regions are grouped into three Zones (distinct from Availability Zones within a specific Azure region), and Global VNet Peering pricing is based on that geographic structure. Data transfer between VNets in different Zones incurs the outbound data transfer rate of the source Zone and the inbound data transfer rate of the destination Zone. For example, when data is transferred from a VNet in Zone 1 to a VNet in Zone 2, the outbound rate for Zone 1 and the inbound rate for Zone 2 apply.

1.3. VM to VM, through a Network Virtual Appliance (NVA)

Data transfer through an NVA involves charges for both ingress and egress data, depending on the volume of data processed. When an NVA is in the path, such as spoke-VNet-to-spoke-VNet connectivity via an NVA (a firewall, for example) in the hub VNet, VM-to-VM pricing is incurred twice: once between the VM and the NVA, and once between the NVA and the destination VM. These figures reflect only data transfer charges and do not include NVA/Azure Firewall processing costs. A quick estimator follows below.
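The ingress-plus-egress and double-hop mechanics above lend themselves to quick back-of-the-envelope estimation. Here is a small Python sketch; the per-GB rate is an illustrative placeholder, not a current Azure price, so always confirm with the pricing calculator.

```python
def peering_transfer_cost(gb: float, egress_rate: float, ingress_rate: float) -> float:
    """Peered-VNet transfer is billed on both sides: egress at the source
    VNet plus ingress at the destination VNet (per the sections above)."""
    return gb * (egress_rate + ingress_rate)

def via_nva_cost(gb: float, egress_rate: float, ingress_rate: float) -> float:
    """An NVA in the hub doubles the peering legs: VM -> NVA, then NVA -> VM."""
    return 2 * peering_transfer_cost(gb, egress_rate, ingress_rate)

RATE = 0.01  # $/GB each way: illustrative placeholder only
print(peering_transfer_cost(1000, RATE, RATE))  # 1 TB across one peering: 20.0
print(via_nva_cost(1000, RATE, RATE))           # same transfer via an NVA: 40.0
```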
2. VM to Private Endpoint (PE)

Private Endpoint pricing includes charges for the provisioned resource and data transfer costs based on traffic direction. For instance, writing to a Storage Account through a Private Endpoint incurs outbound data charges, while reading incurs inbound data charges.

2.1. VM to PE, same VNet

Since data transfer within a VNet is free, charges apply only for data processing through the Private Endpoint. Cross-region traffic incurs additional costs if the Storage Account and the Private Endpoint are located in different regions.

2.2. VM to PE, across VNet peering

Accessing Private Endpoints from a peered network incurs only Private Link Premium charges, with no peering fees:

- VM to PE, across VNet peering, same region
- VM to PE, across VNet peering, PE region != SA region

2.3. VM to PE, through an NVA

When an NVA is in the path, such as spoke-VNet-to-spoke-VNet connectivity via a firewall in the hub VNet, VM-to-VM charges apply between the VM and the NVA. However, per the PE pricing model, there are no charges between the NVA and the PE. These figures reflect only data transfer charges and do not include NVA/Azure Firewall processing costs.

3. VM to Internal Load Balancer (ILB)

Azure Standard Load Balancer pricing is based on the number of load-balancing rules as well as the volume of data processed.

3.1. VM to ILB, same VNet

Data transfer within the same VNet is free. However, the data processed by the ILB is charged based on its volume and on the number of load-balancing rules implemented. Only the inbound traffic is processed by the ILB (and charged); the return traffic goes directly from the backend to the source VM, free of charge.

3.2. VM to ILB, across VNet peering

In addition to the Load Balancer costs, data transfer charges between VNets apply for both ingress and egress.

3.3. VM to ILB, through an NVA

When an NVA is in the path, such as spoke-VNet-to-spoke-VNet connectivity via a firewall in the hub VNet, VM-to-VM charges apply between the VM and the NVA, and VM-to-ILB charges apply between the NVA and the ILB/backend resource. These figures reflect only data transfer charges and do not include NVA/Azure Firewall processing costs.

4. VM to internet

4.1. Data transfer and inter-region pricing model

Bandwidth refers to data moving in and out of Azure data centers, as well as data moving between Azure data centers; other transfers are explicitly covered by Content Delivery Network, ExpressRoute pricing, or Peering.

4.2. Routing Preference in Azure and the internet egress pricing model

When creating a public IP in Azure, Routing Preference lets you choose how your traffic routes between Azure and the internet: over the Microsoft Global Network or over the public internet. This choice can impact the performance and reliability of your network traffic:

- With Routing Preference set to Microsoft network, ingress traffic enters the Microsoft network closest to the user, and egress traffic exits the network closest to the user, minimizing travel on the public internet ("cold potato" routing).
- With Routing Preference set to Internet, ingress traffic enters the Microsoft network closest to the hosted service region; transit ISP networks are used to route traffic, and travel on the Microsoft Global Network is minimized ("hot potato" routing).

Bandwidth pricing for internet egress differs between the two options.

4.3. VM to internet, direct

Data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge. It is important to note that default outbound access for VMs in Azure will be retired on September 30, 2025; migration to an explicit outbound internet connectivity method is recommended.

4.4. VM to internet, with a public IP

Here a Standard public IP is explicitly associated with the VM's NIC, which incurs additional costs. As in the previous scenario, data transferred out of Azure to the internet incurs charges, while data transferred into Azure is free of charge.

4.5. VM to internet, with NAT Gateway

In addition to the previous costs, data transfer through a NAT Gateway involves charges for both the data processed and the NAT Gateway resource itself.
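To see how the NAT Gateway scenario's components stack up, here is a short sketch combining the gateway's hourly resource charge, its data-processing charge, and the internet egress bandwidth charge. All rates are illustrative placeholders, not current Azure prices.

```python
def nat_gateway_scenario_cost(hours: float, gb_processed: float,
                              gb_internet_egress: float,
                              hourly_rate: float = 0.045,      # placeholder $/hour
                              processing_rate: float = 0.045,  # placeholder $/GB processed
                              egress_rate: float = 0.087) -> float:  # placeholder $/GB out
    resource = hours * hourly_rate                 # NAT Gateway resource charge
    processing = gb_processed * processing_rate    # data processed by the gateway
    bandwidth = gb_internet_egress * egress_rate   # internet egress bandwidth charge
    return resource + processing + bandwidth

# A month (~730 h) with 500 GB leaving Azure through the gateway:
print(round(nat_gateway_scenario_cost(730, 500, 500), 2))  # 98.85 with these rates
```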
5. Hybrid connectivity

Hybrid connectivity involves connecting on-premises networks to Azure VNets. The pricing model includes charges for data transfer between the on-premises network and Azure, as well as any additional costs for using Network Virtual Appliances (NVAs) or Azure Firewalls in the hub VNet.

5.1. Hub-and-spoke hybrid connectivity without firewall inspection in the hub

For an inbound flow, from the ExpressRoute gateway to a spoke VNet, VNet peering charges are applied once, on the spoke inbound; there are no charges on the hub outbound. For an outbound flow, from a spoke VNet to an ExpressRoute branch, VNet peering charges are applied once, on the spoke outbound; there are no charges on the hub inbound. These figures do not include ExpressRoute connectivity costs.

5.2. Hub-and-spoke hybrid connectivity with firewall inspection in the hub

Since traffic transits and is inspected via a firewall in the hub VNet (Azure Firewall or a third-party firewall NVA), the previous concepts do not apply. "Standard" inter-VNet VM-to-VM charges apply between the firewall and the destination VM, inbound and outbound in both directions: once outbound from the source VNet (hub or spoke), once inbound on the destination VNet (spoke or hub). These figures reflect only data transfer charges within Azure and do not include NVA/Azure Firewall processing costs, nor the costs related to ExpressRoute connectivity.

5.3. Hub-and-spoke hybrid connectivity via a third-party connectivity NVA (SD-WAN or IPsec)

Standard inter-VNet VM-to-VM charges apply between the NVA and the destination VM: inbound and outbound in both directions, both in the hub VNet and in the spoke VNet.

5.4. vWAN scenarios

VNet peering is charged only from the point of view of the spoke; see the vWAN pricing examples and pricing components.

Next steps with cost management

To optimize cost management, Azure offers tools for monitoring and analyzing network charges. Azure Cost Management and Billing allows you to track and allocate costs across various services and resources, ensuring transparency and control over your expenses. By leveraging these tools, businesses can gain a deeper understanding of their network costs and make informed decisions to optimize their Azure spending.
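As a practical starting point for that monitoring, the sketch below queries month-to-date cost grouped by service with the Cost Management SDK. Treat it as an outline only: it assumes azure-identity and azure-mgmt-costmanagement are installed, and the scope string, grouping dimension ("ServiceName"), and payload shape should be validated against your installed SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.costmanagement import CostManagementClient

# Scope is left as a placeholder; it can also be a resource group or
# management group scope.
scope = "/subscriptions/<subscription-id>"
client = CostManagementClient(DefaultAzureCredential())

result = client.query.usage(
    scope,
    parameters={
        "type": "ActualCost",
        "timeframe": "MonthToDate",
        "dataset": {
            "granularity": "Daily",
            "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
            "grouping": [{"type": "Dimension", "name": "ServiceName"}],
        },
    },
)
for row in result.rows:  # each row: cost, date, service name, currency
    print(row)
```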
Inspection Patterns in Hub-and-Spoke and vWAN Architectures

By shruthi_nair and Mays_Algebary

Inspection plays a vital role in network architecture, and each customer may have unique inspection requirements. This article explores common inspection scenarios in both Hub-and-Spoke and Virtual WAN (vWAN) topologies. We'll walk through design approaches assuming a setup with two Hubs or Virtual Hubs (VHubs) connected to on-premises environments via ExpressRoute. The specific regions of the Hubs or VHubs are not critical, as the same design principles can be applied across regions.

Scenario 1: Hub-and-Spoke Inspection Patterns

In the Hub-and-Spoke scenarios, the baseline architecture assumes the presence of two Hub VNets. Each Hub VNet is peered with its local spoke VNets as well as with the other Hub VNet. Additionally, both Hub VNets are connected to both local and remote ExpressRoute circuits to ensure redundancy.

Note: In Hub-and-Spoke scenarios, connectivity between virtual networks over ExpressRoute circuits across Hubs is intentionally disabled. This ensures that inter-Hub traffic uses VNet peering, which provides a more optimized path, rather than traversing the ExpressRoute circuit.

In Scenario 1, we present two implementation approaches: a traditional method and an alternative leveraging Azure Virtual Network Manager (AVNM).

Option 1: Full Inspection

A widely adopted design pattern is to inspect all traffic, both east-west and north-south, to meet security and compliance requirements. This can be implemented using a traditional Hub-and-Spoke topology with VNet peering and User-Defined Routes (UDRs), or by leveraging AVNM with connectivity configurations and centralized UDR management.

In the traditional approach:

- VNet peering is used to connect each spoke to its local Hub, and to establish connectivity between the two Hubs.
- UDRs direct traffic to the firewall as the next hop, ensuring inspection before traffic reaches its destination. These UDRs are applied at the spoke VNets, the gateway subnet, and the firewall subnet (especially for inter-region scenarios). A minimal sketch of such a route follows after this option's table.

As your environment grows, managing individual UDRs and VNet peerings manually can become complex. To simplify deployment and ongoing management at scale, you can use AVNM:

- Use the Hub-and-Spoke connectivity configuration to manage routing within a single Hub.
- Use the Mesh connectivity configuration to establish inter-Hub connectivity between the two Hubs.

AVNM also enables centralized creation, assignment, and management of UDRs, streamlining network configuration at scale.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Spoke | ✅ |
| Spoke ↔ Internet | ✅ |
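Here is a minimal sketch of the "default route to the firewall" UDR described above, using the azure-mgmt-network SDK. The resource names and the firewall IP are hypothetical, and in practice these UDRs are usually deployed via IaC (Bicep/Terraform) or managed centrally with AVNM rather than scripted this way.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.route_tables.begin_create_or_update(
    "rg-network",            # hypothetical resource group
    "rt-spoke-to-firewall",  # hypothetical route table name
    {
        "location": "eastus",
        # False here corresponds to "Propagate gateway routes: true" in the
        # portal, the setting Options 3 and 4 below rely on.
        "disable_bgp_route_propagation": False,
        "routes": [
            {
                "name": "default-via-firewall",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",  # firewall's private IP (example)
            }
        ],
    },
).result()
```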
Option 2: Selective Inspection Between Azure VNets

In some scenarios, full traffic inspection is not required or desirable. This may be due to network segmentation based on trust zones: traffic between trusted VNets may not require inspection. Other reasons include high-volume data replication, latency-sensitive applications, or the need to reduce inspection overhead and cost.

In this design, VNets are grouped into trusted and untrusted zones. Trusted VNets can exist within the same Hub or across different Hubs. To bypass inspection between trusted VNets, you can connect them directly using VNet peering or AVNM Mesh connectivity. UDRs are still used and configured as described in the full inspection model (Option 1). However, when trusted VNets are directly connected, system routes (created by VNet peering or Mesh connectivity) take precedence over custom UDRs. As a result, traffic between trusted VNets bypasses the firewall and flows directly, while traffic to or from untrusted zones follows the UDRs and is routed through the firewall for inspection.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Internet | ✅ |
| Spoke ↔ Spoke (same zone) | ❌ |
| Spoke ↔ Spoke (across zones) | ✅ |

Option 3: No Inspection to On-premises

In cases where a firewall at the on-premises or colocation site already inspects traffic from Azure, customers typically aim to avoid double inspection. To support this, traffic destined for on-premises is not routed through the firewall deployed in Azure: in the UDRs applied to the spoke VNets, ensure that "Propagate gateway routes" is set to true, allowing traffic to follow the ExpressRoute path directly without additional inspection in Azure.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ❌ |
| Spoke ↔ Spoke | ✅ |
| Spoke ↔ Internet | ✅ |

Option 4: Internet Inspection Only

While not generally recommended, some customers choose to inspect only internet-bound traffic and allow private traffic to flow without inspection. In such cases, spoke VNets can be directly connected using VNet peering or AVNM Mesh connectivity. To ensure on-premises traffic avoids inspection, set "Propagate gateway routes" to true in the UDRs applied to spoke VNets, allowing traffic to follow the ExpressRoute path directly without being inspected in Azure.

Scenario 2: vWAN Inspection Options

Now we will explore inspection options using a vWAN topology. Across all scenarios, the base architecture assumes two Virtual Hubs (VHubs), each connected to its respective local spoke VNets. vWAN provides default connectivity between the two VHubs, and each VHub is also connected to both local and remote ExpressRoute circuits for redundancy.

It's important to note that this discussion focuses on inspection in vWAN using Routing Intent. As a result, bypassing inspection for traffic to on-premises is not supported in this model.

Option 1: Full Inspection

As noted earlier, inspecting all traffic, both east-west and north-south, is a common practice to fulfill compliance and security needs. In this design, enabling Routing Intent provides the capability to inspect both private and internet-bound traffic. Unlike the Hub-and-Spoke topology, this approach does not require any UDR configuration; see the sketch after the table below.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Spoke | ✅ |
| Spoke ↔ Internet | ✅ |
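For reference, here is a sketch of enabling Routing Intent on a VHub with azure-mgmt-network, sending both private and internet-bound traffic to the hub firewall. The operation group and payload shape reflect recent API versions and should be verified against your installed SDK; all names and resource IDs are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

fw_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-vwan"
         "/providers/Microsoft.Network/azureFirewalls/hub-firewall")

client.routing_intent.begin_create_or_update(
    "rg-vwan",            # hypothetical resource group
    "vhub-eastus",        # hypothetical virtual hub
    "hubRoutingIntent",   # routing intent resource name
    {
        "routing_policies": [
            # East-west (private) traffic through the firewall:
            {"name": "PrivateTraffic", "destinations": ["PrivateTraffic"],
             "next_hop": fw_id},
            # North-south (internet-bound) traffic through the firewall:
            {"name": "InternetTraffic", "destinations": ["Internet"],
             "next_hop": fw_id},
        ]
    },
).result()
```

For Option 4 below (internet inspection only), the sketch would keep just the "InternetTraffic" policy.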
Option 2: Using Different Firewall Flavors for Traffic Inspection

Using different firewall flavors inside the VHub. Some customers require specific firewalls for different traffic flows, for example, using Azure Firewall for east-west traffic while relying on a third-party firewall for north-south inspection. In vWAN, it's possible to deploy both Azure Firewall and a third-party network virtual appliance (NVA) within the same VHub. However, as of this writing, deploying two different third-party NVAs in the same VHub is not supported. This behavior may change in the future, so it's recommended to monitor the known-limitations documentation for updates. With this design, you can easily control which firewall handles east-west versus north-south traffic using Routing Intent, eliminating the need for UDRs.

Deploying third-party firewalls in spoke VNets when VHub limitations apply. If the third-party firewall you want to use is not supported within the VHub, or if the managed firewall available in the VHub lacks certain required features compared to the version deployable in a regular VNet, you can deploy the third-party firewall in a spoke VNet instead, while using Azure Firewall in the VHub. In this design, the third-party firewall (deployed in a spoke VNet) handles internet-bound traffic, and Azure Firewall (in the VHub) inspects east-west traffic. This setup is achieved by peering the third-party firewall VNet to the VHub, as well as directly peering it with the spoke VNets; these spoke VNets are also connected to the VHub. UDRs are required in the spoke VNets to forward internet-bound traffic to the third-party firewall VNet. East-west traffic routing, however, is handled by the Routing Intent feature, directing traffic through Azure Firewall without the need for UDRs.

Note: Although it is not required to connect the third-party firewall VNet to the VHub for traffic flow, doing so is recommended for ease of management and on-premises reachability.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ Inspected using Azure Firewall |
| Spoke ↔ Spoke | ✅ Inspected using Azure Firewall |
| Spoke ↔ Internet | ✅ Inspected using third-party firewall |

Option 3: Selective Inspection Between Azure VNets

Similar to the Hub-and-Spoke topology, there are scenarios where full traffic inspection is not ideal. This may be due to Azure VNets being segmented into trusted and untrusted zones, where inspection is unnecessary between trusted VNets; other reasons include large data replication between specific VNets, or latency-sensitive applications that require minimizing inspection delays and associated costs.

In this design, trusted and untrusted VNets can reside within the same VHub or across different VHubs. Routing Intent remains enabled to inspect traffic between trusted and untrusted VNets, as well as internet-bound traffic. To bypass inspection between trusted VNets, you can connect them directly using VNet peering or AVNM Mesh connectivity. Unlike the Hub-and-Spoke model, this design does not require UDR configuration: because trusted VNets are directly connected, system routes from VNet peering take precedence over routes learned through the VHub, while traffic destined for untrusted zones continues to follow Routing Intent and is inspected accordingly.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ✅ |
| Spoke ↔ Internet | ✅ |
| Spoke ↔ Spoke (same zone) | ❌ |
| Spoke ↔ Spoke (across zones) | ✅ |

Option 4: Internet Inspection Only

While not generally recommended, some customers choose to inspect only internet-bound traffic and bypass inspection of private traffic. In this design, you enable only the internet inspection option within Routing Intent, so private traffic bypasses the firewall entirely; the VHub manages both intra- and inter-VHub routing directly.

| Connectivity Scenario | Inspected |
|---|---|
| On-premises ↔ Azure | ❌ |
| Spoke ↔ Internet | ✅ |
| Spoke ↔ Spoke | ❌ |
Understanding ExpressRoute private peering to address ExpressRoute resiliency

This article provides an overview of Microsoft ExpressRoute, including its various components, such as the circuit, the gateway, and the connection, and different connectivity models, like ExpressRoute Service Provider and ExpressRoute Direct. It also covers the resilience and failure scenarios related to ExpressRoute, including geo-redundancy, Availability Zones, and route advertisement limits. If you're looking to learn more about ExpressRoute and its implementation, this article is a great resource.
Network Redundancy Between AVS, On-Premises, and Virtual Networks in a Multi-Region Design

By Mays_Algebary and shruthi_nair

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This article focuses on network redundancy in a multi-region architecture; for details on the single-region design, refer to this blog.

The diagram below illustrates a common network design pattern for multi-region deployments, using either a Hub-and-Spoke or Azure Virtual WAN (vWAN) topology, and serves as the baseline for establishing redundant connectivity throughout this article. In each region, the Hub or Virtual Hub (VHub) extends Azure connectivity to Azure VMware Solution (AVS) via an ExpressRoute circuit. The regional Hub/VHub is connected to on-premises environments by cross-connecting (bowtie) both local and remote ExpressRoute circuits, ensuring redundancy. The concept of weight, used to influence traffic routing preferences, is discussed in the next section. The diagram below illustrates the traffic flow when both circuits are up and running.

Design Considerations

If a region loses its local ExpressRoute connection, AVS in that region loses connectivity to the on-premises environment. The VNets, however, retain connectivity to on-premises via the remote region's ExpressRoute circuit. The solutions discussed in this article aim to ensure redundancy for both AVS and VNets.

Looking at the diagram above, you might wonder: why do we need to set weights at all, and why do the AVS-ER connections (1b/2b) use the same weight as the primary on-premises connections (1a/2a)? Weight is used to influence routing decisions and ensure optimal traffic flow. In this scenario, both ExpressRoute circuits, ER1-EastUS and ER2-WestUS, advertise the same prefixes to the Azure ExpressRoute gateway. As a result, traffic from the VNet to on-premises would be ECMP'd across both circuits. To avoid suboptimal routing and ensure that traffic from the VNets prefers the local ExpressRoute circuit, a higher weight is assigned to the local path; a simplified model of this selection logic follows below.

It's also critical that the ExpressRoute gateway connections to on-premises (1a/2a) and to AVS (1b/2b) are assigned the same weight. Otherwise, traffic from the VNet to AVS will follow a less efficient route, because AVS routes are also learned over ER1-EastUS via Global Reach. For instance, VNets in EastUS would connect to AVS EUS through the ER1-EastUS circuit via Global Reach (as shown by the blue dotted line) instead of using the direct local path (orange line). This suboptimal routing is illustrated in the diagram below.

Now let us see which solutions achieve redundant connectivity. The following solutions apply to both Hub-and-Spoke and vWAN topologies unless noted otherwise.

Note: The diagrams in the upcoming solutions focus only on illustrating the failover traffic flow.
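The weight-then-AS-path logic described above can be sketched in a few lines of Python. This is an illustration of the article's reasoning about connection weight, AS-path length, and ECMP, not the actual ExpressRoute gateway implementation.

```python
def best_paths(candidates):
    """candidates: (name, weight, as_path_length) tuples for one prefix.
    Higher weight wins first, then shorter AS-path; remaining ties are ECMP'd."""
    top_weight = max(w for _, w, _ in candidates)
    survivors = [c for c in candidates if c[1] == top_weight]
    shortest = min(l for _, _, l in survivors)
    return [c for c in survivors if c[2] == shortest]

# Equal weights: both circuits survive, so traffic is ECMP'd across them.
print(best_paths([("ER1-EastUS", 0, 3), ("ER2-WestUS", 0, 3)]))
# A higher weight on the local connection pins traffic to ER1-EastUS.
print(best_paths([("ER1-EastUS", 100, 3), ("ER2-WestUS", 0, 3)]))
```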
Solution 1: Network Redundancy via ExpressRoute in a Different Peering Location

In this solution, deploy an additional ExpressRoute circuit in a different peering location within the same metro area (e.g., ER-PeeringLocation2), and enable Global Reach between this new circuit and the existing AVS ExpressRoute circuit (e.g., AVS-ER1). If you intend to use this second circuit as a failover path, prepend the on-premises prefixes advertised over it. Alternatively, if you want to use it as an active-active redundant path, do not prepend routes; in that case, both AVS and Azure VNets will ECMP traffic across both circuits (e.g., ER1-EastUS and ER-PeeringLocation2) when both are available.

Note: Compared to the standard topology, this design removes both the ExpressRoute cross-connect (bowtie) and the weight settings. When adding a second circuit in the same metro, there's no benefit in keeping them; otherwise traffic from the Azure VNet would prefer the local AVS circuit (AVS-ER1/AVS-ER2) to reach on-premises due to the higher weight, since on-premises routes are also learned over the AVS circuit via Global Reach. Also, when connecting the new circuit (e.g., ER-PeeringLocation2), remove all weight settings across the connections. Traffic will follow the optimal path based on BGP prepending on the new circuit, or load-balance (ECMP) if no prepend is applied.

Note: Use a public ASN to prepend the on-premises prefixes, as the AVS circuit (e.g., AVS-ER) strips private ASNs toward AVS; a short illustration follows after Solution 3.

Solution Insights

- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
- Can be cost-prohibitive, depending on the bandwidth of the second circuit.

Solution 2: Network Redundancy via ExpressRoute Direct

In this solution, ExpressRoute Direct is used to provision multiple circuits from a single port pair in each region; for example, ER2-WestUS and ER4-WestUS are created from the same port pair. This allows you to dedicate one circuit for local traffic and another for failover to the remote region. To ensure optimal routing, prepend the on-premises prefixes with a public ASN on the newly created circuits (e.g., ER3-EastUS and ER4-WestUS), and remove all weight settings across the connections; traffic will follow the optimal path based on the BGP prepending on the new circuits. For instance, if ER1-EastUS becomes unavailable, traffic from AVS and VNets in the EastUS region automatically routes through the ER4-WestUS circuit, ensuring continuity.

Note: Compared to the standard topology, this design connects the newly created ExpressRoute circuits (e.g., ER3-EastUS/ER4-WestUS) to the remote region's ExpressRoute gateway instead of cross-connecting (bowtie) the primary circuits (e.g., ER1-EastUS/ER2-WestUS).

Solution Insights

- Easy to implement if you already have ExpressRoute Direct.
- ExpressRoute Direct supports over-provisioning: you can create logical ExpressRoute circuits on top of your existing 10-Gbps or 100-Gbps ExpressRoute Direct resource, up to double the subscribed bandwidth (20 Gbps or 200 Gbps). For example, you can create two 10-Gbps ExpressRoute circuits within a single 10-Gbps ExpressRoute Direct resource (port pair).
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.

Solution 3: Network Redundancy via ExpressRoute Metro

Metro ExpressRoute is a new configuration that enables dual-homed connectivity to two different peering locations within the same city. This setup enhances resiliency by allowing traffic to continue flowing, over the same circuit, even if one peering location goes down.

Solution Insights

- Higher resiliency: provides increased reliability with a single circuit.
- Limited regional availability: currently available in select regions, with more being added over time.
- Cost-effective: offers redundancy without significantly increasing costs.
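Why do Solutions 1 and 2 insist on a public ASN for prepending? Because the AVS-facing circuit strips private ASNs, a private-ASN prepend disappears before AVS makes its path decision. The toy sketch below mimics that removal; the ASNs are arbitrary examples (64512-65534 is the 16-bit private range).

```python
PRIVATE_ASN_RANGE = range(64512, 65535)  # 16-bit private ASNs: 64512-65534

def strip_private_asns(as_path: list[int]) -> list[int]:
    """Mimic private-AS removal on the AVS-facing circuit."""
    return [asn for asn in as_path if asn not in PRIVATE_ASN_RANGE]

print(strip_private_asns([65050, 65050, 65050]))  # [] -- private prepend vanishes
print(strip_private_asns([1000, 1000, 1000]))     # public prepend survives intact
```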
Solution 4: Deploy VPN as a Backup to ExpressRoute

This solution mirrors the single-region design but extends it to multiple regions: a VPN serves as the backup path for each region in the event of an ExpressRoute failure. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required; the VHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

In this design, you should not cross-connect the ExpressRoute circuits (e.g., ER1-EastUS and ER2-WestUS) to the ExpressRoute gateways in the hub VNets (e.g., Hub-EUS or Hub-WUS). Doing so leads to routing issues, where the hub VNet programs only the on-premises routes learned via ExpressRoute. For instance, in the EastUS region, if the primary circuit (ER1-EastUS) goes down, Hub-EUS receives on-premises routes from both the VPN tunnel and the remote ER2-WestUS circuit, but it prefers and programs only the ExpressRoute-learned routes from the ER2-WestUS circuit. Since ExpressRoute gateways do not support route transitivity between circuits, AVS connected via AVS-ER would not receive the on-premises prefixes, resulting in routing failures.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights

- Cost-effective and straightforward to deploy.
- Increased latency: the VPN tunnel over the internet adds latency due to encryption overhead.
- Bandwidth considerations: multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 1 Gbps). For details on VPN gateway SKUs and tunnel throughput, refer to the documentation.
- As you can't cross-connect the ExpressRoute circuits, VNets use the VPN for failover instead of leveraging the remote region's ExpressRoute circuit.

Solution 5: Network Redundancy with Multiple On-Premises Locations (Split-Prefix)

In many scenarios, customers advertise the same prefix from multiple on-premises locations to Azure. However, if the customer can split prefixes across different on-premises sites, it simplifies the implementation of a failover strategy using the existing ExpressRoute circuits. In this design, each on-premises site advertises region-specific prefixes (e.g., 10.10.0.0/16 for EastUS and 10.70.0.0/16 for WestUS), along with a common supernet (e.g., 10.0.0.0/8). Under normal conditions, AVS and VNets in each region use longest-prefix match to route traffic efficiently to the appropriate on-premises location, as the sketch after the Solution Insights below walks through. If ER1-EastUS becomes unavailable, AVS and VNets in EastUS automatically fail over to ER2-WestUS, routing traffic via the supernet prefix to maintain connectivity.

Solution Insights

- Cost-effective: no additional deployment; uses the existing ExpressRoute circuits.
- Advertising specific prefixes per region might need additional planning.
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
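The split-prefix failover in Solution 5 is just longest-prefix match at work. Here is a small stdlib Python sketch using the article's example prefixes; the route table is a simplification of what the gateways actually program.

```python
import ipaddress

# Region-specific /16s plus the shared 10.0.0.0/8 supernet (failover path).
routes = {
    ipaddress.ip_network("10.10.0.0/16"): "ER1-EastUS",
    ipaddress.ip_network("10.70.0.0/16"): "ER2-WestUS",
    ipaddress.ip_network("10.0.0.0/8"): "ER2-WestUS",
}

def next_hop(dest: str, table) -> str:
    ip = ipaddress.ip_address(dest)
    matches = [net for net in table if ip in net]
    return table[max(matches, key=lambda net: net.prefixlen)]  # longest prefix wins

print(next_hop("10.10.1.5", routes))  # ER1-EastUS while the /16 is advertised
# ER1-EastUS fails: its /16 is withdrawn and traffic falls back to the supernet.
del routes[ipaddress.ip_network("10.10.0.0/16")]
print(next_hop("10.10.1.5", routes))  # ER2-WestUS via 10.0.0.0/8
```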
Solution 6: Prioritize Network Redundancy for One Region Over Another

If you're operating under budget constraints, can prioritize one region (for example, because critical workloads are hosted in a single location), and want to continue using your existing ExpressRoute setup, this solution could be an ideal fit.

In this design, assume AVS in EastUS (AVS-EUS) hosts the critical workloads. To ensure high availability, AVS-ER1 is configured with Global Reach connections to both the local ExpressRoute circuit (ER1-EastUS) and the remote circuit (ER2-WestUS). Make sure to prepend the on-premises prefixes advertised over ER2-WestUS with a public ASN to ensure optimal routing (no ECMP) from AVS-EUS across the two circuits.

AVS in WestUS (AVS-WUS), on the other hand, is connected via Global Reach only to its local region's ExpressRoute circuit (ER2-WestUS). If that circuit becomes unavailable, you can establish an on-demand Global Reach connection to ER1-EastUS, either manually or through automation (e.g., a triggered script; a sketch follows after the conclusion). This approach introduces temporary downtime until the Global Reach link is established.

You might be thinking: why not set up Global Reach between the AVS-WUS circuit and the remote region's circuit (connecting AVS-ER2 to ER1-EastUS), just as we did for AVS-EUS? Because it would lead to suboptimal routing. Due to the AS-path prepending on ER2-WestUS, if both ER1-EastUS and ER2-WestUS were linked to AVS-ER2, traffic would favor the remote ER1-EastUS circuit, since it presents a shorter AS-path. As a result, traffic would bypass the local ER2-WestUS circuit, causing inefficient routing. That is why, for AVS-WUS, it's better to use an on-demand Global Reach connection to ER1-EastUS as a backup path, enabled manually or via automation only when ER2-WestUS becomes unavailable.

Note: VNets fail over via the local AVS circuit. For example, Hub-EUS routes to on-premises through AVS-ER1 and ER2-WestUS via Global Reach (the secondary path).

Solution Insights

- Cost-effective.
- Workloads hosted in AVS within the non-critical region experience downtime if the local region's ExpressRoute circuit becomes unavailable, until the on-demand Global Reach connection is established.

Conclusion

Each solution has its own advantages and considerations, such as cost-effectiveness, ease of implementation, and increased resiliency. By carefully planning and implementing these solutions, organizations can ensure operational continuity and optimal traffic routing in multi-region deployments.
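As a practical addendum to Solution 6's suggestion of automating on-demand Global Reach, here is a hedged sketch of creating a circuit connection between the AVS circuit's private peering and the remote region's circuit with azure-mgmt-network. All resource IDs and names are placeholders, the /29 address prefix is the transit subnet Global Reach requires, and the payload shape should be validated against your SDK version (an authorization key is additionally needed for cross-subscription circuits).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

peering_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-avs/providers/"
              "Microsoft.Network/expressRouteCircuits/avs-er2/peerings/AzurePrivatePeering")
peer_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-er/providers/"
           "Microsoft.Network/expressRouteCircuits/er1-eastus/peerings/AzurePrivatePeering")

# Triggered (manually or by monitoring automation) only when ER2-WestUS is down.
client.express_route_circuit_connections.begin_create_or_update(
    "rg-avs",               # resource group of the AVS circuit (placeholder)
    "avs-er2",              # AVS circuit name (placeholder)
    "AzurePrivatePeering",  # peering on which the connection is created
    "gr-to-er1-eastus",     # connection name (placeholder)
    {
        "express_route_circuit_peering": {"id": peering_id},
        "peer_express_route_circuit_peering": {"id": peer_id},
        "address_prefix": "10.255.255.0/29",  # unused /29 required by Global Reach
    },
).result()
```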
Combining firewall protection and SD-WAN connectivity in Azure Virtual WAN

Virtual WAN (vWAN) introduces new security and connectivity features in Azure, including the ability to operate managed third-party firewalls and SD-WAN virtual appliances, integrated natively within a Virtual WAN hub (vhub). This article discusses the updated network designs resulting from these integrations and examines how to combine firewall protection and SD-WAN connectivity when using vWAN. The objective is not to delve into the specifics of the security or SD-WAN connectivity solutions, but to provide an overview of the possibilities.

Firewall protection in vWAN

In a vWAN environment, the firewall solution is deployed either automatically inside the vhub (Routing Intent) or manually in a transit VNet (VM-series deployment).

Routing Intent (managed firewall)

Routing Intent refers to the concept of implementing a managed firewall solution within the vhub for internet protection or private traffic protection (VNet-to-VNet, Branch-to-VNet, Branch-to-Branch), or both. The firewall can be either Azure Firewall or a third-party firewall, deployed within the vhub as Network Virtual Appliances or as a SaaS solution. A vhub containing a managed firewall is called a secured hub. For an updated list of third-party solutions supported with Routing Intent, refer to the managed NVA and SaaS solution documentation.

Transit VNet (unmanaged firewall)

Another way to provide inspection in vWAN is to manually deploy the firewall solution in a spoke of the vhub and to cascade the actual spokes behind that transit firewall VNet (also known as the indirect-spoke model or tiered-VNet design). In this discussion, the primary reasons for choosing an unmanaged deployment are that the firewall solution lacks an integrated vWAN offer, or that it has one but falls short in horizontal scalability or specific features compared to the VM-based version. For a detailed analysis of the pros and cons of each design, refer to this article.

SD-WAN connectivity in vWAN

Similar to the firewall deployment options, there are two main methods for extending an SD-WAN overlay into an Azure vWAN environment: a managed deployment within the vhub, or a standard VM-series deployment in a spoke of the vhub.

SD-WAN in vWAN deployment (managed)

In this scenario, a pair of virtual SD-WAN appliances is automatically deployed and integrated in the vhub, using dynamic routing (BGP) with the vhub router. Deployment and management processes are streamlined, as these appliances are seamlessly provisioned in Azure and set up for a simple import into the partner portal (the SD-WAN orchestrator). For an updated list of supported SD-WAN partners, refer to the documentation.

VM-series deployment (unmanaged)

This solution requires manually deploying the virtual SD-WAN appliances in a spoke of the vhub. The underlying VMs and the horizontal scaling are managed by the customer. Dynamic route exchange with the vWAN environment is achieved by BGP peering with the vhub. Alternatively, and depending on the complexity of your addressing plan, static routing may also be possible.

Firewall protection and SD-WAN in vWAN: THE CHALLENGE!

Currently, it is only possible to chain managed third-party SD-WAN connectivity with Azure Firewall in the same vhub, or to use dual-role SD-WAN connectivity-and-security appliances. Routing Intent provided by a third-party firewall combined with another managed SD-WAN solution inside the same vhub is not yet supported.
But how can firewall protection and SD-WAN connectivity be integrated together within vWAN?

Solution 1: Routing Intent with Azure Firewall and managed SD-WAN (same vhub)

Firewall solution: managed. SD-WAN solution: managed.

This design is only compatible with Routing Intent using Azure Firewall, as it is the sole firewall solution that can be combined with a managed SD-WAN in vWAN deployment in that same vhub. With the private traffic protection policy enabled in Routing Intent, all east-west flows (VNet-to-VNet, Branch-to-VNet, Branch-to-Branch) are inspected.

Solution 2: Routing Intent with a third-party firewall and managed SD-WAN (two vhubs)

Firewall solution: managed. SD-WAN solution: managed.

To have both a managed third-party firewall solution in vWAN and a managed SD-WAN solution in vWAN in the same region, the only option is to have one vhub dedicated to the security deployment and another vhub dedicated to the SD-WAN deployment. In each region, spoke VNets are connected to the secured vhub, while SD-WAN branches are connected to the vhub containing the SD-WAN deployment. In this design, Routing Intent private traffic protection provides VNet-to-VNet and Branch-to-VNet inspection; Branch-to-Branch traffic, however, is not inspected.

Solution 3: Routing Intent and SD-WAN spoke VNet (same vhub)

Firewall solution: managed. SD-WAN solution: unmanaged.

This design is compatible with any Routing Intent-supported firewall solution (Azure Firewall or third-party) and with any SD-WAN solution. With Routing Intent private traffic protection enabled, all east-west flows (VNet-to-VNet, Branch-to-VNet, Branch-to-Branch) are inspected.

Solution 4: Transit firewall VNet and managed SD-WAN (same vhub)

Firewall solution: unmanaged. SD-WAN solution: managed.

This design utilizes the indirect-spoke model, enabling the deployment of managed SD-WAN in vWAN appliances. It provides VNet-to-VNet and Branch-to-VNet inspection, but because the firewall solution is not hosted in the hub, Branch-to-Branch traffic is not inspected.

Solution 5: Transit firewall VNet and SD-WAN spoke VNet (same vhub)

Firewall solution: unmanaged. SD-WAN solution: unmanaged.

This design integrates both the security and the SD-WAN connectivity as unmanaged solutions, placing the responsibility for deploying and managing the firewall and the SD-WAN hub on the customer. Just as in Solution 4, only VNet-to-VNet and Branch-to-VNet traffic is inspected.

Conclusion

Although it is currently not possible to combine a managed third-party firewall solution with a managed SD-WAN deployment within the same vhub, numerous design options are still available to meet various needs, whether managed or unmanaged approaches are preferred.
vWAN to vWAN Connectivity Options

Option 1: IPsec Tunnels using Azure Virtual Network Gateways

In this option, users simply provision Azure virtual network gateways on each vhub in each vWAN environment to connect the two vWANs together. It's important to note that today you cannot change the vWAN virtual network gateway ASN, which is 65515; this is hardcoded in the vhub gateway configuration. At a later date, there should be an option to change the virtual network gateway ASN. For this reason, you cannot do BGP over IPsec, due to BGP loop prevention: the remote vhub would receive routes from the source vhub with 65515 in the AS-path, see its own ASN, and drop them. Therefore, if you want to connect two different vWAN environments this way, the tunnels need to be static; there is no BGP option. The other thing to note is that the maximum throughput per tunnel is around 2.4 Gbps, which is an SDN limitation; this can be increased by adding more tunnels.

Pros:
- Uses the native Azure VPN gateways of each respective vhub and vWAN
- Cheaper than using dedicated high-bandwidth links like ExpressRoute

Cons:
- Cannot use BGP, because the vhub gateway ASN is hardcoded to 65515
- VPN tunnel throughput is at most about 2.4 Gbps per tunnel, depending on ciphers

Option 2: ExpressRoute Circuits using MSEE hairpinning

This is another option using Azure-native solutions. If an ExpressRoute circuit is already provisioned, we can simply hook the other vhubs in each vWAN environment up to the same circuit. As with regular hub-and-spoke, when multiple VNets are connected to the same circuit, the egress point is the MSEE at the peering pop. Traffic between the vhubs in different vWANs simply hairpins down to the Azure MSEEs at each pop location before traveling to the remote vhub and vWAN environment. (Side note: I wrote another blog with my colleague Daniel Mauser on MSEE hairpin alternatives.)

Pros:
- Minimal setup required with existing ExpressRoute circuits
- Higher bandwidth than SD-WAN or IPsec tunnels

Cons:
- Latency can be an issue, as the egress point is the peering pop
- Hairpinning can tax the remote gateway for ingress flows if FastPath is not available
- Cost of running the ExpressRoute circuit

Option 3: SD-WAN Tunnels with BGP+IPsec inside the vhubs

This option is good for users who are already running their own SD-WAN devices today to connect to on-premises WAN links, or who don't need the extra bandwidth of ExpressRoute. By deploying SD-WAN partner devices in each respective vhub to build the connections between vWANs, users can run BGP over IPsec to make the connectivity work. To make the routing work, the SD-WAN operating system must apply an "AS-path replace" or "AS-path exclude" BGP command for the vhub ASN 65520 and the Route Server ASN 65515, for example "as-path exclude 65520 65515" or similar, depending on the vendor, as an inbound route-map on each BGP peer. That way, the remote vWAN vhub won't drop the route, because it won't see its own ASN in the path (a sketch of this mechanism follows below). This is the same behavior as Option 1, except that here we have the ability to do BGP manipulation on third-party devices, unlike the Azure virtual network gateways. The SD-WAN devices can use different ASNs and do eBGP, or they can share an ASN and run an iBGP session.
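The loop-prevention problem from Option 1 and the "as-path exclude" fix described above can be illustrated in a few lines. This is a conceptual Python sketch of eBGP's rule of dropping any route whose AS-path already contains the receiver's own ASN, not vendor configuration.

```python
VWAN_ASNS = {65520, 65515}  # vhub router and Route Server ASNs

def accepts(local_asn: int, as_path: list[int]) -> bool:
    """eBGP loop prevention: reject a route that already carries our ASN."""
    return local_asn not in as_path

def as_path_exclude(as_path: list[int], excluded: set[int]) -> list[int]:
    """Mimic 'as-path exclude 65520 65515' applied by the SD-WAN device."""
    return [asn for asn in as_path if asn not in excluded]

path_from_remote_vwan = [65515, 65520, 65520, 65515]
print(accepts(65515, path_from_remote_vwan))          # False: route dropped
cleaned = as_path_exclude(path_from_remote_vwan, VWAN_ASNS)
print(accepts(65515, cleaned))                        # True: route accepted
```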
Pros:
- More granular control and options to manipulate routing
- Potential for more optimal throughput and tuning
- Some devices can also do double duty and perform inspection

Cons:
- Complexity to set up and maintain
- Only certain SD-WAN devices are supported inside the vhub today
- Potentially more expensive than Azure virtual network gateways
- SD-WAN devices cannot be combined with NVAs inside the vhub

Option 4: SD-WAN Tunnels with BGP+IPsec, with SD-WAN devices BGP-peered in spokes

This is similar to Option 3, except the SD-WAN device sits in a spoke VNet that is peered to each vhub, and we BGP-peer the SD-WAN device to the Route Server instances inside the vhub. This is a good fit for scenarios where users have SD-WAN devices that cannot be deployed inside vhubs but still support BGP. As above, we need to apply inbound route-maps on each SD-WAN device that "as-path exclude" or "as-path replace" both the 65520 and 65515 ASNs, so the receiving end does not drop the routes.

Pros:
- More granular control and options to manipulate routing
- Potential for more optimal throughput and tuning
- Some devices can also do double duty and perform inspection
- Flexibility for deployment outside the vhubs

Cons:
- Complexity to set up and maintain
- Potentially more expensive than Azure virtual network gateways

A side note regarding non-vWAN VNets that you wish to connect to vWAN environments for new deployments: the following steps would need to be enabled on both the vhub and the regular Azure VNet with the ExpressRoute gateway to allow the hairpinning!