Azure Networking Portfolio Consolidation
Overview

Over the past decade, Azure Networking has expanded rapidly, bringing incredible tools and capabilities to help customers build, connect, and secure their cloud infrastructure. But we've also heard strong feedback: with over 40 different products, it hasn't always been easy to navigate and find the right solution. The complexity often led to confusion, slower onboarding, and missed capabilities. That's why we're excited to introduce a more focused, streamlined, and intuitive experience across Azure.com, the Azure portal, and our documentation, pivoting around four core networking scenarios:

- Network foundations: Network foundations provide the core connectivity for your resources, using Virtual Network, Private Link, and DNS to build the foundation for your Azure network. Try it with this link: Network foundations
- Hybrid connectivity: Hybrid connectivity securely connects on-premises, private, and public cloud environments, enabling seamless integration, global availability, and end-to-end visibility, presenting major opportunities as organizations advance their cloud transformation. Try it with this link: Hybrid connectivity
- Load balancing and content delivery: Load balancing and content delivery helps you choose the right option to ensure your applications are fast, reliable, and tailored to your business needs. Try it with this link: Load balancing and content delivery
- Network security: Securing your environment is just as essential as building and connecting it. The Network Security hub brings together Azure Firewall, DDoS Protection, and Web Application Firewall (WAF) to provide a centralized, unified approach to cloud protection. With unified controls, it helps you manage security more efficiently and strengthen your security posture. Try it with this link: Network security

This new structure makes it easier to discover the right networking services and get started with just a few clicks so you can focus more on building, and less on searching.

What you'll notice:
- Clearer starting points: Azure Networking is now organized around four core scenarios and twelve essential services, reflecting the most common customer needs. Additional services are presented within the context of these scenarios, helping you stay focused and find the right solution without feeling overwhelmed.
- Simplified choices: We've merged overlapping or closely related services to reduce redundancy. That means fewer, more meaningful options that are easier to evaluate and act on.
- Sunsetting outdated services: To reduce clutter and improve clarity, we're sunsetting underused offerings such as white-label CDN services and China CDN. These capabilities have been rolled into newer, more robust services, so you can focus on what's current and supported.

What this means for you
- Faster decision-making: With clearer guidance and fewer overlapping products, it's easier to discover what you need and move forward confidently.
- More productive sales conversations: With this simplified approach, you'll get more focused recommendations and less confusion among sellers.
- Better product experience: This update makes the Azure Networking portfolio more cohesive and consistent, helping you get started quickly, stay aligned with best practices, and unlock more value from day one.

The portfolio consolidation initiative is a strategic effort to simplify and enhance the Azure Networking portfolio, ensuring better alignment with customer needs and industry best practices.
By focusing on top-line services, combining related products, and retiring outdated offerings, Azure Networking aims to provide a more cohesive and efficient product experience.

Azure.com
Before: Our original Solution page on Azure.com was disorganized and static, displaying a small portion of services in no discernible order.
After: The revised solution page is now dynamic, allowing customers to click deeper into each networking and network security category and displaying the top-line services, simplifying the customer experience.

Azure Portal
Before: With over 40 networking services available, we know it can feel overwhelming to figure out what's right for you and where to get started.
After: To make it easier, we've introduced four streamlined networking hubs, each built around a specific scenario to help you quickly identify the services that match your needs. Each offers an overview to set the stage, key services to help you get started, guidance to support decision-making, and a streamlined left-hand navigation for easy access to all services and features.

Documentation
For documentation, we reviewed our current assets and created new assets aligned with the changes in the portal experience. Like Azure.com, we found the old experiences were disorganized and not well aligned. We updated our assets to focus on our top-line networking services and to call out the pillars. Our belief is these changes will allow our customers to more easily find the relevant and important information they need for their Azure infrastructure.

Azure Network Hub
Before the updates, we had a hub page organized around different categories and not well laid out. In the updated hub page, we provide relevant links for top-line services within all of the Azure networking scenarios, as well as a section linking to each scenario's hub page.

Scenario Hub pages
We added scenario hub pages for each of the scenarios. This provides our customers with a central hub for information about the top-line services for each scenario and how to get started. We also included common scenarios and use cases for each scenario, along with references for deeper learning across the Azure Architecture Center, Well-Architected Framework, and Cloud Adoption Framework libraries.

Scenario Overview articles
We created new overview articles for each scenario. These articles were designed to provide customers with an introduction to the services included in each scenario, guidance on choosing the right solutions, and an introduction to the new portal experience. Here's the Load balancing and content delivery overview:

Documentation links
Azure Networking hub page: Azure networking documentation | Microsoft Learn
Scenario Hub pages:
Azure load balancing and content delivery | Microsoft Learn
Azure network foundation documentation | Microsoft Learn
Azure hybrid connectivity documentation | Microsoft Learn
Azure network security documentation | Microsoft Learn
Scenario Overview pages:
What is load balancing and content delivery? | Microsoft Learn
Azure Network Foundation Services Overview | Microsoft Learn
What is hybrid connectivity? | Microsoft Learn
What is Azure network security? | Microsoft Learn

Improving user experience is a journey, and in the coming months we plan to do more on this. Watch for more blogs over the next few months covering further improvements.

Connectivity options between Hub-and-Spoke and Azure Virtual WAN
Contents
Overview
Scenario 1 – Traffic hair-pinning using ExpressRoute
Scenario 2 – Build a virtual tunnel (SD-WAN or IPSec)
Scenario 3 – vNet Peering and vHub connection coexistence
Scenario 4 – Transit virtual network for decentralized vNets
Conclusion
Bonus

Overview
This article discusses different options for interconnecting a Hub and Spoke network with Virtual WAN in migration scenarios. The goal is to expand on additional options that can help customers migrate their existing Hub and Spoke topology to Azure Virtual WAN. You can find a comprehensive article, Migrate to Azure Virtual WAN, that goes over several considerations during the migration process. This article focuses only on the connectivity needed to facilitate the migration. Therefore, it is important to note that the interconnectivity options listed here are intended to be used in the short term, ensuring a temporary coexistence between both topologies while the workloads on the Spoke vNets are migrated, with the end goal of disconnecting both environments after the migration is completed. This article mainly discusses scenarios with a Virtual WAN Secured Virtual Hub; exceptions are noted where applicable. The setup assumes the use of routing intent and route policies, replacing the previous approach of using route tables to secure Virtual Hubs. For more information, please consult: How to configure Virtual WAN Hub routing intent and routing policies.

Scenario 1 – Traffic hair-pinning using ExpressRoute circuits
To begin the migration, ensure that the target Virtual WAN Hub (vHub) includes all necessary components. If the existing hub is equipped with firewalls, SD-WAN, or VPN (Point-to-Site or Site-to-Site), confirm that these elements are also present and correctly configured on the target Virtual WAN. Additionally, for any migrated Spoke, an optional vNet peering can be maintained to the original Hub vNet if there are dependencies, such as shared services (DNS, Active Directory, and other services). Make sure that the peering configuration has the "use remote gateway" option disabled, because once the Spoke is connected to the vHub, the vNet connection requires "use remote gateway" to be enabled. In this scenario, traffic between the Hub and Spoke environment and the Virtual WAN Hub is facilitated using an ExpressRoute circuit that is connected to both environments. When a single circuit is connected to both environments, routes are exchanged between them, and traffic hairpins at the MSEE (Microsoft Enterprise Edge) routers. This is a similar approach to the one described in the article Migrate to Azure Virtual WAN.

Connectivity flow:
- Spoke vNet → Migrated Spoke vNets: 1. vNet Hub Firewall 2. vNet ExpressRoute Gateway 3. MSEE via hairpin 4. vHub ExpressRoute Gateway 5. vHub Firewall
- Spoke vNet → Branches (VPN/SD-WAN): 1. vNet Hub Firewall 2. vNet SD-WAN NVA or VPN Gateway
- Spoke vNet → On-premises DC: 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer
- Migrated vNet → Branches (VPN/SD-WAN): 1. vHub Firewall 2. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet → On-premises DC: 1. vNet Hub Firewall 2. vNet ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer

Note: Connectivity also considers that return traffic follows the same path and components.

Pros
- Traffic stays on the Microsoft backbone and does not go over the provider or customer CPE.
- Built-in routing provided by the Azure platform (this is configurable; see Considerations).
Cons
- Expect high latency. Traffic between the VNet Hub and vHubs crosses MSEE routers outside the Azure region in a cloud exchange facility, increasing latency due to the distance to the region.
- Single point of failure. Because the MSEE is located at the Edge location, an outage at that site can impact communication. To ensure redundancy, you can utilize a second MSEE at a different Edge location within the same metro area to achieve redundancy and lower latency. Additionally, a second MSEE in a different metro area can also provide redundancy, although this might result in increased latency.

Considerations
A new feature has been introduced to block MSEE hairpinning. To enable this scenario, you need to activate Allow Traffic from remote Virtual WAN Networks (on the VNet Hub side) and Allow Traffic from non-Virtual WAN Networks (on the Virtual WAN Hub side). For more details, refer to this article: Customisation controls for connectivity between Virtual Networks over ExpressRoute.

Scenario 2 – Build a virtual tunnel (SD-WAN or IPSec)
The same prerequisites for the target vHub apply for this option before beginning the migration. However, instead of utilizing ExpressRoute transit, in this scenario you establish a direct virtual tunnel between the existing VNet Hub and the vHub to facilitate communication. There are several options for achieving this, including:
- Use the Azure native VPN Gateway on both the VNet Hub and the vHub for IPSec tunnels. Up to four tunnels can be created when the VNet Hub VPN Gateway is configured as Active/Active (by default, vHub VPN Gateways are already Active/Active). It is important to consider that customers can use either BGP or static routing when implementing this option. However, BGP will be restricted: if the VNet VPN Gateway is the only gateway present, you can use a custom ASN other than 65515; if there is another gateway, such as ExpressRoute or Azure Route Server, the ASN must be set to 65515. Since the vHub VPN Gateway does not allow a custom ASN at this moment (65515 is the default ASN), static routes will be required for this setup.
- Use a 3rd-party NVA to establish SD-WAN connectivity or an IPSec tunnel between both sides. Using this option, you can leverage either static or BGP routing, where BGP offers better integration with the vHub and less administrative effort.

Connectivity flow:
- Spoke vNet → Migrated Spoke vNets: 1. vNet Hub Firewall 2. vNet Hub SD-WAN NVA or VPN Gateway 3. vHub SD-WAN NVA or VPN Gateway 4. vHub Firewall
- Spoke vNet → Branches (VPN/SD-WAN): 1. vNet Hub Firewall 2. vNet Hub SD-WAN NVA or VPN Gateway
- Spoke vNet → On-premises DC: 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. MSEE Hairpin 4. Provider/Customer
- Migrated vNet → Branches (VPN/SD-WAN): 1. vHub Firewall 2. SD-WAN NVA or VPN Gateway
- Migrated vNet → On-premises DC: 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. MSEE Hairpin 4. Provider/Customer

Note: Connectivity also considers that return traffic follows the same path and components.

Pros
- Traffic remains within the Microsoft backbone in the region, resulting in lower latency compared to Option 1.

Cons
- Administrative overhead when using static routes and managing extra network components.
- Cost of adding a new VPN Gateway or 3rd-party NVA to build the virtual tunnel.
- Throughput may be limited based on the type of virtual tunnel technology used. This limitation can be mitigated by adding multiple tunnels, which requires BGP + ECMP to balance traffic between them.
It is important to note that Azure allows up to eight tunnels, which is the maximum number of programmed routes for the same networks with different next hops, each indicating a specific tunnel.

Scenario 3 – vNet Peering and vHub connection coexistence
In this scenario, Spoke vNets originally connected to the vNet Hub are migrated to the vHub while maintaining the existing peering with the vNet Hub, but with the Use Remote Gateway configuration disabled. This allows the migrated vNets to retain connectivity with the source vNet Hub while also connecting to the vHub. The connection to the vHub requires Use Remote Gateway to be enabled, which directs all traffic towards on-premises to use the vHub. To connect with other spokes via the vHub, the migrated vNet needs a UDR with routes to the vNet spoke prefixes using the vNet Hub Firewall as the next hop. Use route summarization for contiguous prefixes, or enter specific prefixes if they are not contiguous. Additionally, enable Gateway Propagation in the UDR so migrated Spoke vNets can learn routes from the vHub (RFC 1918, default route, or both).

Connectivity flow:
- Spoke vNet → Migrated Spoke vNets: 1. vNet Hub Firewall
- Spoke vNet → Branches (VPN/SD-WAN): 1. vNet Hub Firewall 2. vNet SD-WAN NVA or VPN Gateway
- Spoke vNet → On-premises DC: 1. vNet Hub Firewall 2. ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer
- Migrated vNet → Branches (VPN/SD-WAN): 1. vHub Firewall 2. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet → On-premises DC: 1. vHub Firewall 2. vHub ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer

Note: Connectivity means that return traffic follows the same path and components.

Pros
- Traffic remains within the Microsoft backbone in the region, resulting in lower latency compared to Option 1.
- No throughput limitation imposed by virtual tunnels compared to Option 2. Throughput will be limited by the VM size.

Cons
- Administrative overhead to adjust the UDR to reach the Spoke vNets connected over the vNet Hub.

Scenario 4 – Transit virtual network for decentralized vNets
This use case involves a decentralized virtual network model where each vNet has an ExpressRoute Gateway for connectivity to on-premises systems. Traffic between virtual networks is managed using virtual network peering, based on the specific connectivity requirements of the customer. Each virtual network has its own gateway, which prevents connecting them directly to the virtual hub, because the remote gateway option needs to be enabled. If the customer can tolerate the downtime associated with removing the ExpressRoute Gateway from the migrated vNet, they have the option to establish a direct vNet connection to the vHub, thereby simplifying the solution. This process typically takes approximately 45 minutes, excluding the rollback procedure, which would require an additional 45 minutes, potentially making this approach prohibitive for most customers. However, customers with existing Azure workloads often aim to minimize downtime. As illustrated in the diagram below, they can create a transit vNet equipped with a firewall or a Network Virtual Appliance (NVA) with routing capabilities. This configuration allows the migrated vNet to establish regular peering, thereby maintaining connectivity without significant disruption.
The solution illustrated in this section uses static route propagation at the vNet connection level towards the Transit vNet, which currently requires non-secured Virtual WAN hubs (note that support for static route propagation is on the Virtual WAN roadmap). Alternatively, you can use BGP peering from the Firewall or NVA to program the migrated vNets' summary prefixes. For Firewall implementations with BGP, it is recommended to leverage Next hop IP support for Virtual WAN, where traffic flows over a load-balancing feature to ensure traffic symmetry. In that scenario you can also leverage Secured vHubs. The migration process also necessitates adjustments to the routes for the migrated vNet to facilitate traffic flow to on-premises systems using the vHub. This includes utilizing static routes at the connection from the Transit vNet to the vHub to advertise a summary route via the Firewall in the Transit vNet, ensuring return traffic and proper advertisement to the on-premises environment. Once the route configurations are established, the ExpressRoute connection can be removed. The customer can then proceed to Step 2, which will allow them to make the final adjustments and complete the full integration with the vHub following the outlined steps:
1. Remove the ExpressRoute Gateway.
2. Create the vNet connection to the vHub, which will allow the specific migrated vNet prefix to be advertised to the vHub as well as leaked down to ExpressRoute. Once step 2 is completed, traffic should start to flow over the vNet connection to the vHub.
3. Remove the vNet peering to the Transit vNet.

Connectivity flow:
- vNet1/vNet2 → Migrated Spoke vNets: 1. Direct vNet peering
- vNet1/vNet2 → Branches (VPN/SD-WAN): 1. ExpressRoute Gateway 2. ExpressRoute Circuit (MSEE) 3. Provider/Customer 4. VPN/SD-WAN
- vNet1/vNet2 → On-premises DC: 1. ExpressRoute Gateway 2. ExpressRoute Circuit (MSEE) 3. Provider/Customer
- Migrated vNet (Step 1) → Branches (VPN/SD-WAN): 1. Transit Firewall 2. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet (Step 1) → On-premises DC: 1. Transit Firewall 2. vHub ExpressRoute Gateway 3. ExpressRoute Circuit (MSEE) 4. Provider/Customer
- Migrated vNet (Step 2) → Branches (VPN/SD-WAN): 1. vHub SD-WAN NVA or VPN Gateway
- Migrated vNet (Step 2) → On-premises DC: 1. vHub ExpressRoute Gateway 2. ExpressRoute Circuit (MSEE) 3. Provider/Customer

Note: Connectivity means that return traffic follows the same path and components.

Pros
- Traffic remains on the Microsoft backbone, ensuring minimal latency.
- Not subject to the same throughput limits associated with the Option 2 solution (virtual tunnels).

Cons
- Administrative overhead associated with maintaining the additional transit virtual network, including user-defined route management and vHub vNet connection static route configuration.
- Costs incurred from operating any supplementary firewalls (FWs) or network virtual appliances (NVAs) in the transit vNet.

Conclusion
This article outlined four strategies for migrating from Hub and Spoke networking to Azure Virtual WAN: ExpressRoute hairpinning, VPN or SD-WAN virtual tunnels, vNet peering with vHub connections, and transit virtual networks for decentralized vNets. It highlighted their pros, cons, and administrative considerations. It is important to assess which approach best fits your needs by weighing each scenario's advantages and drawbacks.
Bonus
The diagrams in Excalidraw format related to this blog post are available in the following GitHub repository.

Network Redundancy Between AVS, On-Premises, and Virtual Networks in a Multi-Region Design
By Mays_Algebary shruthi_nair

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This article focuses on network redundancy in a multi-region architecture. For details on single-region design, refer to this blog.

The diagram below illustrates a common network design pattern for multi-region deployments, using either a Hub-and-Spoke or Azure Virtual WAN (vWAN) topology, and serves as the baseline for establishing redundant connectivity throughout this article. In each region, the Hub or Virtual Hub (VHub) extends Azure connectivity to Azure VMware Solution (AVS) via an ExpressRoute circuit. The regional Hub/VHub is connected to on-premises environments by cross-connecting (bowtie) both local and remote ExpressRoute circuits, ensuring redundancy. The concept of weight, used to influence traffic routing preferences, will be discussed in the next section. The diagram below illustrates the traffic flow when both circuits are up and running.

Design Considerations
If a region loses its local ExpressRoute connection, AVS in that region will lose connectivity to the on-premises environment. However, VNets will still retain connectivity to on-premises via the remote region's ExpressRoute circuit. The solutions discussed in this article aim to ensure redundancy for both AVS and VNets.

Looking at the diagram above, you might wonder: why do we need to set weights at all, and why do the AVS-ER connections (1b/2b) use the same weight as the primary on-premises connections (1a/2a)? Weight is used to influence routing decisions and ensure optimal traffic flow. In this scenario, both ExpressRoute circuits, ER1-EastUS and ER2-WestUS, advertise the same prefixes to the Azure ExpressRoute gateway. As a result, traffic from the VNet to on-premises would be ECMPed across both circuits. To avoid suboptimal routing and ensure that traffic from the VNets prefers the local ExpressRoute circuit, a higher weight is assigned to the local path. It's also critical that the ExpressRoute gateway connections to on-premises (1a/2a) and to AVS (1b/2b) are assigned the same weight. Otherwise, traffic from the VNet to AVS will follow a less efficient route, as AVS routes are also learned over ER1-EastUS via Global Reach. For instance, VNets in EastUS would connect to AVS EUS through the ER1-EastUS circuit via Global Reach (as shown by the blue dotted line), instead of using the direct local path (orange line). This suboptimal routing is illustrated in the diagram below.

Now let us see what solutions we can use to achieve redundant connectivity. The following solutions apply to both Hub-and-Spoke and vWAN topologies unless noted otherwise.

Note: The diagrams in the upcoming solutions focus only on illustrating the failover traffic flow.

Solution 1: Network Redundancy via ExpressRoute in a Different Peering Location
In this solution, deploy an additional ExpressRoute circuit in a different peering location within the same metro area (e.g., ER2–PeeringLocation2), and enable Global Reach between this new circuit and the existing AVS ExpressRoute (e.g., AVS-ER1). If you intend to use this second circuit as a failover path, apply prepends to the on-premises prefixes advertised over it.
Alternatively, if you want to use it as an active-active redundant path, do not prepend routes; in this case, both AVS and Azure VNets will use ECMP to distribute traffic across both circuits (e.g., ER1–EastUS and ER–PeeringLocation2) when both are available.

Note: Compared to the standard topology, this design removes both the ExpressRoute cross-connect (bowtie) and the weight settings. When adding a second circuit in the same metro, there's no benefit in keeping them; otherwise traffic from the Azure VNet will prefer the local AVS circuit (AVS-ER1/AVS-ER2) to reach on-premises due to the higher weight, as on-premises routes are also learned over the AVS circuit (AVS-ER1/AVS-ER2) via Global Reach. Also, when connecting the new circuit (e.g., ER–PeeringLocation2), remove all weight settings across the connections. Traffic will follow the optimal path based on BGP prepending on the new circuit, or load-balance (ECMP) if no prepend is applied.

Note: Use a public ASN to prepend the on-premises prefix, as the AVS circuit (e.g., AVS-ER) will strip the private ASN toward AVS.

Solution Insights
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
- It could be cost prohibitive depending on the bandwidth of the second circuit.

Solution 2: Network Redundancy via ExpressRoute Direct
In this solution, ExpressRoute Direct is used to provision multiple circuits from a single port pair in each region; for example, ER2-WestUS and ER4-WestUS are created from the same port pair. This allows you to dedicate one circuit for local traffic and another for failover to a remote region. To ensure optimal routing, prepend the on-premises prefixes using a public ASN on the newly created circuits (e.g., ER3-EastUS and ER4-WestUS). Remove all weight settings across the connections; traffic will follow the optimal path based on BGP prepending on the new circuit (a short sketch illustrating this prepend-based path selection follows Solution 3 below). For instance, if ER1-EastUS becomes unavailable, traffic from AVS and VNets in the EastUS region will automatically route through the ER4-WestUS circuit, ensuring continuity.

Note: Compared to the standard topology, this design connects the newly created ExpressRoute circuits (e.g., ER3-EastUS/ER4-WestUS) to the remote region's ExpressRoute gateway (black dotted lines) instead of having the bowtie to the primary circuits (e.g., ER1-EastUS/ER2-WestUS).

Solution Insights
- Easy to implement if you have ExpressRoute Direct.
- ExpressRoute Direct supports over-provisioning, where you can create logical ExpressRoute circuits on top of your existing ExpressRoute Direct resource of 10 Gbps or 100 Gbps up to the subscribed bandwidth of 20 Gbps or 200 Gbps. For example, you can create two 10-Gbps ExpressRoute circuits within a single 10-Gbps ExpressRoute Direct resource (port pair).
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.

Solution 3: Network Redundancy via ExpressRoute Metro
Metro ExpressRoute is a new configuration that enables dual-homed connectivity to two different peering locations within the same city. This setup enhances resiliency by allowing traffic to continue flowing, using the same circuit, even if one peering location goes down.

Solution Insights
- Higher resiliency: Provides increased reliability with a single circuit.
- Limited regional availability: Currently available in select regions, with more being added over time.
- Cost-effective: Offers redundancy without significantly increasing costs.
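For readers who want to see the prepend mechanics from Solutions 1 and 2 in isolation, here is a minimal Python sketch. It is illustrative only, not Azure configuration, and the on-premises ASN is a made-up placeholder. It models how a BGP speaker compares two advertisements of the same on-premises prefix: the shorter AS path wins, the prepended circuit carries traffic only when the primary route disappears, and equal-length paths are shared via ECMP.

```python
ONPREM_ASN = 65510  # hypothetical public ASN for the on-premises network

def advertise(prefix, as_path):
    """Model a BGP advertisement as a prefix plus its AS path."""
    return {"prefix": prefix, "as_path": list(as_path)}

def best_paths(adverts):
    """Keep only the advertisement(s) with the shortest AS path (ties -> ECMP)."""
    shortest = min(len(a["as_path"]) for a in adverts)
    return [a for a in adverts if len(a["as_path"]) == shortest]

# Primary circuit advertises the prefix as-is; the backup circuit prepends twice.
primary = advertise("10.2.0.0/16", [ONPREM_ASN])
backup = advertise("10.2.0.0/16", [ONPREM_ASN, ONPREM_ASN, ONPREM_ASN])

print(best_paths([primary, backup]))   # only the primary path wins: shorter AS path
print(best_paths([backup]))            # primary withdrawn: the backup now carries traffic
print(len(best_paths([primary, advertise("10.2.0.0/16", [ONPREM_ASN])])))  # 2 -> ECMP
```

The same comparison explains why the weight settings in the baseline design are no longer needed once prepending controls the path: the receiving gateway already has an unambiguous best route.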
Solution 4: Deploy VPN as a Backup to ExpressRoute
This solution mirrors Solution 1 of the single-region design but extends it to multiple regions. In this approach, a VPN serves as the backup path for each region in the event of an ExpressRoute failure. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet. ARS enables seamless transit routing between ExpressRoute and the VPN gateway. In a vWAN topology, ARS is not required; the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

In this design, you should not cross-connect ExpressRoute circuits (e.g., ER1-EastUS and ER2-WestUS) to the ExpressRoute gateways in the Hub VNets (e.g., Hub-EUS or Hub-WUS). Doing so will lead to routing issues, where the Hub VNet only programs the on-premises routes learned via ExpressRoute. For instance, in the EastUS region, if the primary circuit (ER1-EastUS) goes down, Hub-EUS will receive on-premises routes from both the VPN tunnel and the remote ER2-WestUS circuit. However, it will prefer and program only the ExpressRoute-learned routes from the ER2-WestUS circuit. Since ExpressRoute gateways do not support route transitivity between circuits, AVS connected via AVS-ER will not receive the on-premises prefixes, resulting in routing failures.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, you should prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights
- Cost-effective and straightforward to deploy.
- Increased latency: The VPN tunnel over the internet adds latency due to encryption overhead.
- Bandwidth considerations: Multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 1G). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
- As you can't cross-connect the ExpressRoute circuits, VNets will use the VPN for failover instead of leveraging the remote region's ExpressRoute circuit.

Solution 5: Network Redundancy via Multiple On-Premises Locations (Split-Prefix)
In many scenarios, customers advertise the same prefix from multiple on-premises locations to Azure. However, if the customer can split prefixes across different on-premises sites, it simplifies the implementation of a failover strategy using existing ExpressRoute circuits. In this design, each on-premises location advertises region-specific prefixes (e.g., 10.10.0.0/16 for EastUS and 10.70.0.0/16 for WestUS), along with a common supernet (e.g., 10.0.0.0/8). Under normal conditions, AVS and VNets in each region use longest prefix match to route traffic efficiently to the appropriate on-premises location. If, for instance, ER1-EastUS becomes unavailable, AVS and VNets in EastUS will automatically fail over to ER2-WestUS, routing traffic via the supernet prefix to maintain connectivity (a short sketch below illustrates this longest-prefix-match failover).

Solution Insights
- Cost-effective: no additional deployment, using existing ExpressRoute circuits.
- Advertising specific prefixes over each region might need additional planning.
- Ideal for mission-critical applications, providing predictable throughput and bandwidth for backup.
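To make the Solution 5 behavior concrete, the minimal Python sketch below (illustrative only, not Azure code) shows longest-prefix-match selection: the region-specific /16 keeps traffic on the local circuit, and when that route disappears the common /8 supernet catches the same destination over the surviving circuit. The next-hop labels are simplified; in practice both sites advertise the supernet and normal BGP selection picks the remaining path.

```python
import ipaddress

def lookup(route_table, destination):
    """Return the next hop of the most specific route that covers the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in route_table.items()
               if dest in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

routes = {
    "10.10.0.0/16": "ER1-EastUS",   # EastUS-specific on-premises prefix
    "10.70.0.0/16": "ER2-WestUS",   # WestUS-specific on-premises prefix
    "10.0.0.0/8":   "ER2-WestUS",   # common supernet, acting as the backup path
}

print(lookup(routes, "10.10.1.4"))   # ER1-EastUS: the /16 wins on prefix length
del routes["10.10.0.0/16"]           # simulate losing the ER1-EastUS circuit
print(lookup(routes, "10.10.1.4"))   # ER2-WestUS: traffic falls back to the /8
```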
Solution 6: Prioritize Network Redundancy for One Region Over Another
If you're operating under budget constraints and can prioritize one region (such as hosting critical workloads in a single location) and want to continue using your existing ExpressRoute setup, this solution could be an ideal fit. In this design, assume AVS in EastUS (AVS-EUS) hosts the critical workloads. To ensure high availability, AVS-ER1 is configured with Global Reach connections to both the local ExpressRoute circuit (ER1-EastUS) and the remote circuit (ER2-WestUS). Make sure to prepend the on-premises prefixes advertised to ER2-WestUS using a public ASN to ensure optimal routing (no ECMP) from AVS-EUS over both circuits (ER1-EastUS and ER2-WestUS).

On the other hand, AVS in WestUS (AVS-WUS) is connected via Global Reach only to its local region ExpressRoute circuit (ER2-WestUS). If that circuit becomes unavailable, you can establish an on-demand Global Reach connection to ER1-EastUS, either manually or through automation (e.g., a triggered script). This approach introduces temporary downtime until the Global Reach link is established.

You might be thinking: why not set up Global Reach between the AVS-WUS circuit and the remote region circuits (like connecting AVS-ER2 to ER1-EastUS), just like we did for AVS-EUS? Because it would lead to suboptimal routing. Due to AS path prepending on ER2-WestUS, if both ER1-EastUS and ER2-WestUS are linked to AVS-ER2, traffic would favor the remote ER1-EastUS circuit since it presents a shorter AS path. As a result, traffic would bypass the local ER2-WestUS circuit, causing inefficient routing. That is why, for AVS-WUS, it's better to use on-demand Global Reach to ER1-EastUS as a backup path, enabled manually or via automation, only when ER2-WestUS becomes unavailable.

Note: VNets will fail over via the local AVS circuit. For example, Hub-EUS will route to on-premises through AVS-ER1 and ER2-WestUS via Global Reach Secondary (purple line).

Solution Insights
- Cost-effective.
- Workloads hosted in AVS within the non-critical region will experience downtime if the local region ExpressRoute circuit becomes unavailable, until the on-demand Global Reach connection is established.

Conclusion
Each solution has its own advantages and considerations, such as cost-effectiveness, ease of implementation, and increased resiliency. By carefully planning and implementing these solutions, organizations can ensure operational continuity and optimal traffic routing in multi-region deployments.

Azure ExpressRoute Direct: A Comprehensive Overview
What is ExpressRoute?
Azure ExpressRoute allows you to extend your on-premises network into the Microsoft cloud over a private connection made possible through a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services such as Microsoft Azure and Microsoft 365. ExpressRoute allows you to create a connection between your on-premises network and the Microsoft cloud in four different ways: CloudExchange Colocation, Point-to-point Ethernet Connection, Any-to-any (IPVPN) Connection, and ExpressRoute Direct. ExpressRoute Direct gives you the ability to connect directly into the Microsoft global network at peering locations strategically distributed around the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity that supports active-active connectivity at scale.

Why ExpressRoute Direct Is Becoming the Preferred Choice for Customers

ExpressRoute Direct with ExpressRoute Local – Free Egress: ExpressRoute Direct includes ExpressRoute Local, which allows private connectivity to Azure services within the same metro or peering location. This setup is particularly cost-effective because egress (outbound) data transfer is free, regardless of whether you're on a metered or unlimited data plan. By avoiding Microsoft's global backbone, ExpressRoute Local offers high-speed, low-latency connections for regionally co-located workloads without incurring additional data transfer charges.

Dual Port Architecture: Both ExpressRoute Direct and the service provider model feature a dual-port architecture, with two physical fiber pairs connected to separate Microsoft router ports and configured in an active/active BGP setup that distributes traffic across both links simultaneously for redundancy and improved throughput. What sets Microsoft apart is making this level of resiliency standard, not optional. Forward-thinking customers in regions like Sydney take it even further by deploying ExpressRoute Direct across multiple colocation facilities, for example placing one port pair in Equinix SY2 and another in NextDC S1, creating four connections across two geographically separate sites. This design protects against facility-level outages from power failures, natural disasters, or accidental infrastructure damage, ensuring business continuity for organizations where downtime is simply not an option.

When Geography Limits Your Options: Not every region offers facility diversity. For example, New Zealand has only one ExpressRoute peering location, so businesses needing geographic redundancy must connect to Sydney, incurring Auckland-to-Sydney link costs but gaining critical diversity to mitigate outages. While ExpressRoute's dual ports provide active/active redundancy, both are on the same Microsoft edge, so true disaster recovery requires using Sydney's edge. ExpressRoute Direct scales from basic dual-port setups to multi-facility deployments and offers another advantage: free data transfer within the same geopolitical region. Once traffic enters Microsoft's network, New Zealand customers can move data between Azure services across the trans-Tasman link without per-GB fees, with Microsoft absorbing those costs.

Premium SKU: Global Reach: Azure ExpressRoute Direct with the Premium SKU enables Global Reach, allowing private connectivity between your on-premises networks across different geographic regions through Microsoft's global backbone.
This means you can link ExpressRoute circuits in different countries or continents, facilitating secure and high-performance data exchange between global offices or data centers. The Premium SKU extends the capabilities of ExpressRoute Direct by supporting cross-region connectivity, increased route limits, and access to more Azure regions, making it ideal for multinational enterprises with distributed infrastructure.

MACsec: Defense in Depth and Enterprise Security: ExpressRoute Direct uniquely supports MACsec (IEEE 802.1AE) encryption at the data-link layer, allowing your router and Microsoft's router to establish encrypted communication even within the colocation facility. This optional feature provides additional security for compliance-sensitive workloads in banking or government environments.

High-Performance Data Transfer for the Enterprise: Azure ExpressRoute Direct enables ultra-fast and secure data transfer between on-premises infrastructure and Azure by offering dedicated bandwidth of 10 to 100 Gbps. This high-speed connectivity is ideal for large-scale data movement scenarios such as AI workloads, backup, and disaster recovery. It ensures consistent performance, low latency, and enhanced reliability, making it well-suited for hybrid and multicloud environments that require frequent or time-sensitive data synchronization.

FastPath Support: Azure ExpressRoute Direct now supports FastPath for Private Endpoints and Private Link, enabling low-latency, high-throughput connections by bypassing the virtual network gateway. This feature is available only with ExpressRoute Direct circuits (10 Gbps or 100 Gbps) and is in limited general availability. While a gateway is still needed for route exchange, traffic flows directly once FastPath is enabled.

ExpressRoute Direct Setup Workflow
Before provisioning ExpressRoute Direct resources, proper planning is essential. Key considerations for connectivity include understanding the two connectivity patterns available for ExpressRoute Direct from the customer edge to the Microsoft Enterprise Edge (MSEE).

Option 1: Colocation of Customer Equipment: This is a common pattern where the customer racks their network device (edge router) in the same third-party data center facility that houses Microsoft's networking gear (e.g., Equinix or NextDC). They install their router or firewall there and then order a short cross-connect from their cage to Microsoft's cage in that facility. The cross-connect is simply a fiber cable run through the facility's patch panel connecting the two parties. This direct colocation approach has the advantage of a single, highly efficient physical link (no intermediate hops) between the customer and Microsoft, completing the layer-1 connectivity in one step.

Option 2: Using a Carrier/Exchange Provider: If the customer prefers not to move hardware into a new facility (due to cost or complexity), they can leverage a provider that already has a presence in the relevant colocation facility. In this case, the customer connects from their data center to the provider's network, and the provider extends connectivity into the Microsoft peering location. For instance, the customer could contract with Megaport or a local telco to carry traffic from their on-premises location into Megaport's equipment, and Megaport in turn handles the cross-connection to Microsoft in the target facility. In one such case, the customer had already set up connections to Megaport in their data center.
Using an exchange can simplify logistics since the provider arranges the cross-connect and often provides an LOA on the customer's behalf. It may also be more cost-effective where the customer's location is far from any Microsoft peering site. Many enterprises find that placing equipment in a well-connected colocation facility works best for their needs. Banks and large organizations have successfully taken this approach, such as placing routers in Equinix Sydney or NextDC Sydney to establish a direct fiber link to Azure. However, we understand that not every organization wants the capital expense or complexity of managing physical equipment in a new location. For those situations, using a cloud exchange like Megaport offers a practical alternative that still delivers the dedicated connectivity you're looking for, while letting someone else handle the infrastructure management. Once the decision on the connectivity pattern is made, the next step is to provision ExpressRoute Direct ports and establish the physical link.

Step 1: Provisioning ExpressRoute Direct Ports
Through the Azure portal (or CLI), the customer creates an ExpressRoute Direct resource. The customer must select an appropriate peering location, which corresponds to the colocation facility housing Azure's routers. For example, the customer would select the specific facility (such as "Vocus Auckland" or "Equinix Sydney SY2") where they intend to connect. The customer also chooses the port bandwidth (either 10 Gbps or 100 Gbps) and the encapsulation type (Dot1Q or QinQ) during this setup. Azure then allocates two ports on two separate Microsoft devices in that location – essentially giving the customer a primary and secondary interface for redundancy, removing a single point of failure affecting their connectivity.

Critical considerations to keep in mind during this step:
- Encapsulation: When configuring ExpressRoute Direct ports, the customer must choose an encapsulation method. Dot1Q (802.1Q) uses a single VLAN tag for the circuit, whereas Q-in-Q (802.1ad) uses stacked VLAN tags (an outer S-Tag and an inner C-Tag). Q-in-Q allows multiple circuits on one physical port with overlapping customer VLAN IDs because Azure assigns a unique outer tag per circuit (making it ideal if the customer needs several ExpressRoute circuits on the same port). Dot1Q, by contrast, requires each VLAN ID to be unique across all circuits on the port, and is often used if the equipment doesn't support Q-in-Q. (Most modern deployments prefer Q-in-Q for flexibility.)
- Capacity Planning: This offering allows customers to overprovision and utilize 20 Gbps of capacity. Design for 10 Gbps with redundancy, not 20 Gbps total capacity. During Microsoft's monthly maintenance windows, one port may go offline, and your network must handle this seamlessly.

Step 2: Generate the Letter of Authorization (LOA)
After the ExpressRoute Direct resource is created, Microsoft generates a Letter of Authorization. The LOA is a document (often a PDF) that authorizes the data center operator to connect a specific Microsoft port to the designated port. It includes details like the facility name, patch panel identifier, and port numbers on Microsoft's side. If co-locating your own gear, you will also obtain a corresponding LOA from the facility for your port (or simply indicate your port details on the cross-connect order form). If a provider like Megaport is involved, that provider will generate an LOA for their port as well.
Two LOAs are typically needed – one for Microsoft's ports and one for the other party's ports – which are then submitted to the facility to execute the cross-connect.

Step 3: Complete the Cross-Connect with the Data Center Provider
Using the LOAs, the data center's technicians will perform the cross-connection in the meet-me room. At this point, the physical fiber link is established between the Microsoft router and the customer (or provider) equipment. The link goes through a patch panel in the MMR (meet-me room) rather than a direct cable between cages, for security and manageability. After patching, the circuit is in place but typically kept "administratively down" until ready.

Critical considerations to keep in mind during this step:
- When port allocation conflicts occur, engage Microsoft Support rather than recreating resources. They coordinate with colocation providers to resolve conflicts or issue new LOAs.

Step 4: Change the Admin Status of Each Link
Once the cross-connect is physically completed, you can head into Azure's portal and flip the Admin State of each ExpressRoute Direct link to "Enabled." This action lights up the optical interface on Microsoft's side and starts your billing meter running, so you'll want to make sure everything is working properly first. The great thing is that Azure gives you visibility into the health of your fiber connection through optical power metrics. You can check the receive light levels right in the portal; a healthy connection should show power readings somewhere between -1 dBm and -9 dBm, which indicates a strong fiber signal. If you're seeing readings outside this range, or worse, no light at all, that's a red flag pointing to a potential issue like a mis-patch or faulty fiber connector. There was a real case where a bad fiber connector was caught because the light levels were too low, and the facility had to come back and re-patch the connection. So, this optical power check is really your first line of defence; once you see good light levels within the acceptable range, you know your physical layer is solid and you're ready to move on to the next steps.

Critical considerations to keep in mind during this step:
- Proactive Monitoring: Set up alerts for BGP session failures and optical power thresholds. Link failures might not immediately impact users but require quick restoration to maintain full redundancy.

At this stage, you've successfully navigated the physical infrastructure challenge: your ExpressRoute Direct port pair is provisioned, fiber cross-connects are in place, and those critical optical power levels are showing healthy readings. Essentially, a private physical highway directly connecting your network edge to Microsoft's backbone infrastructure has been built.

Step 5: Create ExpressRoute Circuits
ExpressRoute circuits represent the logical layer that transforms your physical ExpressRoute Direct ports into functional network connections. Through the Azure portal, organizations create circuit resources linked to their ExpressRoute Direct infrastructure, specifying bandwidth requirements and selecting the appropriate SKU (Local, Standard, or Premium) based on connectivity needs. A key advantage is the ability to provision multiple circuits on the same physical port pair, provided the aggregate bandwidth stays within physical limits. For example, an organization with 10 Gbps ExpressRoute Direct might run a 1 Gbps non-production circuit alongside a 5 Gbps production circuit on the same infrastructure.
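As a rough planning aid for this step, the short Python sketch below (not an Azure API call; the 2x over-provisioning factor simply reflects the 10 Gbps port / 20 Gbps subscribed-bandwidth example mentioned earlier) checks whether a set of logical circuits still fits on a port pair, and whether expected peak traffic would survive the loss of one link during maintenance.

```python
PORT_PAIR_GBPS = 10        # subscribed ExpressRoute Direct port bandwidth
OVERPROVISION_FACTOR = 2   # logical circuits may total up to 2x the port speed

def can_add_circuit(existing_gbps, new_gbps,
                    port_gbps=PORT_PAIR_GBPS, factor=OVERPROVISION_FACTOR):
    """Check whether adding a circuit keeps the port pair within the over-provisioning limit."""
    return sum(existing_gbps) + new_gbps <= port_gbps * factor

circuits = [1, 5]                      # the 1 Gbps non-production + 5 Gbps production example
print(can_add_circuit(circuits, 10))   # True: 16 Gbps total is within the 20 Gbps limit
print(can_add_circuit(circuits, 20))   # False: 26 Gbps exceeds the 2x limit

# Resiliency check from the Capacity Planning note above: peak traffic should
# still fit on a single link if one port is down during monthly maintenance.
peak_traffic_gbps = 8
print(peak_traffic_gbps <= PORT_PAIR_GBPS)   # True: the design survives losing one link
```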
Azure handles the technical complexity through automatic VLAN management.

Step 6: Establish Peering
Once your ExpressRoute circuit is created and VLAN connectivity is established, the next crucial step involves setting up BGP (Border Gateway Protocol) sessions between your network and Microsoft's infrastructure. ExpressRoute supports two primary BGP peering types: Private Peering for accessing Azure Virtual Networks and Microsoft Peering for reaching Microsoft SaaS services like Office 365 and Azure PaaS offerings. For most enterprise scenarios connecting data centers to Azure workloads, Private Peering becomes the focal point. Azure provides specific BGP IP addresses for your circuit configuration, defining /30 subnets for both primary and secondary link peering, which you'll configure on your edge router to exchange routing information. The typical flow involves your organization advertising on-premises network prefixes while Azure advertises VNet prefixes through these BGP sessions, creating dynamic route discovery between your environments. Importantly, both primary and secondary links maintain active BGP sessions, ensuring that if one connection fails, the secondary BGP session seamlessly maintains connectivity and keeps your network resilient against single points of failure.

Step 7: Routing and Testing
Once BGP sessions are established, your ExpressRoute circuit becomes fully operational, seamlessly extending your on-premises network into Azure virtual networks. Connectivity testing with ping, traceroute, and application traffic confirms that your on-premises servers can now communicate directly with Azure VMs through the private ExpressRoute path, bypassing the public internet entirely. The traffic remains completely isolated to your circuit via VLAN tags, ensuring no intermingling with other tenants while delivering the low latency and predictable performance that only dedicated connectivity can provide. At the end of this stage, the customer's data center is linked to Azure at layer 3 via a private, resilient connection. They can access Azure resources as if they were on the same LAN extension, with low latency and high throughput. All that remains is to connect this circuit to the relevant Azure virtual networks (via an ExpressRoute Gateway) and verify end-to-end application traffic.

Step-by-step instructions are available below:
Configure Azure ExpressRoute Direct using the Azure portal | Microsoft Learn
Azure ExpressRoute: Configure ExpressRoute Direct | Microsoft Learn
Azure ExpressRoute: Configure ExpressRoute Direct: CLI | Microsoft Learn

AZ-500: Microsoft Azure Security Technologies Study Guide
The AZ-500 certification provides professionals with the skills and knowledge needed to secure Azure infrastructure, services, and data. The exam covers identity and access management, data protection, platform security, and governance in Azure. Learners can prepare for the exam with Microsoft's self-paced curriculum, instructor-led course, and documentation. The certification measures the learner's knowledge of managing, monitoring, and implementing security for resources in Azure, multi-cloud, and hybrid environments. Azure Firewall, Key Vault, and Azure Active Directory are some of the topics covered in the exam.

Inter-Hub Connectivity Using Azure Route Server
By Mays_Algebary shruthi_nair

As your Azure footprint grows with a hub-and-spoke topology, managing User-Defined Routes (UDRs) for inter-hub connectivity can quickly become complex and error-prone. In this article, we'll explore how Azure Route Server (ARS) can help streamline inter-hub routing by dynamically learning and advertising routes between hubs, reducing manual overhead and improving scalability.

Baseline Architecture
The baseline architecture includes two Hub VNets, each peered with their respective local spoke VNets as well as with the other Hub VNet for inter-hub connectivity. Both hubs are connected to local and remote ExpressRoute circuits in a bowtie configuration to ensure high availability and redundancy, with Weight used to prefer the local ExpressRoute circuit over the remote one. To maintain predictable routing behavior, the VNet-to-VNet configuration on the ExpressRoute Gateway should be disabled.

Note: Adding ARS to an existing Hub with a Virtual Network Gateway will cause downtime that is expected to last about 10 minutes.

Scenario 1: ARS and NVA Coexist in the Hub
Option A: Full Traffic Inspection
In this scenario, ARS is deployed in each Hub VNet, alongside the Network Virtual Appliances (NVAs). NVA1 in Region1 establishes BGP peering with both the local ARS (ARS1) and the remote ARS (ARS2). Similarly, NVA2 in Region2 peers with both ARS2 (local) and ARS1 (remote). Let's break down what each BGP peering relationship accomplishes. For clarity, we'll focus on Region1, though the same logic applies to Region2:

NVA1 Peering with Local ARS1
Through BGP peering with ARS1, NVA1 dynamically learns the prefixes of Spoke1 and Spoke2 at the OS level, eliminating the need to manually configure these routes. The same applies for NVA2 learning Spoke3 and Spoke4 prefixes via its BGP peering with ARS2.

NVA1 Peering with Remote ARS2
When NVA1 peers with ARS2, the Spoke1 and Spoke2 prefixes are propagated to ARS2. ARS2 then injects these prefixes into NVA2 at both the NIC level, with NVA1 as the next hop, and at the OS level. This mechanism removes the need for UDRs on the NVA subnets to enable inter-hub routing. Additionally, ARS2 advertises the Spoke1 and Spoke2 prefixes to both ExpressRoute circuits (EXR2 and EXR1, due to the bowtie configuration) via GW2, making them reachable from on-premises through either EXR1 or EXR2.

👉Important: To ensure that ARS2 accepts and propagates Spoke1/Spoke2 prefixes received via NVA1, AS-Override must be enabled. Without AS-Override, BGP loop prevention will block these routes at ARS2, since both ARS1 and ARS2 use the default ASN 65515, and ARS2 will consider the route as already originated locally. The same principle applies in reverse for Spoke3 and Spoke4 prefixes being advertised from NVA2 to ARS1.

Traffic Flow
Inter-Hub Traffic: Spoke VNets are configured with UDRs that contain only a default route (0.0.0.0/0) pointing to the local NVA as the next hop. Additionally, the "Propagate Gateway Routes" setting should be set to False to ensure all traffic, whether East-West (intra-hub/inter-hub) or North-South (to/from the internet), is forced through the local NVA for inspection. Local NVAs will have the next hop to the other region's spokes injected at the NIC level by the local ARS, pointing to the other region's NVA; for example, NVA2 will have the next hop for Spoke1 and Spoke2 set to NVA1 (10.0.1.4), and vice versa.

Why are UDRs still needed on spokes if ARS handles dynamic routing?
Even with ARS in place, UDRs are required to maintain control of the next hop for traffic inspection. For instance, if Spoke1 and Spoke2 do not have UDRs, they will learn the remote spoke prefixes (e.g., Spoke3/Spoke4) injected via ARS1, which received them from NVA2. This results in Spoke1/Spoke2 attempting to route traffic directly to NVA2, a path that is invalid, since the spokes don't have a path to NVA2. The UDR ensures traffic correctly routes through NVA1 instead.

On-Premises Traffic: To explain the on-premises traffic flow, we'll break it down into two directions: Azure to on-premises, and on-premises to Azure.

Azure to On-Premises Traffic Flow: As previously noted, Spokes send all traffic, including traffic to on-premises, via NVA1 due to the default route in the UDR. NVA1 then routes traffic to the local ExpressRoute circuit, using Weight to prefer the local path over the remote.

Note: While NVA1 learns on-premises prefixes from both local and remote ARSs at the OS level, this doesn't affect routing decisions. The actual NIC-level route injection determines the next hop, ensuring traffic is sent via the correct path, even if the OS selects a different "best" route internally. The screenshot below from NVA1 shows four next hops to the on-premises network 10.2.0.0/16. These include the local ARS (ARS1: 10.0.2.5 and 10.0.2.4) and the remote ARS (ARS2: 10.1.2.5 and 10.1.2.4).

On-Premises to Azure Traffic Flow: In a bowtie ExpressRoute configuration, Azure VNet prefixes are advertised to on-premises through both local and remote ExpressRoute circuits. Because of this dual advertisement, the on-premises network must ensure optimal path selection when routing traffic to Azure. From the Azure side, to maintain traffic symmetry, add UDRs at the GatewaySubnet (GW1 and GW2) with specific routes to the local Spoke VNets, using the local NVA as the next hop. This ensures return traffic flows back through the same path it entered.

👉How Does the ExpressRoute Edge Router Select the Optimal Path? You might ask: if Spoke prefixes are advertised by both GW1 and GW2, how does the ExpressRoute edge router choose the best path? (For example, the diagram below shows EXR1 learning Region1 prefixes from GW1 and GW2.) Here's how: edge routers (like EXR1) receive the same Spoke prefixes from both gateways. However, these routes have different AS-Path lengths:
- Routes from the local gateway (GW1) have a shorter AS-Path.
- Routes from the remote gateway (GW2) have a longer AS-Path because NVA1's ASN (e.g., 65001) is prepended twice as part of the AS-Override mechanism.
As a result, the edge router (EXR1) will prefer the local path from GW1, ensuring efficient and predictable routing. For example, EXR1 receives Spoke1, Spoke2, and Hub1-VNet prefixes from both GW1 and GW2, but because the path via GW1 has a shorter AS-Path, EXR1 will select that as the best route. (Refer to the diagram below for a visual of the AS-Path difference.)

Final Traffic Flow:

Option-A Insights:
- This design simplifies UDR configuration for inter-hub routing, which is especially useful when dealing with non-contiguous prefixes or operating across multiple hubs.
- For simplicity, we used a single NVA in each Hub VNet while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you'll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).
Final Traffic Flow:

Option-A Insights:

This design simplifies UDR configuration for inter-hub routing, which is especially useful when dealing with non-contiguous prefixes or operating across multiple hubs.

For simplicity, we used a single NVA in each Hub VNet while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you'll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).

When on-premises traffic inspection is required, the UDR setup in the GatewaySubnet becomes more complex as the number of Spokes increases. Additionally, each route table is currently limited to 400 UDR entries.

As your Azure network scales, keep in mind that Azure Route Server supports a maximum of 8 BGP peers per instance (at the time of writing). This limit can affect architectures involving multiple NVAs or hubs.

Option B: Bypass On-Premises Inspection

If on-premises traffic inspection is not required, the NVAs can advertise a supernet prefix summarizing the local Spoke VNets to the remote ARS. This approach provides granular control over which traffic is routed through the NVA and eliminates the need for BGP peering between the local NVA and the local ARS. All other aspects of the architecture remain the same as described in Option A.

For example, NVA2 can advertise the supernet 192.168.2.0/23 (the supernet of Spoke3 and Spoke4) to ARS1. As a result, Spoke1 and Spoke2 will learn this route with NVA2 as the next hop. To ensure proper routing (as discussed earlier) and inter-hub inspection, you need to apply a UDR in Spoke1 and Spoke2 that overrides this exact supernet prefix, redirecting traffic to NVA1 as the next hop. At the same time, traffic destined for on-premises follows the system route through the local ExpressRoute gateway, bypassing NVA1 altogether.

In this setup:
UDRs on the Spokes should have "Propagate Gateway Routes" set to True.
No UDRs are needed in the GatewaySubnet.

👉Can NVA2 Still Advertise Specific Spoke Prefixes?

You might wonder: can NVA2 still advertise the specific prefixes (e.g., Spoke3 and Spoke4) learned from ARS2 to ARS1, instead of a supernet? Yes, this is technically possible, but it requires maintaining BGP peering between NVA2 and ARS2. It also introduces UDR complexity in Spoke1 and Spoke2, as you'd need to manually override each specific prefix. This defeats the purpose of using ARS for simplified route propagation, undermining the efficiency and scalability of the design.

Bypass On-Premises Inspection

Final Traffic Flow:

Option B: Bypass on-premises inspection traffic flow

Option-B Insights:

This approach reduces the number of BGP peerings per ARS. Instead of maintaining two BGP sessions (local NVA and remote NVA) per Hub, you can limit it to just one, preserving capacity within ARS's 8-peer limit for additional inter-hub NVA peerings.

Each NVA should advertise a supernet prefix to the remote ARS. This can be challenging if your Spokes don't use contiguous IP address spaces.
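If you want to sanity-check that a summary route actually covers your spoke address spaces before advertising it, the short Python sketch below uses the standard ipaddress module to collapse the spoke prefixes and compare the result against the intended supernet. The prefixes shown (192.168.2.0/24 and 192.168.3.0/24 summarized as 192.168.2.0/23) are assumed example values matching the scenario above.

import ipaddress

# Assumed spoke address spaces in Region2 and the supernet NVA2 intends to advertise.
spoke_prefixes = [ipaddress.ip_network("192.168.2.0/24"),   # Spoke3
                  ipaddress.ip_network("192.168.3.0/24")]   # Spoke4
advertised_supernet = ipaddress.ip_network("192.168.2.0/23")

# Collapse the spoke prefixes into the minimal set of summary routes.
collapsed = list(ipaddress.collapse_addresses(spoke_prefixes))
print("Collapsed summary:", collapsed)   # -> [IPv4Network('192.168.2.0/23')]

# Verify every spoke prefix is contained in the supernet we plan to advertise.
uncovered = [p for p in spoke_prefixes if not p.subnet_of(advertised_supernet)]
if uncovered:
    print("Warning: supernet does not cover:", uncovered)
else:
    print(f"{advertised_supernet} covers all spoke prefixes; safe to advertise as a summary.")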
Scenario 2: ARS in the Hub and the NVA in a Transit VNet

In Scenario 1, we highlighted that when on-premises inspection is required, managing UDRs at the GatewaySubnet becomes increasingly complex as the number of Spoke VNets grows, because the UDRs must include a specific prefix for each Spoke VNet. In this scenario, we eliminate the need to apply UDRs at the GatewaySubnet altogether.

In this design, the NVA is deployed in a Transit VNet, where:
The Transit-VNet is peered with the local Spoke VNets and with the local Hub-VNet to enable intra-hub and on-premises connectivity.
The Transit-VNet is also peered with the remote Transit VNets (e.g., Transit-VNet1 peered with Transit-VNet2) to handle inter-hub connectivity through the NVAs.
Additionally, the Transit-VNets are peered with the remote Hub-VNets to establish BGP peering with the remote ARS.

At the OS level, each NVA needs static routes for the local Spoke VNet prefixes (either the specific prefixes or a supernet). These routes are later advertised to the ARSs over the BGP peerings, and ARS in turn advertises them to on-premises via ExpressRoute. The NVAs BGP peer with the local ARS as well as with the remote ARS.

To understand the reasoning behind this design, let's take a closer look at the setup in Region1, focusing on how ARS and the NVA are configured to connect to Region2. This will help illustrate both inter-hub and on-premises connectivity. The same concept applies in reverse from Region2 to Region1.

Inter-Hub:

To enable NVA1 in Region1 to learn prefixes from Region2, NVA2 configures static routes at the OS level for Spoke3 and Spoke4 (or their supernet prefix) and advertises them to ARS1 via its remote BGP peering. As a result, these prefixes are received by NVA1 both at the NIC level (injected by ARS1), with NVA2 as the next hop, and at the OS level for proper routing. Spoke1 and Spoke2 have a UDR with a default route pointing to NVA1 as the next hop. For instance, when Spoke1 needs to communicate with Spoke3, the traffic first routes through NVA1, which then forwards it to NVA2 over the VNet peering between the two Transit VNets.

A similar configuration applies in the reverse direction: NVA1 configures static routes at the OS level for Spoke1 and Spoke2 (or their supernet prefix) and advertises them to ARS2 via its remote BGP peering. As a result, these prefixes are received by NVA2 both at the NIC level (injected by ARS2), with NVA1 as the next hop, and at the OS level for proper routing.

Note: At the OS level, NVA1 learns the Spoke3 and Spoke4 prefixes from both the local and remote ARS. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won't affect forwarding behavior. The same applies to NVA2.

On-Premises Traffic: To explain the on-premises traffic flow, we'll break it down into two directions: Azure to on-premises, and on-premises to Azure.

Azure to On-Premises Traffic Flow:

Spokes in Region1 route all traffic through NVA1 via a default route defined in their UDRs. Because of the BGP peering between NVA1 and ARS1, ARS1 advertises Spoke1 and Spoke2 (or their supernet prefix) to on-premises through ExpressRoute (EXR1). Transit-VNet1 (hosting NVA1) is peered with Hub1-VNet with "Use Remote Gateway" enabled. This allows NVA1 to learn on-premises prefixes from the local ExpressRoute gateway (GW1), and traffic to on-premises is routed through the local ExpressRoute circuit (EXR1) due to the higher BGP Weight configuration.

Note: At the OS level, NVA1 learns on-premises prefixes from both the local and remote ARS. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won't affect forwarding behavior. The same applies to NVA2.
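Below is a rough sketch (Python, azure-mgmt-network SDK) of the Transit-VNet1 to Hub1-VNet peering described above: gateway transit is allowed on the hub side, and "Use Remote Gateway" is enabled on the transit side so the NVA can learn on-premises prefixes from GW1. The VNet and resource group names are hypothetical, and the peering flags may need tuning for your environment.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG = "rg-region1"
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

hub_vnet = client.virtual_networks.get(RG, "vnet-hub1")
transit_vnet = client.virtual_networks.get(RG, "vnet-transit1")

# Hub side: allow the transit VNet to use Hub1's ExpressRoute gateway.
client.virtual_network_peerings.begin_create_or_update(
    RG, "vnet-hub1", "hub1-to-transit1",
    {
        "remote_virtual_network": {"id": transit_vnet.id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": True,   # expose GW1 to the peer
        "use_remote_gateways": False,
    },
).result()

# Transit side: "Use Remote Gateway" so NVA1 learns on-premises prefixes via GW1.
client.virtual_network_peerings.begin_create_or_update(
    RG, "vnet-transit1", "transit1-to-hub1",
    {
        "remote_virtual_network": {"id": hub_vnet.id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": False,
        "use_remote_gateways": True,
    },
).result()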
On-Premises to Azure Traffic Flow:

Through its BGP peering with ARS1, NVA1 has ARS1 advertise Spoke1 and Spoke2 (or their supernet prefix) to both the EXR1 and EXR2 circuits (due to the ExpressRoute bowtie setup). Additionally, because of the BGP peering between NVA1 and ARS2, ARS2 also advertises Spoke1 and Spoke2 (or their supernet prefix) to the EXR2 and EXR1 circuits. As a result, the ExpressRoute edge routers in Region1 and Region2 learn the same Spoke prefixes (or their supernet prefix) from both GW1 and GW2, with identical AS-Path lengths, as shown below.

This causes non-optimal inbound routing: traffic from on-premises destined to the Region1 Spokes may first land in Region2's Hub2-VNet before traversing to NVA1 in Region1, while return traffic from Spoke1 and Spoke2 always exits through Hub1-VNet.

To prevent this suboptimal routing, configure NVA1 to prepend the AS path for Spoke1 and Spoke2 (or their supernet prefix) when advertising them to the remote ARS2. Likewise, ensure NVA2 prepends the AS path for Spoke3 and Spoke4 (or their supernet prefix) when advertising to ARS1. This approach maintains optimal routing under normal conditions and during ExpressRoute failover scenarios. The diagram below shows NVA1 prepending the AS path for the Spoke1 and Spoke2 supernet prefix when peering with the remote ARS (ARS2); the same applies to NVA2 when advertising the Spoke3 and Spoke4 prefixes to ARS1.

Final Traffic Flow:

Full Inspection: Traffic flow when the NVA is in a Transit-VNet

Insights:

This solution is ideal when full traffic inspection is required. Unlike Scenario 1 - Option A, it eliminates the need for UDRs in the GatewaySubnet.

When ARS is deployed in a VNet (typically a Hub VNet), that VNet is limited to 500 VNet peerings (at the time of writing). In this design, however, the Spokes peer with the Transit-VNet instead of directly with the ARS VNet, allowing you to scale beyond the 500-peer limit by leveraging Azure Virtual Network Manager (AVNM) or submitting a support request.

Some enterprise customers may encounter the 1,000-route advertisement limit on the ExpressRoute circuit from the ExpressRoute gateway. In traditional Hub-and-Spoke designs, there is no native control over what is advertised to ExpressRoute. With this architecture, the NVAs provide greater control over route advertisement to the circuit.

For simplicity, we used a single NVA per region while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you'll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).

This design does require additional VNet peerings, including:
Between Transit-VNets (inter-region),
Between Transit-VNets and local Spokes, and
Between Transit-VNets and both local and remote Hub-VNets.
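Building on the earlier path-selection sketch, the short Python snippet below illustrates why the prepending matters in this scenario: without it, the edge router sees identical AS-Path lengths from GW1 and GW2 and the inbound entry point becomes unpredictable, while prepending the NVA's ASN on the advertisement toward the remote ARS makes the local path win again. The ASNs are the same illustrative assumptions as before (65515 for the gateways and ARS, 65001 for the NVA).

def as_path_lengths(prepend_count: int) -> dict[str, int]:
    # AS-Path for the Region1 spoke supernet as seen by the edge router EXR1.
    # Via GW1: the prefix was advertised by NVA1 (65001) to the local ARS1.
    via_gw1 = [65515, 65001]
    # Via GW2: NVA1 advertised the same prefix to the remote ARS2, optionally
    # prepending its own ASN 'prepend_count' extra times.
    via_gw2 = [65515, 65001] + [65001] * prepend_count
    return {"GW1": len(via_gw1), "GW2": len(via_gw2)}

print("No prepend: ", as_path_lengths(0))   # {'GW1': 2, 'GW2': 2} -> tie, inbound path unpredictable
print("With prepend:", as_path_lengths(2))  # {'GW1': 2, 'GW2': 4} -> GW1 (local) preferred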
Inspection Patterns in Hub-and-Spoke and vWAN Architectures

By shruthi_nair Mays_Algebary

Inspection plays a vital role in network architecture, and each customer may have unique inspection requirements. This article explores common inspection scenarios in both Hub-and-Spoke and Virtual WAN (vWAN) topologies. We'll walk through design approaches assuming a setup with two Hubs or Virtual Hubs (VHubs) connected to on-premises environments via ExpressRoute. The specific regions of the Hubs or VHubs are not critical, as the same design principles can be applied across regions.

Scenario 1: Hub-and-Spoke Inspection Patterns

In the Hub-and-Spoke scenarios, the baseline architecture assumes two Hub VNets. Each Hub VNet is peered with its local spoke VNets as well as with the other Hub VNet (e.g., Hub1-VNet peered with Hub2-VNet). Additionally, both Hub VNets are connected to both the local and remote ExpressRoute circuits to ensure redundancy.

Note: In the Hub-and-Spoke scenarios, connectivity between virtual networks over the ExpressRoute circuits across Hubs is intentionally disabled. This ensures that inter-Hub traffic uses VNet peering, which provides a more optimized path, rather than traversing the ExpressRoute circuit.

In Scenario 1, we present two implementation approaches: a traditional method and an alternative leveraging Azure Virtual Network Manager (AVNM).

Option 1: Full Inspection

A widely adopted design pattern is to inspect all traffic, both east-west and north-south, to meet security and compliance requirements. This can be implemented using a traditional Hub-and-Spoke topology with VNet Peering and User-Defined Routes (UDRs), or by leveraging AVNM with Connectivity Configurations and centralized UDR management.

In the traditional approach:
VNet Peering is used to connect each spoke to its local Hub, and to establish connectivity between the two Hubs.
UDRs direct traffic to the firewall as the next hop, ensuring inspection before traffic reaches its destination. These UDRs are applied at the Spoke VNets, the Gateway Subnet, and the Firewall Subnet (especially for inter-region scenarios), as shown in the diagram below.

As your environment grows, managing individual UDRs and VNet Peerings manually can become complex. To simplify deployment and ongoing management at scale, you can use AVNM (see the sketch after the table below). With AVNM:
Use the Hub-and-Spoke connectivity configuration to manage routing within a single Hub.
Use the Mesh connectivity configuration to establish inter-Hub connectivity between the two Hubs.
AVNM also enables centralized creation, assignment, and management of UDRs, streamlining network configuration at scale.

Connectivity Inspection Table
Connectivity Scenario      Inspected
On-premises ↔ Azure        ✅
Spoke ↔ Spoke              ✅
Spoke ↔ Internet           ✅
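The sketch below shows roughly how the two AVNM connectivity configurations mentioned above could be created with the azure-mgmt-network Python SDK. It assumes a network manager and network groups already exist and that the SDK exposes connectivity_configurations.create_or_update as in recent versions; all resource IDs and names are hypothetical, and the configurations still need to be deployed (committed) to a region before they take effect.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG = "rg-networking"
NM = "avnm-contoso"  # existing AVNM instance (hypothetical)
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

hub1_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}/providers/"
           "Microsoft.Network/virtualNetworks/vnet-hub1")
spokes_group_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}/providers/"
                   f"Microsoft.Network/networkManagers/{NM}/networkGroups/ng-region1-spokes")
hubs_group_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}/providers/"
                 f"Microsoft.Network/networkManagers/{NM}/networkGroups/ng-hubs")

# Hub-and-Spoke configuration: connect the Region1 spokes to Hub1.
client.connectivity_configurations.create_or_update(
    RG, NM, "cc-region1-hub-and-spoke",
    {
        "connectivity_topology": "HubAndSpoke",
        "hubs": [{"resource_id": hub1_id, "resource_type": "Microsoft.Network/virtualNetworks"}],
        "applies_to_groups": [{
            "network_group_id": spokes_group_id,
            "group_connectivity": "None",  # spokes reach each other via the hub firewall
        }],
    },
)

# Mesh configuration: direct connectivity between the two hub VNets.
client.connectivity_configurations.create_or_update(
    RG, NM, "cc-hubs-mesh",
    {
        "connectivity_topology": "Mesh",
        "applies_to_groups": [{
            "network_group_id": hubs_group_id,
            "group_connectivity": "DirectlyConnected",
        }],
    },
)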
Option 2: Selective Inspection Between Azure VNets

In some scenarios, full traffic inspection is not required or desirable. This may be due to network segmentation based on trust zones, where traffic between trusted VNets may not require inspection. Other reasons include high-volume data replication, latency-sensitive applications, or the need to reduce inspection overhead and cost.

In this design, VNets are grouped into trusted and untrusted zones. Trusted VNets can exist within the same Hub or across different Hubs. To bypass inspection between trusted VNets, you connect them directly using VNet Peering or the AVNM Mesh connectivity topology. Note that UDRs are still used and configured as described in the full inspection model (Option 1). However, when trusted VNets are directly connected, the system routes created by the peering (or Mesh connectivity) are more specific than the default route in the custom UDRs and therefore win by longest-prefix match. As a result, traffic between trusted VNets bypasses the firewall and flows directly, while traffic to or from untrusted zones follows the UDRs and is routed through the firewall for inspection.

Connectivity Inspection Table
Connectivity Scenario            Inspected
On-premises ↔ Azure              ✅
Spoke ↔ Internet                 ✅
Spoke ↔ Spoke (Same Zones)       ❌
Spoke ↔ Spoke (Across Zones)     ✅

Option 3: No Inspection to On-premises

In cases where a firewall at the on-premises or colocation site already inspects traffic from Azure, customers typically aim to avoid double inspection. To support this in the above design, traffic destined for on-premises is not routed through the firewall deployed in Azure. For the UDRs applied to the spoke VNets, ensure that "Propagate Gateway Routes" is set to True, allowing traffic to follow the ExpressRoute path directly without additional inspection in Azure.

Connectivity Inspection Table
Connectivity Scenario    Inspected
On-premises ↔ Azure      ❌
Spoke ↔ Spoke            ✅
Spoke ↔ Internet         ✅

Option 4: Internet Inspection Only

While not generally recommended, some customers choose to inspect only internet-bound traffic and allow private traffic to flow without inspection. In such cases, spoke VNets can be directly connected using VNet Peering or AVNM Mesh connectivity. To ensure on-premises traffic avoids inspection, set "Propagate Gateway Routes" to True in the UDRs applied to the spoke VNets. This allows traffic to follow the ExpressRoute path directly without being inspected in Azure.

Scenario 2: vWAN Inspection Options

Now we will explore inspection options using a vWAN topology. Across all scenarios, the base architecture assumes two Virtual Hubs (VHubs), each connected to its respective local spoke VNets. vWAN provides default connectivity between the two VHubs, and each VHub is also connected to both the local and remote ExpressRoute circuits for redundancy. Note that this discussion focuses on inspection in vWAN using Routing Intent; as a result, bypassing inspection for traffic to on-premises is not supported in this model.

Option 1: Full Inspection

As noted earlier, inspecting all traffic, both east-west and north-south, is a common practice to fulfill compliance and security needs. In this design, enabling Routing Intent provides the capability to inspect both private and internet-bound traffic. Unlike the Hub-and-Spoke topology, this approach does not require any UDR configuration.

Connectivity Inspection Table
Connectivity Scenario    Inspected
On-premises ↔ Azure      ✅
Spoke ↔ Spoke            ✅
Spoke ↔ Internet         ✅

Option 2: Using Different Firewall Flavors for Traffic Inspection

Using different firewall flavors inside the VHub for traffic inspection

Some customers require specific firewalls for different traffic flows, for example, using Azure Firewall for East-West traffic while relying on a third-party firewall for North-South inspection. In vWAN, it's possible to deploy both Azure Firewall and a third-party network virtual appliance (NVA) within the same VHub. However, as of this writing, deploying two different third-party NVAs in the same VHub is not supported. This behavior may change in the future, so it's recommended to monitor the known limitations section for updates. With this design, you can easily control which firewall handles East-West versus North-South traffic using Routing Intent, eliminating the need for UDRs.
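As a rough illustration of that split, the sketch below uses the azure-mgmt-network Python SDK to configure Routing Intent on a Virtual Hub with private (East-West) traffic sent to Azure Firewall and internet-bound traffic sent to a third-party NVA. The resource IDs and names are hypothetical, and the routing_intent operations shown are assumed to match recent SDK versions, so verify them against your environment.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG = "rg-vwan"
VHUB = "vhub-region1"
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

azfw_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}/providers/"
           "Microsoft.Network/azureFirewalls/azfw-vhub-region1")
nva_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}/providers/"
          "Microsoft.Network/networkVirtualAppliances/nva-vhub-region1")

# Routing Intent: East-West (private) via Azure Firewall, North-South (internet) via the third-party NVA.
client.routing_intent.begin_create_or_update(
    RG, VHUB, "hubRoutingIntent",
    {
        "routing_policies": [
            {"name": "PrivateTraffic", "destinations": ["PrivateTraffic"], "next_hop": azfw_id},
            {"name": "PublicTraffic", "destinations": ["Internet"], "next_hop": nva_id},
        ]
    },
).result()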
Using different firewall flavors inside the VHub for traffic inspection

Deploying third-party firewalls in spoke VNets when VHub limitations apply

If the third-party firewall you want to use is not supported within the VHub, or if the managed firewall available in the VHub lacks certain features compared to the version deployable in a regular VNet, you can deploy the third-party firewall in a spoke VNet instead, while using Azure Firewall in the VHub. In this design, the third-party firewall (deployed in a spoke VNet) handles internet-bound traffic, and Azure Firewall (in the VHub) inspects East-West traffic. This setup is achieved by peering the third-party firewall VNet to the VHub, as well as directly peering it with the spoke VNets. These spoke VNets are also connected to the VHub, as illustrated in the diagram below. UDRs are required in the spoke VNets to forward internet-bound traffic to the third-party firewall VNet. East-West traffic routing, however, is handled by the Routing Intent feature, which directs traffic through Azure Firewall without the need for UDRs.

Deploying third-party firewalls in spoke VNets when VHub limitations apply

Note: Although connecting the third-party firewall VNet to the VHub is not required for traffic flow, doing so is recommended for ease of management and on-premises reachability.

Connectivity Inspection Table
Connectivity Scenario    Inspected
On-premises ↔ Azure      ✅ Inspected using Azure Firewall
Spoke ↔ Spoke            ✅ Inspected using Azure Firewall
Spoke ↔ Internet         ✅ Inspected using Third-Party Firewall

Option 3: Selective Inspection Between Azure VNets

Similar to the Hub-and-Spoke topology, there are scenarios where full traffic inspection is not ideal. This may be due to Azure VNets being segmented into trusted and untrusted zones, where inspection is unnecessary between trusted VNets. Other reasons include large data replication between specific VNets or latency-sensitive applications that require minimizing inspection delays and associated costs.

In this design, trusted and untrusted VNets can reside within the same VHub or across different VHubs. Routing Intent remains enabled to inspect traffic between trusted and untrusted VNets, as well as internet-bound traffic. To bypass inspection between trusted VNets, you connect them directly using VNet Peering or AVNM Mesh connectivity. Unlike the Hub-and-Spoke model, this design does not require any UDR configuration: because trusted VNets are directly connected, the specific system routes created by the peering win over the broader routes learned through the VHub by longest-prefix match (illustrated in the sketch below), so traffic between trusted VNets flows directly. Traffic destined for untrusted zones continues to follow Routing Intent and is inspected accordingly.

Connectivity Inspection Table
Connectivity Scenario            Inspected
On-premises ↔ Azure              ✅
Spoke ↔ Internet                 ✅
Spoke ↔ Spoke (Same Zones)       ❌
Spoke ↔ Spoke (Across Zones)     ✅
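The following self-contained Python sketch illustrates the route-selection behavior that both selective-inspection designs rely on (Hub-and-Spoke Option 2 and vWAN Option 3): the specific route created by a direct peering between trusted VNets is a longer prefix match than the broad route toward the firewall, so it wins. The prefixes are illustrative assumptions, and Azure's real route evaluation considers more than what is modeled here.

import ipaddress

def pick_route(destination: str, routes: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the route whose prefix is the longest match for the destination."""
    dest = ipaddress.ip_address(destination)
    candidates = [(ipaddress.ip_network(prefix), next_hop) for prefix, next_hop in routes]
    matching = [(net, nh) for net, nh in candidates if dest in net]
    net, nh = max(matching, key=lambda item: item[0].prefixlen)
    return str(net), nh

# Effective routes on a trusted spoke NIC (illustrative prefixes):
routes = [
    ("0.0.0.0/0", "Firewall (UDR / Routing Intent)"),          # broad route toward inspection
    ("10.20.0.0/16", "VNet peering (trusted spoke, direct)"),  # system route from the direct peering
]

print(pick_route("10.20.1.5", routes))  # ('10.20.0.0/16', 'VNet peering (trusted spoke, direct)')
print(pick_route("10.99.1.5", routes))  # ('0.0.0.0/0', 'Firewall (UDR / Routing Intent)')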
Option 4: Internet Inspection Only

While not generally recommended, some customers choose to inspect only internet-bound traffic and bypass inspection of private traffic. In this design, you enable only the Internet inspection option within Routing Intent, so private traffic bypasses the firewall entirely and the VHub handles both intra- and inter-VHub routing directly.

Internet Inspection Only

Connectivity Inspection Table
Connectivity Scenario    Inspected
On-premises ↔ Azure      ❌
Spoke ↔ Internet         ✅
Spoke ↔ Spoke            ❌

Network Redundancy from On-Premises to Azure VMware and VNETs in a Single-Region Design

By shruthi_nair Mays_Algebary

Establishing redundant network connectivity is vital to ensuring the availability, reliability, and performance of workloads operating in hybrid and cloud environments. Proper planning and implementation of network redundancy are key to achieving high availability and sustaining operational continuity. This guide presents common architectural patterns for building redundant connectivity between on-premises datacenters, Azure Virtual Networks (VNets), and Azure VMware Solution (AVS) within a single-region deployment. AVS allows organizations to run VMware-based workloads directly on Azure infrastructure, offering a streamlined path for migrating existing VMware environments to the cloud without the need for significant re-architecture or modification.

Connectivity Between AVS, On-Premises, and Virtual Networks

The diagram below illustrates a common network design pattern using either a Hub-and-Spoke or Virtual WAN (vWAN) topology, deployed within a single Azure region. ExpressRoute is used to establish connectivity between on-premises environments and VNets. The same ExpressRoute circuit is extended to connect AVS to the on-premises infrastructure through ExpressRoute Global Reach.

Consideration: This design presents a single point of failure. If the ExpressRoute circuit (ER1-EastUS) experiences an outage, connectivity between the on-premises environment, VNets, and AVS will be disrupted. Let's examine some solutions for establishing redundant connectivity in case ER1-EastUS experiences an outage.

Solution 1: Network Redundancy via VPN

In this solution, one or more VPN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the VPN provides an alternative connectivity path from the on-premises environment to VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying Azure Route Server (ARS) in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the VPN gateway (see the sketch after this section). In a vWAN topology, ARS is not required, as the vHub's built-in routing service automatically provides transitive routing between the VPN gateway and ExpressRoute.

Note: In a vWAN topology, to ensure optimal route convergence when failing back to ExpressRoute, you should prepend the prefixes advertised from on-premises over the VPN. Without route prepending, VNets may continue to use the VPN as the primary path to on-premises. If prepending is not an option, you can trigger the failover manually by bouncing the VPN tunnel.

Solution Insights:
Cost-effective and straightforward to deploy.
Latency: The VPN tunnel introduces additional latency due to its reliance on the public internet and the overhead associated with encryption.
Bandwidth Considerations: Multiple VPN tunnels might be needed to achieve bandwidth comparable to a high-capacity ExpressRoute circuit (e.g., over 10 Gbps). For details on VPN gateway SKUs and tunnel throughput, refer to this link.
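For the Hub-and-Spoke variant of Solution 1, ExpressRoute-to-VPN transit through Azure Route Server only works when branch-to-branch traffic is allowed on the Route Server. The sketch below shows roughly how that flag could be set with the azure-mgmt-network Python SDK, assuming the Route Server is surfaced as a virtualHubs resource with an allow_branch_to_branch_traffic property (as in recent SDK versions); the resource names are hypothetical.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG = "rg-hub-eastus"
ROUTE_SERVER = "ars-hub-eastus"  # existing Azure Route Server (hypothetical)

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the Route Server (modeled as a virtual hub) and enable branch-to-branch,
# so routes are exchanged between the ExpressRoute and VPN gateways.
ars = client.virtual_hubs.get(RG, ROUTE_SERVER)
ars.allow_branch_to_branch_traffic = True
client.virtual_hubs.begin_create_or_update(RG, ROUTE_SERVER, ars).result()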
Solution 2: Network Redundancy via SD-WAN

In this solution, SD-WAN tunnels are deployed as a backup to ExpressRoute. If ExpressRoute becomes unavailable, the SD-WAN provides an alternative connectivity path from the on-premises environment to VNets and AVS. In a Hub-and-Spoke topology, a backup path to and from AVS can be established by deploying ARS in the hub VNet; ARS enables seamless transit routing between ExpressRoute and the SD-WAN appliance. In a vWAN topology, ARS is not required, as the vHub's built-in routing service automatically provides transitive routing between the SD-WAN and ExpressRoute.

Note: In a vWAN topology, to ensure optimal convergence from the VNets when failing back to ExpressRoute, prepend the prefixes advertised from on-premises over the SD-WAN so that their AS path is longer than that of the ExpressRoute-learned routes. Without prepending, the VNets will continue to use the SD-WAN path to on-premises as the primary path. In a vWAN design, the SD-WAN can be deployed either within the vHub or in a spoke VNet connected to the vHub; the same principle applies in both cases.

Solution Insights:
If you already have an SD-WAN in place, no additional deployment is needed.
Bandwidth Considerations: Vendor specific.
Management Considerations: Introduces a third-party dependency and the need to manage an HA deployment, except for SD-WAN SaaS solutions.

Solution 3: Network Redundancy via ExpressRoute in Different Peering Locations

Deploy an additional ExpressRoute circuit at a different peering location and enable Global Reach between ER2-peeringlocation2 and AVS-ER. To use this circuit as a backup path, prepend the on-premises prefixes on the second circuit; otherwise, AVS or the VNets will perform Equal-Cost Multi-Path (ECMP) routing across both circuits to on-premises.

Note: Use a public ASN to prepend the on-premises prefixes, because AVS-ER strips private ASNs toward AVS. Refer to this link for more details.

Solution Insights:
Ideal for mission-critical applications, providing predictable throughput and bandwidth for the backup path.
Could be cost-prohibitive depending on the bandwidth of the second circuit.
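A rough sketch of enabling Global Reach between the second circuit and the AVS-managed circuit for Solution 3 is shown below, using the azure-mgmt-network Python SDK. It assumes the express_route_circuit_connections operations of recent SDK versions, along with hypothetical circuit names, peering IDs, an authorization key, and an unused /29 for the connection; in practice the AVS side supplies the authorization key for its managed circuit.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG = "rg-connectivity"
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Private peering of the customer-managed circuit at the second peering location.
er2_peering_id = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}/providers/"
                  "Microsoft.Network/expressRouteCircuits/er2-peeringlocation2/peerings/AzurePrivatePeering")
# Private peering of the AVS-managed circuit (lives in the AVS private cloud's subscription/resource group).
avs_er_peering_id = ("/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/rg-avs/providers/"
                     "Microsoft.Network/expressRouteCircuits/avs-er/peerings/AzurePrivatePeering")

# Global Reach connection from ER2 to the AVS circuit.
client.express_route_circuit_connections.begin_create_or_update(
    RG,
    "er2-peeringlocation2",        # circuit name
    "AzurePrivatePeering",         # peering name
    "globalreach-er2-to-avs",      # connection name
    {
        "express_route_circuit_peering": {"id": er2_peering_id},
        "peer_express_route_circuit_peering": {"id": avs_er_peering_id},
        "address_prefix": "192.168.250.0/29",           # unused /29 for the Global Reach link
        "authorization_key": "<avs-circuit-auth-key>",  # placeholder
    },
).result()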
Solution 4: Network Redundancy via Metro ExpressRoute

Metro ExpressRoute is a newer ExpressRoute configuration that gives you a dual-homed setup with diverse connections to two distinct ExpressRoute peering locations within the same city. This offers enhanced resiliency if one peering location experiences an outage.

Solution Insights:
Higher resiliency: Provides increased reliability with a single circuit.
Limited regional availability: Availability is restricted to specific metro areas.
Cost-effective.

Conclusion: The choice of failover connectivity should be guided by the specific latency and bandwidth requirements of your workloads. Ultimately, achieving high availability and ensuring continuous operations depend on careful planning and effective implementation of network redundancy strategies.

Accelerate designing, troubleshooting & securing your network with Gen-AI powered tools, now GA.

We are thrilled to announce the general availability of Azure Networking skills in Copilot, an extension of Copilot in Azure and Security Copilot designed to enhance the cloud networking experience. Azure Networking Copilot is set to transform how organizations design, operate, and optimize their Azure networks by providing contextualized responses tailored to networking-specific scenarios and grounded in your network topology.