Recently we added new customer-configurable toggles for ExpressRoute Virtual Network Gateways and Virtual WAN Hubs, allowing customers to control the behaviour of routing across their ExpressRoute circuits for resources within Azure. These changes make it easier for customers to correctly use the Microsoft Global Network for connectivity between Virtual Networks, ensuring they obtain the lowest possible latency, highest network bandwidth and most resilient network paths.
Using ExpressRoute Private Peering for Virtual Network connectivity has long been inadvisable. To understand why, let's first acknowledge that Virtual Networks (VNets) are Azure resources deployed within Azure Regions (e.g. East US), whilst Azure ExpressRoute Circuits are resources with a data plane deployed within Peering Locations (e.g. Washington DC). These entities (Regions and Peering Locations) can sometimes be close together and sometimes separated by a large geographical distance, but they are always logically separate locations.
Therefore, if we build an Azure networking topology that utilises ExpressRoute Circuits for connectivity between our Virtual Networks, we create several sub-optimal conditions in our network design, including:

- Added latency, as VNet-to-VNet traffic must hairpin out of the Azure Region, via the MSEE routers in the Peering Location, and back again
- Throughput constrained by the ExpressRoute Gateway SKU, and unnecessary consumption of the circuit's bandwidth
- A dependency on the Peering Location, making the MSEE a single point of failure for VNet-to-VNet traffic
The following checkboxes are now configurable and active across all Azure ExpressRoute Virtual Network Gateways and Azure Virtual WAN Hubs in every Public Azure Region.
Configurable within the Virtual Network Gateway (VNG) configuration blade, two new toggles are available, allowing filtering of prefixes learnt from the ExpressRoute MSEE that originate from either another VNG or a Virtual WAN Hub.
Pre-existing VNG resources will keep the existing behaviour of accepting all network prefix advertisements learnt from the ExpressRoute MSEE, including those originating from other Virtual Networks and Virtual WAN Hubs. For these existing resources, the below checkboxes will show in the Azure Portal in a “checked/ticked” state, representing the continuation of the existing behaviour. Therefore, no changes will be observed for existing VNGs, but customers are able to alter the configuration if they desire.
Going forward, the default for newly created VNGs will be to not accept network prefix advertisements from either remote VNGs or remote Virtual WAN Hubs. I.e. by default, VNet-to-VNet traffic via ExpressRoute will be blocked on all newly created VNGs. For these new resources, the above checkboxes will show in the Azure Portal in an “unchecked/unticked” state, representing the new default behaviour. For these newly deployed VNGs, customers are still able to alter the configuration if they have a reason to utilise the older behaviour.
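If you prefer to script this configuration rather than use the Portal, the toggles surface as properties on the Virtual Network Gateway resource. The sketch below uses the generic az resource commands; the property names (allowRemoteVnetTraffic and allowVirtualWanTraffic) reflect the ARM schema at the time of writing, and the resource group and gateway names are placeholders for your own environment.

```bash
# Inspect the current toggle state on an existing ExpressRoute VNG
az resource show \
  --resource-group rg-hub-eastus \
  --resource-type Microsoft.Network/virtualNetworkGateways \
  --name ergw-eastus \
  --query "properties.{remoteVnetTraffic:allowRemoteVnetTraffic, remoteVwanTraffic:allowVirtualWanTraffic}"

# Align an existing VNG with the new default: drop prefixes learnt from the
# MSEE that originated from remote VNGs or remote Virtual WAN Hubs
az resource update \
  --resource-group rg-hub-eastus \
  --resource-type Microsoft.Network/virtualNetworkGateways \
  --name ergw-eastus \
  --set properties.allowRemoteVnetTraffic=false properties.allowVirtualWanTraffic=false
```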
Configurable within the Virtual WAN hub-level “edit virtual hub” blade, a new toggle is available, allowing filtering of prefixes learnt from the ExpressRoute MSEE that originate from remote VNGs.
Pre-existing Virtual WAN hub resources will keep the existing behaviour of accepting all network prefix advertisements learnt from the ExpressRoute MSEE, including those originating from remote ExpressRoute Virtual Network Gateways. For these existing resources, the below checkbox will show in the Azure Portal in a “checked/ticked” state, representing the continuation of the existing behaviour.
Going forward, the default for newly created Virtual WAN Hubs will be to not accept network prefix advertisements from remote VNGs when advertised over ExpressRoute. I.e. by default, Customer-Managed VNet to Virtual WAN traffic via ExpressRoute will be blocked on all newly created Virtual WAN Hubs. For these new resources, the above checkbox will show in the Azure Portal in an “unchecked/unticked” state, representing the new default behaviour.
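The equivalent setting in Virtual WAN lives on the hub's ExpressRoute gateway, which is a separate child resource (Microsoft.Network/expressRouteGateways). A minimal sketch, assuming the property is named allowNonVirtualWanTraffic (per the ARM schema at the time of writing) and using placeholder resource names:

```bash
# New default on the hub's ExpressRoute gateway: do not accept prefixes
# originated by remote customer-managed VNGs ("non Virtual WAN networks")
az resource update \
  --resource-group rg-vwan \
  --resource-type Microsoft.Network/expressRouteGateways \
  --name ergw-hub1 \
  --set properties.allowNonVirtualWanTraffic=false
```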
Customers are now reminded about this behaviour change, and directed to the location of the new checkbox in Virtual WAN, when using the Azure Portal to connect a new ExpressRoute Circuit to an Azure Virtual WAN Hub, as shown in the screenshot below.
Where customers have multiple Virtual WAN Hubs in the same parent-level Virtual WAN resource, connected to the same ExpressRoute circuit(s), they should utilise Virtual Hub Routing Preference to ensure optimised hub-to-hub routing of Virtual Networks directly connected to the Azure Virtual WAN hubs.
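Virtual Hub Routing Preference can also be set outside of the Portal. Below is a minimal sketch assuming the Azure CLI --hub-routing-preference parameter and placeholder names; AS Path is shown because it typically prefers the direct hub-to-hub path over routes re-advertised via ExpressRoute, but choose the value appropriate to your topology:

```bash
# Set the hub's routing preference (accepted values include
# ExpressRoute, VpnGateway and ASPath)
az network vhub update \
  --resource-group rg-vwan \
  --name hub1 \
  --hub-routing-preference ASPath
```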
In the rare scenario where customers need to connect Virtual WAN Hubs in different parent-level Virtual WAN resources, Virtual Hub Routing Preference is not applicable. Therefore, this traffic will continue to flow via ExpressRoute for Virtual Networks directly connected to the Azure Virtual WAN Hubs.
Please note that there are certain transit scenarios that are blocked on Azure irrespective of the toggle settings discussed in this article. Specifically, when routes are classified as having originated from transitive sources, they are filtered out by both Virtual WAN Hubs with ExpressRoute Gateways, and Customer-Managed Hub/Spoke VNGs if using Azure Route Server with the Branch-to-Branch setting. Examples of transit-originated routes include:

- On-premises prefixes learnt from one ExpressRoute circuit being passed to a second ExpressRoute circuit (ExpressRoute-to-ExpressRoute transit instead requires ExpressRoute Global Reach)
- Branch prefixes learnt from Site-to-Site VPN connections in a remote environment and re-advertised over ExpressRoute
The following diagram shows a common pattern used by many customers, wherein Hub Virtual Networks in multiple regions are connected to common ExpressRoute Circuit(s) for the purpose of resilient connectivity back to On-Premises networks.
The diagram shows the UDR configuration historically required to optimally place VNet-to-VNet (most often inter-region) traffic on the Microsoft Global Network using the most efficient and performant method: Global VNet Peering. This common pattern typically uses Azure Firewall or Network Virtual Appliances (NVA) in each Hub VNet to forward traffic between Spoke VNets in each disparate Hub/Spoke environment. For ease of viewing, the diagram shows only the required UDR configuration in one region; this must be mirrored in both Hubs for end-to-end reachability in both directions.
Notice how the UDR configuration of the AzureFirewallSubnet must contain a route entry for every Spoke VNet in the remote region. This is required to override the BGP-learnt routes from ExpressRoute, which are always advertised at the specificity of the individual CIDR ranges allocated to each VNet Address Space. E.g. if you have 100 VNets in Region A, you need at least 100 route entries in your UDR in Region B. This complexity must be managed or automated, and the configuration is prone to error. Also, when adding Spokes in Region A, there must be a framework in place to update the UDR in Region B accordingly, and vice versa.
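To make the operational burden concrete, the hypothetical sketch below maintains the historical per-spoke UDR entries with the Azure CLI. Every resource name, prefix and the firewall next-hop IP are placeholders; the point is that the list of remote prefixes must be enumerated and kept in sync as Spokes are added or removed:

```bash
# One UDR entry per remote Spoke VNet, all pointing at the local firewall/NVA
REMOTE_SPOKE_PREFIXES=("10.100.1.0/24" "10.100.2.0/24" "10.100.3.0/24")  # ...potentially 100s more

for prefix in "${REMOTE_SPOKE_PREFIXES[@]}"; do
  az network route-table route create \
    --resource-group rg-hub-westus \
    --route-table-name rt-azurefirewallsubnet \
    --name "to-$(echo "$prefix" | tr './' '--')" \
    --address-prefix "$prefix" \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.200.1.4
done
```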
Now let’s update the diagram after modifying the ExpressRoute Virtual Network Gateways to take advantage of the new feature, filtering out prefixes received via ExpressRoute that originate from remote VNGs.
The above diagram shows how the new ability to filter out remote VNG prefixes allows us to greatly simplify our UDR route management process. As the VNG in Region B is blocked from learning the specific CIDR ranges from Region A via ExpressRoute, the UDR entry on the AzureFirewallSubnet used to force inter-region traffic over Global VNet Peering can make use of a summary/aggregate route. The single entry of 10.100.0.0/16, wide enough to encompass all Spoke Virtual Networks in Region A, will be the “winning route” used to reach these remote VNets. Again, the same configuration is required in the opposite direction to ensure traffic symmetry.
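With the filtering in place, the equivalent UDR work collapses to one static entry. Again a sketch with the same placeholder names as the loop above:

```bash
# A single aggregate route now covers every current and future Spoke in Region A
az network route-table route create \
  --resource-group rg-hub-westus \
  --route-table-name rt-azurefirewallsubnet \
  --name to-region-a-spokes \
  --address-prefix 10.100.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.200.1.4
```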
As we can see, this has drastically reduced our UDR configuration from potentially hundreds of routes, which may dynamically change over time, to a single static route entry. Note, this assumes that Region A Spoke VNets only ever receive CIDR ranges allocated from within the specified summary/aggregate route. The value of this approach grows rapidly as we consider customer networks comprising more than two disparate Hub/Spoke environments connected via Global VNet Peering.
There are some scenarios on Azure where customers require connectivity from existing customer-managed Hub/Spoke networks to Virtual Networks connected to Azure Virtual WAN. The most common scenario is during a migration to Azure Virtual WAN as the chosen foundational network building block for a multi-region topology, wherein for a specified period customers may need to utilise ExpressRoute for network connectivity between these environments, often for the duration of the migration. Note, it is always recommended to connect Virtual Networks directly to the Virtual WAN Hub; however, as it is not possible to connect a single Virtual Network to both a Virtual WAN Hub and a regular customer-managed Hub (with VNG), the scenario of using ExpressRoute to facilitate this connectivity is sometimes encountered.
The diagram below shows the required toggles to enable this connectivity: the Virtual WAN Hub setting is set to “allow traffic from non Virtual WAN networks” to allow injection of the existing Hub/Spoke prefixes (10.100.10.0/24, 10.100.20.0/24 etc.), and the existing ExpressRoute VNG is set to “allow traffic from remote Virtual WAN networks” to allow the reverse path.
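In script form, this migration scenario is simply the inverse of the defaults shown earlier: both sides are opted back in to accepting each other's prefixes. The same caveats apply; the property names reflect the ARM schema at the time of writing, and all resource names are placeholders:

```bash
# Allow the existing customer-managed Hub/Spoke prefixes into the Virtual WAN Hub
az resource update \
  --resource-group rg-vwan \
  --resource-type Microsoft.Network/expressRouteGateways \
  --name ergw-hub1 \
  --set properties.allowNonVirtualWanTraffic=true

# Allow the Virtual WAN prefixes back into the existing customer-managed VNG
az resource update \
  --resource-group rg-hub-eastus \
  --resource-type Microsoft.Network/virtualNetworkGateways \
  --name ergw-eastus \
  --set properties.allowVirtualWanTraffic=true
```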
Connectivity between virtual networks over ExpressRoute
Configure a Virtual Network Gateway for ExpressRoute
Create an ExpressRoute association in Virtual WAN