Context
Azure Virtual WAN (vWAN) simplifies large-scale branch connectivity by optimizing routing over the Microsoft global network, connecting branch offices, on-premises data centers, and Azure cloud resources.
SD-WAN is a software-defined approach that can be used to manage the WAN between these same branch offices, data centers, and cloud services.
In addition to Azure's native hybrid connectivity options such as ExpressRoute and VPN, Microsoft has collaborated with SD-WAN partners to offer a managed SD-WAN in vWAN solution. This solution allows for hosting an SD-WAN hub/gateway within the virtual hub and thus extending the SD-WAN overlay into Azure.
When combined, Virtual WAN and SD-WAN can provide a robust, cloud-oriented global network, leveraging the Microsoft backbone for optimized inter-region performance and resiliency.
In this article, we will delve into the managed SD-WAN in vWAN solution, providing insights into the deployment process as well as considerations for throughput and scaling. We will also explore the various underlay options available for managed SD-WAN in vWAN deployments.
Managed SD-WAN in vWAN deployment
Solution overview
Managed SD-WAN in vWAN is a scenario where a pair of SD-WAN NVAs (SD-WAN virtual edges) are automatically deployed in an Azure virtual hub and integrated with the virtual hub router using dynamic routing (BGP).
NVA stands for Network Virtual Appliance and refers to a VM running a third-party OS. For a deep dive on NVAs in a Virtual WAN hub, see About Network Virtual Appliances - Virtual WAN hub - Azure Virtual WAN | Microsoft Learn.
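The BGP integration mentioned above can be sketched as a full mesh of eBGP sessions between the NVA instances and the virtual hub router (which always uses ASN 65515 in Virtual WAN). A minimal model, where the NVA ASN and IP addresses are purely illustrative:

```python
from dataclasses import dataclass

# The virtual hub router always uses ASN 65515 in Azure Virtual WAN.
HUB_ROUTER_ASN = 65515

@dataclass
class BgpPeer:
    name: str
    asn: int
    peer_ip: str

def hub_bgp_sessions(nva_asn: int, nva_ips: list[str],
                     hub_router_ips: list[str]) -> list[tuple[BgpPeer, BgpPeer]]:
    """Model the mesh of eBGP sessions between the SD-WAN NVA instances
    and the virtual hub router instances."""
    sessions = []
    for i, nva_ip in enumerate(nva_ips):
        for j, hub_ip in enumerate(hub_router_ips):
            sessions.append((
                BgpPeer(f"sdwan-nva-{i}", nva_asn, nva_ip),
                BgpPeer(f"hub-router-{j}", HUB_ROUTER_ASN, hub_ip),
            ))
    return sessions

# Hypothetical addressing: 2 NVA instances, 2 hub router instances.
sessions = hub_bgp_sessions(65010, ["10.0.0.4", "10.0.0.5"],
                            ["10.0.0.68", "10.0.0.69"])
print(len(sessions))  # 2 NVAs x 2 hub router instances = 4 eBGP sessions
```

Because the NVA ASN differs from 65515, each session is eBGP, which is what allows the hub router to learn the SD-WAN branch prefixes dynamically.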
This scenario simplifies the deployment and management of the SD-WAN NVAs, as they are seamlessly provisioned in Azure and set up for a simple import into the partner portal (SD-WAN orchestrator).
The focus of this document is on managed SD-WAN in vWAN deployments, with the following partner solutions currently supported: Aruba, Barracuda, Cisco, Fortinet, Versa, and VMware.
The Azure Marketplace offers associated with each of these partners are as follows:
Info on the other options to extend SD-WAN overlays into Azure can be found at the end of this article.
SD-WAN NVA deployment process
As part of the managed SD-WAN in vWAN deployment process, two VM instances are deployed and loaded with the selected SD-WAN OS.
The aggregate bandwidth capacity of the managed SD-WAN in vWAN solution is determined by the number of NVA Scale Units or NVA Infrastructure Units selected. Each NVA Infrastructure Unit provides a bandwidth capacity of 500 Mbps.
The underlying VM type is pre-defined and pre-tested based on the number of NVA Infrastructure Units chosen, with the size of the VMs increasing proportionally to ensure adequate bandwidth capacity.
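The sizing described above reduces to a simple capacity calculation. The 500 Mbps per NVA Infrastructure Unit figure comes from the text; the helper function below is just an illustrative sketch:

```python
MBPS_PER_NVA_INFRA_UNIT = 500  # bandwidth capacity per NVA Infrastructure Unit

def aggregate_bandwidth_mbps(infrastructure_units: int) -> int:
    """Aggregate bandwidth capacity of a managed SD-WAN in vWAN deployment."""
    if infrastructure_units < 1:
        raise ValueError("at least one NVA Infrastructure Unit is required")
    return infrastructure_units * MBPS_PER_NVA_INFRA_UNIT

print(aggregate_bandwidth_mbps(2))   # 1000 Mbps
print(aggregate_bandwidth_mbps(10))  # 5000 Mbps
```

Keep in mind this is the capacity of the underlying VMs; the attainable SD-WAN throughput also depends on the factors discussed in the next section.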
Each VM is set up with one internal NIC and one external NIC. However, for 10+ Infrastructure Units deployments, it is recommended to confirm the specifics with the SD-WAN partner.
The SD-WAN NVA deployment process varies slightly depending on the SD-WAN solution, but generally involves four steps:
- deployment of the NVA instances, triggered from the Azure portal,
- import and update of the SD-WAN virtual edges on the partner portal,
- setup of the routing and the SD-WAN overlay on the partner portal,
- update of the SD-WAN branches with the new SD-WAN hub.
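The first step above is triggered from the Azure portal, but under the hood it creates a Microsoft.Network/networkVirtualAppliances resource in the virtual hub. The sketch below builds an illustrative ARM request body for such a resource; the resource names, region, vendor string, and ASN are hypothetical placeholders, and the exact vendor/version values come from the partner's marketplace offer:

```python
import json

def nva_request_body(vhub_id: str, vendor: str, scale_units: int,
                     version: str, asn: int) -> dict:
    """Build an illustrative ARM request body for a managed SD-WAN NVA.
    The property names follow the networkVirtualAppliances resource schema."""
    return {
        "location": "westeurope",
        "properties": {
            "nvaSku": {
                "vendor": vendor,                  # SD-WAN partner marketplace offer
                "bundledScaleUnit": str(scale_units),
                "marketPlaceVersion": version,
            },
            "virtualHub": {"id": vhub_id},
            "virtualApplianceAsn": asn,            # ASN used by the NVA pair
        },
    }

body = nva_request_body(
    "/subscriptions/.../resourceGroups/rg-vwan/providers/Microsoft.Network/virtualHubs/hub-we",
    vendor="hypothetical-sdwan-partner", scale_units=2, version="latest", asn=65010)
print(json.dumps(body, indent=2))
```

This is only a sketch of the resource shape, not a deployment script; the portal-driven flow remains the supported path for managed SD-WAN in vWAN.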
Example for a VMware SD-WAN managed deployment:
Customers should refer to the partner-specific documentation and guidance for the deployment process.
Throughput considerations and scaling options
NVA Infrastructure Units throughput vs SD-WAN throughput
The throughput of the managed SD-WAN NVAs depends on several factors:
- the throughput available on the underlying VMs, determined by the number of NVA Infrastructure Units deployed as part of the managed SD-WAN in vWAN solution,
- the impact of SD-WAN-specific features and functions (encapsulation/decapsulation, encryption/decryption, deep packet inspection, etc.),
- the support of accelerated networking in the image provided by the SD-WAN partner.
The mapping of vWAN NVA Infrastructure Units to expected throughputs, which may vary depending on the testing conditions (IMIX-400, IMIX-1300…), is typically provided by the SD-WAN partner.
Vertical scaling and horizontal scaling
In a vWAN hub, the most straightforward way to scale the SD-WAN NVAs is through vertical scaling, which entails using larger VMs. This, in turn, translates into increasing the number of NVA Infrastructure Units within the vhub.
Another option for scaling the SD-WAN NVAs in a vWAN hub is through horizontal scaling, which means increasing the number of NVA instances. Depending on the vhub configuration, it may be possible to deploy a second set of SD-WAN NVAs within a vhub.
The two deployments can operate as two separate SD-WAN hubs running in parallel, as illustrated in the diagram below. This configuration can help ensure sufficient bandwidth for a critical on-premises data center, for instance.
Assuming the SD-WAN solution allows it, it might also be possible to merge two deployments into a single SD-WAN cluster, as shown in the diagram below.
Not all SD-WAN solutions may support combining two deployments into a single SD-WAN cluster.
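Using the 500 Mbps per Infrastructure Unit figure from earlier, the two scaling options can be compared with a quick calculation. This is a raw-capacity sketch only; real attainable throughput also depends on the SD-WAN feature set and partner image:

```python
MBPS_PER_UNIT = 500  # bandwidth capacity per NVA Infrastructure Unit

def vertical_capacity_mbps(units: int) -> int:
    """One NVA deployment, scaled up by adding Infrastructure Units."""
    return units * MBPS_PER_UNIT

def horizontal_capacity_mbps(units_per_deployment: list[int]) -> int:
    """Several NVA deployments in the same vhub, e.g. two parallel SD-WAN hubs."""
    return sum(u * MBPS_PER_UNIT for u in units_per_deployment)

# Scaling a single deployment from 4 to 8 units...
print(vertical_capacity_mbps(8))         # 4000 Mbps
# ...versus adding a second 4-unit deployment next to the first one.
print(horizontal_capacity_mbps([4, 4]))  # 4000 Mbps
```

The raw capacity is the same in both cases; the choice between the two options is therefore driven by topology and fault-domain considerations rather than bandwidth alone.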
It is recommended to consult with your SD-WAN partner to verify the feasibility of horizontal scaling scenarios and determine the optimal scaling option for specific requirements. Customers should also seek the guidance of their SD-WAN partner to ensure the expected throughput and performance of their SD-WAN NVAs.
Underlay options for managed SD-WAN in vWAN deployments
Overlay vs underlay
The term SD-WAN overlay refers to a mesh of SD-WAN tunnels that are established over different underlying networks (the underlays).
Underlays are the physical or virtual network connections that carry the SD-WAN tunnels established between SD-WAN devices, located either on-premises or in the cloud.
What are public and private underlays?
Public underlays use the public internet as the transport medium, while private underlays use private connections, such as dedicated P2P, MPLS or ExpressRoute, that offer more reliability, security, and performance.
Depending on their network requirements and on their SD-WAN solution capabilities, customers may want to rely only on the public underlay, or leverage both public and private underlays to achieve the best of both worlds.
From an Azure standpoint, the public underlay is available by default: managed SD-WAN NVAs in vWAN connect to the internet via the public IP addresses assigned to their external interfaces.
Private underlay usage for SD-WAN in vWAN deployments
Most of the managed SD-WAN in vWAN deployments can also use a private underlay.
When supported, access to the private underlay is possible through ExpressRoute connectivity, provided an ExpressRoute circuit has been provisioned. In such cases, the natural way to access it is via the private peering of the ExpressRoute circuit, as depicted in the diagram below:
The SD-WAN partners have different approaches to supporting the use of a private underlay and should provide the necessary documentation to facilitate the implementation.
ExpressRoute Microsoft peering, the bonus underlay?
Finally, it may also be worth considering the option of using the Microsoft peering as an underlay, as described here.
In summary, the idea is to NAT a private underlay behind a public prefix advertised over a Microsoft peering of an ExpressRoute circuit that is landing on a customer MPLS network.
From the perspective of an SD-WAN NVA in Azure, the Microsoft peering represents a public underlay that advertises customer public prefixes from a specific ExpressRoute peering location, which can be accessed through the NVA public transport interface. Private IP addresses that represent SD-WAN tunnel endpoints on a private underlay, such as an MPLS network, can then be hidden (NATted) behind these customer public prefixes, enabling an SD-WAN tunnel to be built on the customer private underlay, yet terminating on the SD-WAN NVA public IP addresses.
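The NAT construct described above can be illustrated with Python's ipaddress module. The prefixes below are documentation ranges standing in for real addresses: a customer-owned public prefix advertised over the Microsoft peering, and private MPLS-side tunnel endpoints hidden behind it via a static 1:1 NAT:

```python
import ipaddress

# Customer public prefix advertised over the ExpressRoute Microsoft peering
# (a documentation range here; in reality, a customer-owned public prefix).
ADVERTISED_PUBLIC_PREFIX = ipaddress.ip_network("203.0.113.0/26")

# Static 1:1 NAT table: private MPLS-side tunnel endpoints hidden behind
# addresses taken from the advertised public prefix.
NAT_TABLE = {
    ipaddress.ip_address("10.20.1.1"): ipaddress.ip_address("203.0.113.10"),
    ipaddress.ip_address("10.20.2.1"): ipaddress.ip_address("203.0.113.11"),
}

def tunnel_source_seen_by_nva(private_endpoint: str) -> str:
    """Address the Azure SD-WAN NVA sees as the remote tunnel endpoint."""
    public_ip = NAT_TABLE[ipaddress.ip_address(private_endpoint)]
    # The NATted address must fall inside the prefix advertised over the
    # Microsoft peering, otherwise return traffic cannot reach it.
    assert public_ip in ADVERTISED_PUBLIC_PREFIX
    return str(public_ip)

print(tunnel_source_seen_by_nva("10.20.1.1"))  # 203.0.113.10
```

From the NVA's point of view the tunnel terminates on a public address, yet the traffic actually rides the customer's private MPLS underlay end to end.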
It is important to carefully evaluate the requirements and impact of using both public and private underlays for managed SD-WAN in vWAN deployments and in the overall network design, and to consult with the SD-WAN partner and Microsoft for recommendations and guidance on these designs.
Conclusion and recap: exploring additional solutions
To sum up, extending an SD-WAN overlay into Azure is now possible and many customers are already taking advantage of it.
Managed SD-WAN in vWAN is only one of the available options. In total, there are three main approaches to extend the reachability of an SD-WAN overlay into Azure, whether in Hub & Spoke or in vWAN:
- As discussed in this article, managed SD-WAN in vWAN offers a managed solution for extending the SD-WAN overlay directly into Azure up to virtual SD-WAN appliances integrated natively inside the virtual hub.
- VM series deployments are another way to stretch the SD-WAN overlay up to virtual SD-WAN appliances, here within a customer-managed VNet. This solution requires manual deployment and leverages BGP endpoints in vWAN, or Azure Route Server (ARS) in Hub & Spoke, for dynamic route exchange. Depending on the complexity of the addressing plan, static routing may also be possible.
GitHub - adstuart/azure-sdwan and About BGP peering with a virtual hub - Azure Virtual WAN | Microsoft Learn
- Finally, the cloud edge colo model uses physical SD-WAN appliances in the cloud edge PoP to terminate the SD-WAN overlay, providing direct connectivity to Azure via ExpressRoute, for instance.
Architecture: Virtual WAN and SD-WAN connectivity - Azure Virtual WAN | Microsoft Learn
For a full list of available options to run SD-WAN in Azure see here.
For SD-WAN in vWAN options, see here.
For SD-WAN in H&S: see here.
For any of these options, and as noted several times in this article, it is crucial to check with your SD-WAN partner for reference designs that complement the high-level patterns discussed.