Azure services and the solutions you deploy into Azure are connected to the Microsoft global wide-area network, also known as the Microsoft Global Network or the Azure backbone. There are a few different ways to connect to an Azure service from a subnet, and your requirements for securing access to these services should dictate which method you choose. There are some common misconceptions around connectivity, and the goal of this article is to provide clarity around connecting to Azure services. The three general methods of connectivity are:

- Default public access
- Service Endpoints
- Private Link
Before we discuss the different connectivity methods, it's important to understand the network in which Azure services live. The Microsoft Global Network is the backbone that connects customers to Microsoft cloud services, handling trillions of requests while offering high availability, capacity, and flexibility. Microsoft reduces latency and improves quality of service by using direct interconnections, edge nodes, and software that routes data as close as possible to customers and their services. By default, all connections to Azure services ride the Azure backbone and use what is called cold potato routing: traffic destined for an Azure service enters the Microsoft Global Network at the edge Point of Presence (POP) closest to the user, and return traffic likewise exits at the edge POP closest to the user. This ensures that traffic spends most of its time on the Microsoft Global Network, enjoying the following benefits:

- Lower and more predictable latency than routing over the public internet
- High availability and capacity
- Reduced exposure of traffic to the public internet
Having your Azure service publicly accessible can pose security risks and should be done with caution. Microsoft instead recommends Private Link for secure and private access to Azure services. However, there are reasons you may need to leave your Azure service publicly accessible, such as during a gradual migration or to support legacy integrations.
Default Public Access on the Microsoft Global Network
A common misconception about connecting to an Azure service through its public IP address is that the traffic must leave the Microsoft Global Network for the public internet before reaching the service's public IP. This is false.
Upon creating a subnet, Azure establishes a default system route for the address prefix 0.0.0.0/0, setting the next hop to the internet. This route comes into play when outbound traffic doesn't match any other route in the subnet's route table and would otherwise exit the Azure backbone for the public internet. However, the Microsoft Global Network recognizes public IP addresses belonging to Azure services and ensures that traffic destined for these addresses remains confined within the Microsoft Global Network. As illustrated in Figure 1, when accessing a public Azure service like storage, the traffic stays on the Microsoft Global Network.
Figure 1 - Public access via Microsoft Global Network
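The backbone can do this because the public prefixes of Azure services are well known (Microsoft publishes them in the downloadable "Azure IP Ranges and Service Tags" JSON). As a rough sketch of the decision, the following Python snippet checks whether a destination IP falls inside a known Azure prefix; the prefixes shown are an illustrative sample, not an authoritative list.

```python
import ipaddress

# Illustrative sample of Azure Storage public prefixes; real values come
# from the published "Azure IP Ranges and Service Tags" JSON download.
AZURE_STORAGE_PREFIXES = [
    "20.38.96.0/23",
    "20.150.0.0/24",
    "52.239.0.0/17",
]

def stays_on_backbone(dest_ip: str, prefixes=AZURE_STORAGE_PREFIXES) -> bool:
    """Return True if dest_ip falls in a known Azure service prefix,
    i.e. traffic to it never needs to leave the Microsoft Global Network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in ipaddress.ip_network(p) for p in prefixes)

print(stays_on_backbone("52.239.10.5"))   # inside 52.239.0.0/17 -> True
print(stays_on_backbone("8.8.8.8"))       # not an Azure prefix  -> False
```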
Default Public Access with Custom UDR
When you create a User Defined Route (UDR) within your subnet to override the default system route for the 0.0.0.0/0 prefix, as is typically done when using a Network Virtual Appliance (NVA), any traffic bound for unknown prefixes will be first routed to the NVA rather than being directed to the public internet. However, traffic destined to public Azure Services will usually remain on the Microsoft Global Network. Figure 2 demonstrates traffic to a public Azure service such as storage after overriding the default system route.
Figure 2 - Public access via UDR
Default Public Access with Forced Tunneling
Forced tunneling allows you to direct some or all internet-bound traffic to a firewall residing on-premises. This lets you leverage your existing on-premises firewalls to inspect both on-premises and Azure traffic directed to the internet. When using a forced tunnel over ExpressRoute, traffic bound for public Azure services does not traverse the public internet. Once the on-premises firewall completes its inspection, thanks to Microsoft Peering and the BGP local preference attribute, traffic bound for public Azure services returns to the Microsoft Global Network over the ExpressRoute connection. Figure 3 demonstrates traffic to a public Azure service such as storage via a forced tunnel using ExpressRoute.
Figure 3 - Access to public Azure Service via forced tunnel and ER
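The reason the ExpressRoute path wins on the return trip is ordinary BGP best-path selection: routes learned over Microsoft Peering carry a higher local preference, and local preference is compared before AS-path length. The toy model below illustrates that ordering; the numeric values are made up for the example.

```python
# Toy BGP best-path selection between two candidate routes for an Azure
# service prefix. Local preference (higher wins) is evaluated before
# AS-path length (shorter wins), so the ExpressRoute Microsoft Peering
# path is chosen even with a longer AS path. Values are illustrative.
paths = [
    {"via": "public internet",               "local_pref": 100, "as_path_len": 2},
    {"via": "ExpressRoute Microsoft Peering", "local_pref": 200, "as_path_len": 3},
]

def best_path(candidates):
    # Higher local_pref first, then shorter AS path as the tiebreaker.
    return max(candidates, key=lambda p: (p["local_pref"], -p["as_path_len"]))

print(best_path(paths)["via"])   # -> "ExpressRoute Microsoft Peering"
```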
However, if you are using forced tunneling with a site-to-site VPN instead of ExpressRoute, traffic to a public Azure service will traverse the public internet, because Microsoft Peering is unique to ExpressRoute. Figure 4 demonstrates access to a public Azure service using a forced tunnel and a site-to-site VPN.
Figure 4 - Access to public Azure services via forced tunneling and site-to-site VPN
Service Endpoints
Employing Service Endpoints lets you limit access to your public Azure service to designated virtual networks and specified IP addresses and ranges. These IP addresses and ranges may extend beyond the boundaries of the Microsoft Global Network.
Once Service Endpoints are enabled, the Azure service retains its public IP address, and traffic stays within the Microsoft Global Network just as it does with default public access. Although Service Endpoints offer better security than default public access, Microsoft still strongly recommends Private Link as the preferred option.
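Conceptually, the service-side check works like an allow list with a default action of Deny: VNet traffic is identified by its subnet (thanks to the Service Endpoint), and everything else by source IP. The sketch below models that check; the subnet name and IP ranges are hypothetical placeholders.

```python
import ipaddress

# Hypothetical storage-account network rules once Service Endpoints are
# enabled: default action Deny, one allowed subnet, one allowed IP range.
ALLOWED_SUBNETS = {"vnet-hub/subnet-app"}
ALLOWED_IP_RANGES = ["203.0.113.0/24"]   # TEST-NET-3, standing in for an office range

def is_allowed(source_subnet=None, source_ip=None) -> bool:
    """Mimic the service-side check: VNet traffic is identified by its
    subnet via the Service Endpoint, internet traffic by source IP."""
    if source_subnet is not None:
        return source_subnet in ALLOWED_SUBNETS
    if source_ip is not None:
        addr = ipaddress.ip_address(source_ip)
        return any(addr in ipaddress.ip_network(r) for r in ALLOWED_IP_RANGES)
    return False   # default action: Deny

print(is_allowed(source_subnet="vnet-hub/subnet-app"))  # True
print(is_allowed(source_ip="198.51.100.7"))             # False
```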
Service Endpoints on the Microsoft Global Network
Enabling Service Endpoints ensures that traffic directed to your public Azure service remains within the Microsoft Global Network even when you override the default 0.0.0.0/0 system route. As illustrated in Figure 5, Service Endpoints keep the traffic on the Microsoft Global Network despite the overridden default route.
Figure 5 - Accessing public Azure services using Service Endpoints and UDR
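The mechanism behind this is longest-prefix matching: enabling a Service Endpoint installs a more specific route for the service's prefixes (next-hop type VirtualNetworkServiceEndpoint in the subnet's effective routes), which always beats a 0.0.0.0/0 UDR. A minimal model, using a made-up service prefix:

```python
import ipaddress

# Effective routes on a subnet with a Service Endpoint for Storage and a
# UDR overriding the default route. The endpoint installs a more
# specific route, so longest-prefix matching keeps that traffic on the
# backbone even though 0.0.0.0/0 points at an NVA. Prefixes are
# illustrative, not real values.
routes = [
    ("0.0.0.0/0",     "NVA 10.0.1.4"),                  # user-defined default route
    ("52.239.0.0/17", "VirtualNetworkServiceEndpoint"),  # installed by the endpoint
]

def next_hop(dest_ip: str) -> str:
    addr = ipaddress.ip_address(dest_ip)
    matching = [(ipaddress.ip_network(p), hop) for p, hop in routes
                if addr in ipaddress.ip_network(p)]
    return max(matching, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("52.239.10.5"))     # storage prefix -> service endpoint route
print(next_hop("93.184.216.34"))   # anything else  -> NVA
```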
Private Link
Private Link enables you to connect to Azure services through a private IP address in your subnet via a Private Endpoint. You can also grant customers or partners access to your Azure service by approving Private Endpoints in their own subnets.
After enabling Private Link and its corresponding Private Endpoint, you can disable public access to the Azure service so that all communication flows over the private network. Nevertheless, there may be cases where it is preferable to keep public access enabled, such as during a gradual migration or to support legacy integrations.
Private Link on the Microsoft Global Network
Just like default public access and Service Endpoints, traffic directed to Azure services via Private Link stays within the Microsoft Global Network. Overriding the 0.0.0.0/0 system route has no effect on Private Link traffic, because Private Endpoints use only private IP addresses.
Figure 6 – Accessing an Azure service via Private Link
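What actually steers clients to the private IP is DNS: the service's public name gains a CNAME to a `privatelink` name, and a private DNS zone linked to the VNet shadows that record with the endpoint's private IP. Clients outside the VNet keep resolving the public IP. The sketch below simulates that resolution chain; the account name and IPs are made up.

```python
# Sketch of how name resolution changes with a Private Endpoint.
# Publicly, "contoso.blob.core.windows.net" resolves (via a CNAME) to a
# public IP; inside a VNet linked to the privatelink DNS zone, the same
# name resolves to the endpoint's private IP. Names and IPs are made up.
public_dns = {
    "contoso.blob.core.windows.net": "contoso.privatelink.blob.core.windows.net",
    "contoso.privatelink.blob.core.windows.net": "20.60.1.10",   # public IP
}
private_zone = {
    "contoso.privatelink.blob.core.windows.net": "10.0.2.5",     # private endpoint IP
}

def resolve(name: str, inside_vnet: bool) -> str:
    # Follow CNAMEs; the private zone shadows the public record in-VNet.
    while True:
        if inside_vnet and name in private_zone:
            return private_zone[name]
        answer = public_dns[name]
        if answer[0].isdigit():        # crude "is this an IP" check
            return answer
        name = answer                  # CNAME, keep chasing

print(resolve("contoso.blob.core.windows.net", inside_vnet=True))   # 10.0.2.5
print(resolve("contoso.blob.core.windows.net", inside_vnet=False))  # 20.60.1.10
```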
Routing Preference
Several Azure services offer the flexibility to modify how public traffic is routed on the Microsoft Global Network. As previously mentioned, the default routing approach is referred to as "cold potato," where traffic enters and exits the Microsoft Global Network at the points of presence (POPs) closest to the client.
However, certain Azure services let you associate the public IP with the routing preference "Internet" instead of the default "Microsoft network." When this setting is enabled, routing for the public Azure service shifts to a "hot potato" approach: traffic enters and exits the Microsoft Global Network at the point closest to the Azure service itself, so it spends more time on the public internet and less time on the Azure backbone. Changing this routing preference does not affect traffic within the Microsoft Global Network; for example, traffic from a subnet to a public Azure service using the "Internet" routing preference still remains on the Azure backbone.
Choosing this routing option trades performance for cost, since data egress charges are lower with the "Internet" routing preference. It also matches the default routing behavior of other cloud providers, which use the "hot potato" approach.
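The difference between the two preferences boils down to where the handoff between the public internet and the backbone happens. The toy model below places POPs, a client, and a service on a one-dimensional "map" and picks the handoff POP for each preference; the city names and coordinates are arbitrary illustrative values.

```python
# Toy model of cold- vs hot-potato routing on a 1-D "map". With the
# default ("Microsoft network") preference, traffic enters the backbone
# at the POP nearest the client; with the "Internet" preference it stays
# on the public internet until the POP nearest the service. Positions
# are arbitrary illustrative coordinates.
pops = {"Seattle": 0, "Chicago": 40, "Ashburn": 80}
client_pos, service_pos = 5, 78

def handoff_pop(preference: str) -> str:
    anchor = client_pos if preference == "Microsoft network" else service_pos
    return min(pops, key=lambda p: abs(pops[p] - anchor))

print(handoff_pop("Microsoft network"))  # cold potato -> "Seattle"
print(handoff_pop("Internet"))           # hot potato  -> "Ashburn"
```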