Azure Front Door using Private Link Service with two highly available Network Virtual Appliances

Introduction and Purpose

 

Network security is increasingly important when setting up a virtual cloud environment. At the same time, more customers are looking to enable global connectivity in a low-latency environment, relying on Content Delivery Network solutions like Azure Front Door. With the newly added capabilities in AFD, customers are combining WAF (Azure Web Application Firewall) filtering with hardening of their internal environment through highly available Network Virtual Appliances for further traffic inspection. Azure Front Door now provides CDN capabilities and integrates with Private Link Service to keep access to a customer’s internal network private. This article explores the behavior of the Private Link Service integration with Front Door and how it can be leveraged to expose a Web Server securely. 

 

This blog provides a working demo of Azure Front Door with Private Link Service and a highly available Network Virtual Appliance setup behind the Private Link Service. The NVAs inspect and forward traffic to a Web Server on the same VNet. Private Link Service is different from a private endpoint in that it is a way of providing services to different entities (i.e., companies). PLS exposes private services running behind an Azure Load Balancer by adding a PLS NAT private IP within the associated VNet. This VNet can be a transit VNet or a workload VNet. For more information, see https://docs.microsoft.com/azure/private-link/private-link-service-overview.

 

For those familiar with traditional routing based on TCP/IP headers, routing with Front Door requires a change in mindset, as it routes based on Layer 7 headers. I hope this blog gives the reader an idea of how this can be addressed and encourages future explorations building on it. 
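
As a quick illustration of that mindset, consider the request below (using the demo hostname from this article). The DNS name resolves via anycast to the nearest Front Door POP, and everything beyond the POP is decided by the HTTP Host header rather than by the destination IP:

    # The hostname resolves (via anycast) to the nearest Front Door POP;
    # routing beyond the POP is driven by the HTTP Host header (Layer 7).
    curl -v http://webserver.maugamo9.info/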

The lab setup looks like this:

 

Mauricio_Rojas_0-1655169321566.png

 

Prerequisites and Considerations

 

  • In a scenario where highly available Network Virtual Appliances (NVAs) are deployed, it is common practice to have an “untrust” subnet, a “trust” subnet, and a server subnet or separate VNet where workloads such as web servers and databases reside. The trust-subnet interfaces of the NVAs are load balanced by an Azure Internal Load Balancer. We will follow this pattern for this demo.  
  • The Private Link Service will direct traffic to the ILB of the NVAs and, from there, the necessary routing will take place to reach the Windows Server Virtual Machine running IIS (Internet Information Services). The host header of this website is webserver.maugamo.info, which must be configured on the Origin as depicted above.  
  • Once traffic traverses the Private Link Service, it is considered east-west traffic rather than north-south; hence, no external Load Balancer on the VNet participates for the purposes of this article.  
  • For this demo, we will use HTTP only for simplicity.   
  • Front Door supports Private Link origins only in a subset of regions. They are listed here: https://docs.microsoft.com/azure/frontdoor/private-link#region-availability 

 

Traffic Flow Description: 

 

The Client machine will hit the URL http://webserver.maugamo9.info. From there, the nearest AFD (Azure Front Door) Point of Presence (POP) to the originator will service the request. The traffic will be source NAT’ed from the POP IP to the Private Link Service IP, 10.150.1.8, which becomes the new source IP. The packet will reach the Internal Load Balancer at 10.150.1.4 and, from there, traffic will be load balanced across the two trust interfaces of the NVAs. Once a request arrives at an NVA, it will be forwarded out through one of the trust interfaces to its destination, the Web Server on the VNet (10.150.2.6). The response from the Web Server will go back to the ILB by means of a UDR (user-defined route) whose next hop is the ILB frontend IP. This ensures traffic symmetry.  
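
To see this flow from the NVA’s perspective, a packet capture filtered on the PLS NAT IP shows every inbound request arriving with 10.150.1.8 as its source. The interface name below is an assumption (hn1 is typical for a second NIC on Hyper-V based Azure images); adjust it to your trust interface:

    # Run from the OPNsense shell; confirms AFD traffic arrives on the
    # trust interface already SNAT'ed to the PLS NAT IP.
    tcpdump -ni hn1 host 10.150.1.8 and port 80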

 

 

Deployment  

 

Front Door 

 

  • The Front Door Service will have the Origin set to the ‘trust’ Internal Load Balancer frontend IP depicted in the diagram. Because our ILB is accessed from AFD through the Private Link Service, we need to associate the PLS with AFD in our Origin configuration. 

Mauricio_Rojas_1-1655169542708.png

 

  • The origin host header will be that of the end web server running IIS. This is the Layer 7 information we are routing on.  

  • Associating the PLS with AFD is accomplished using the URI of our pre-created PLS, as shown below. 

 

Mauricio_Rojas_0-1655169614093.png

  • The routing on the Front Door side sends traffic to the Origin associated with the Private Link Service. 

Mauricio_Rojas_1-1655169654774.png
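
For readers scripting this rather than using the portal, here is a minimal Azure CLI sketch of the Origin and its PLS association. Resource names are placeholders, and the private-link flag names should be double-checked against your azure-cli version; only the ILB frontend IP (10.150.1.4) and the origin host header come from this lab.

    # Hypothetical resource names; verify flags with 'az afd origin create --help'.
    az afd origin create \
      --resource-group myRG \
      --profile-name myAfdProfile \
      --origin-group-name myOriginGroup \
      --origin-name nva-ilb-origin \
      --host-name 10.150.1.4 \
      --origin-host-header webserver.maugamo.info \
      --http-port 80 \
      --priority 1 \
      --weight 1000 \
      --enabled-state Enabled \
      --enable-private-link true \
      --private-link-resource /subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/privateLinkServices/myPLS \
      --private-link-location eastus2 \
      --private-link-request-message "AFD to PLS for the demo origin"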

 

Private Link Service 

 

  • The Azure Private Link Service will provide a NAT IP address within the range of the ILB subnet. This IP will be seen in the packet captures and will be the source IP of the traffic coming into the Network Virtual Appliance.  

 

Mauricio_Rojas_2-1655169696094.png
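
As a point of reference, a PLS like the one in this lab can be created with the Azure CLI sketch below. Names are placeholders; whether you pin the NAT IP statically (10.150.1.8 here, per the capture above) or let Azure assign it is a deployment choice, and the flags should be checked against your azure-cli version.

    # Hypothetical names; the PLS fronts the existing 'trust' internal load balancer.
    az network private-link-service create \
      --resource-group myRG \
      --name myPLS \
      --vnet-name hubVnet \
      --subnet trustSubnet \
      --lb-name trustILB \
      --lb-frontend-ip-configs trustFrontEnd \
      --private-ip-address 10.150.1.8 \
      --private-ip-allocation-method Static \
      --location eastus2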

 

 

NVA OPNsense  

 

Firewall section 

 

  • Now let’s talk about the configuration within the NVA itself. There are two interfaces: WAN is the untrust interface and LAN is the trust interface. Some firewall rules are already configured for you by the original script, as depicted below: one allows the Azure Load Balancer probe into the NVA for health checks; a second allows traffic on the LAN interface; finally, there is an explicit allow rule from the Private Link Service (PLS) NAT’ed IP to the Web Server.  
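
Since OPNsense compiles its GUI rules down to pf, you can verify from the NVA shell that these rules are loaded. This is a minimal verification sketch; the grep patterns assume this lab’s addressing, and 168.63.129.16 is the well-known Azure load balancer health probe source.

    # List the active pf filter rules and pick out the ones described above.
    pfctl -sr | grep 168.63.129.16                     # health probe allow rule
    pfctl -sr | grep -E '10\.150\.1\.8|10\.150\.2\.6'  # PLS NAT IP -> Web Server allow rule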

 

Network Address Translation section.  

 

  • In order to filter traffic from AFD through the OPNsense firewalls, we need to add a NAT rule that accepts traffic arriving on the trust interface from the PLS and, after the firewall rules are applied, routes it back out the trust interface to the web server. In other words, we will create what OPNsense calls a ‘port forwarding’ NAT rule, which will forward traffic from the PLS IP (10.150.1.8) on port 80 to the web server (10.150.2.6) on port 80. Please observe the image below and the verification sketch after this list. The NAT options on OPNsense include Port Forwarding and One-to-One: 
    • Port Forward translates only the destination (here, to the web server) while leaving the source address, the PLS NAT’ed IP, untranslated. 
    • One-to-One maps a single external address to a single internal address. This is a scenario yet to be explored. 
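
To confirm the resulting port-forward from the shell, pf’s translation ruleset can be listed directly; the rule created above shows up as an rdr entry targeting the web server (a verification sketch, with output format depending on your OPNsense version):

    # Show pf's NAT (translation) rules; expect an 'rdr' rule
    # forwarding port 80 traffic to 10.150.2.6 port 80.
    pfctl -sn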

 

Mauricio_Rojas_3-1655169795862.png

 

  • The LAN rule is depicted above. 

Routing  

 

OPNsense Routing Configuration 

 

  • This is where it starts getting a little interesting, and it applies to routing in any Network Virtual Appliance. As a best practice, we add static routes to our NVA to ensure that internal traffic is routed out our internal interfaces, as shown below. This is especially important when you have VNet peers (spokes), because of the nature of Azure System Routes. It is best not to take chances on internal traffic leaving through the untrust subnet and coming back through the trust subnet. I will explain more in the Azure Routing section.  

 

Mauricio_Rojas_4-1655169861471.png

 

  • Please cross-reference the picture above with the networking diagram. Also note that a very specific /32 route was added for the PLS NAT’ed IP to reduce the risk of asymmetry.  
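
These routes, including the /32 for the PLS NAT’ed IP, can also be confirmed from the OPNsense shell (a quick verification sketch using this lab’s prefixes):

    # Print the kernel routing table and filter for the lab's prefixes;
    # the 10.150.1.8/32 entry for the PLS NAT IP should be present.
    netstat -rn | grep -E '10\.150|default'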

 

Azure VNet Routing Configuration 

 

  • Due to the nature of Azure System Routes, even if we add a 0.0.0.0/0 route with next hop ILB IP 10.150.1.4, a more specific VNet System Route remains in the effective route table (shown below).  

 

Mauricio_Rojas_5-1655169899815.png
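
The effective-route view above can be reproduced from the CLI against the NVA’s trust NIC (resource names below are placeholders):

    # Show the routes actually in effect on the NIC, including the Azure
    # system routes that remain more specific than a 0.0.0.0/0 UDR.
    az network nic show-effective-route-table \
      --resource-group myRG \
      --name nva1-trust-nic \
      --output table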

 

 

  • This System Route will hinder our ability to route properly: packets will effectively take different paths across both NVAs and across both the trust and untrust interfaces. This can be tested very easily by shutting down one of your NVAs so that only one is active, and observed on that NVA by running a packet capture with a command such as tcpdump -i <interface> host 10.150.1.8. 
  • This is solved by ensuring the return traffic goes through the ILB, adding a specific UDR as shown below.  

 

Mauricio_Rojas_6-1655169899817.png
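
A sketch of that UDR in Azure CLI form follows; the resource-group and route-table names are placeholders (the route table is associated with the Web Server subnet), while the /32 prefix and ILB next hop come from this lab:

    # Send return traffic for the PLS NAT IP back through the ILB frontend,
    # keeping the flow symmetric across the two NVAs.
    az network route-table route create \
      --resource-group myRG \
      --route-table-name webserver-subnet-rt \
      --name to-pls-nat-ip \
      --address-prefix 10.150.1.8/32 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.150.1.4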

 

Future work: 

 

  • In a future blog post, I hope to demonstrate the same setup but with the web server replaced by a storage account serving static content, and to implement a 1:1 NAT or SNAT on an NVA capable of doing so, exploring whether the UDR mentioned above would still be required. It is reasonable to suspect that it would be. 
  • Scale this up to a spoke VNet.