Special thanks to Chacko Daniel for helping out with the SFRP connectivity issue.
Today, the default Service Fabric configuration exposes ports 19000 (the client connection endpoint) and 19080 (the HTTP gateway used by Service Fabric Explorer) publicly. These ports are usually protected by certificate-based security or AAD, but it is still a good idea to hide them from the Internet.
There are multiple ways to achieve this goal:
Using Network Security Groups to limit traffic to selected public networks
Exposing internal services to a private VNET using an Internal Load Balancer (ILB), while still exposing public services with an Azure Load Balancer (ALB)
More complex solutions
In this article, I will focus on the second approach.
Azure Load Balancer will receive traffic on the public IP addresses
Internal Load Balancer will receive traffic on the private VNET
Important: Service Fabric Resource Provider (SFRP) integration
There is a slight issue with this configuration: as of SF runtime 5.4, SFRP requires access to the SF endpoints on ports 19000 and 19080 for management purposes (and it is able to use only public addresses for that).
The current VMSS implementation allows neither referencing a single port on two load balancers, nor configuring multiple IP configs per NIC, nor configuring multiple NICs per node. This makes exposing the single port 19080 on both load balancers virtually impossible. Even if it were possible, it would make the configuration much more complex and would require a Network Security Group.
Fortunately, this is no longer an issue in 5.5. Starting from this version, SF requires only an outbound connection to the SFRP endpoint https://<region>.servicefabric.azure.com/runtime/clusters/, which the ALB already provides to all the nodes.
ALB and ILB step-by-step
Below is a short step-by-step guide. Many of the points in this guide also apply to configuring an ALB and ILB for Virtual Machine Scale Sets without Service Fabric.
Basic cluster configuration
1. Create a project using the template service-fabric-secure-cluster-5-node-1-nodetype from the quickstart gallery.
2. Get it running (you need to complete the standard steps with Key Vault, etc.). It is a good idea to deploy it now just to make sure the cluster is up and running; the ILB can be added later by redeploying a modified template.
Configuring ILB and ALB
3. Now you need to create a secondary subnet in which your ILB will expose its frontend. In azuredeploy.json:
4. After the subnet0Ref variable, insert these:
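The quickstart template defines its first subnet through the subnet0Name/subnet0Prefix/subnet0Ref variables; a second subnet can follow the same convention. The names below (subnet1Name, the 10.0.1.0/24 prefix, the vnetID variable) are a sketch based on that convention and may differ in your copy of the template:

```json
"subnet1Name": "Subnet-1",
"subnet1Prefix": "10.0.1.0/24",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
```

Remember to also add the subnet itself to the subnets array of the Microsoft.Network/virtualNetworks resource:

```json
{
  "name": "[variables('subnet1Name')]",
  "properties": {
    "addressPrefix": "[variables('subnet1Prefix')]"
  }
}
```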
5. Now you can create the ILB. Find the section responsible for creating the ALB (it has "name": "[concat('LB','-', parameters('clusterName'),'-',variables('vmNodeType0Name'))]") and, after this entire large section, insert the ILB config:
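A minimal sketch of the ILB resource follows. It mirrors the ALB section of the quickstart template, replacing the public frontend with a static private IP on the new subnet; the variable names (lbApiVersion, computeLocation, virtualNetworkName, nt0fabricHttpGatewayPort) are assumed from that template, the `<...>` placeholders stand for the resource IDs of the ILB's own frontend, backend pool, and probe, and a second rule/probe pair for port 19000 (nt0fabricTcpGatewayPort) is needed as well:

```json
{
  "apiVersion": "[variables('lbApiVersion')]",
  "type": "Microsoft.Network/loadBalancers",
  "name": "[concat('ILB','-', parameters('clusterName'),'-',variables('vmNodeType0Name'))]",
  "location": "[variables('computeLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "LoadBalancerIPConfig",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "10.0.1.10",
          "subnet": { "id": "[variables('subnet1Ref')]" }
        }
      }
    ],
    "backendAddressPools": [ { "name": "LoadBalancerBEAddressPool" } ],
    "loadBalancingRules": [
      {
        "name": "LBHttpRule",
        "properties": {
          "frontendIPConfiguration": { "id": "<ILB frontend IP config resource id>" },
          "backendAddressPool": { "id": "<ILB backend pool resource id>" },
          "frontendPort": "[variables('nt0fabricHttpGatewayPort')]",
          "backendPort": "[variables('nt0fabricHttpGatewayPort')]",
          "protocol": "tcp",
          "enableFloatingIP": false,
          "idleTimeoutInMinutes": 5,
          "probe": { "id": "<ILB probe resource id>" }
        }
      }
    ],
    "probes": [
      {
        "name": "FabricHttpGatewayProbe",
        "properties": {
          "port": "[variables('nt0fabricHttpGatewayPort')]",
          "protocol": "tcp",
          "intervalInSeconds": 5,
          "numberOfProbes": 2
        }
      }
    ]
  }
}
```

For the nodes to register behind the ILB, the new backend pool also has to be referenced from the loadBalancerBackendAddressPools array in the VMSS ipConfigurations, next to the existing ALB pool reference.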
At this point, you can deploy your template, and the Service Fabric administrative endpoints will be available only at your ILB IP, 10.0.1.10.
6. It is also a good idea to get rid of the rule allowing Remote Desktop access to your cluster nodes on the public IP (you can still access them from your internal network at addresses like 10.0.0.4, 10.0.0.5, etc.).
6i) You need to delete it from the ALB configuration:
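In the quickstart template, RDP access is granted by an inbound NAT pool on the ALB. The section to delete looks roughly like the sketch below; the exact names (LoadBalancerBEAddressNatPool, lbIPConfig0) come from that template and may differ in yours:

```json
"inboundNatPools": [
  {
    "name": "LoadBalancerBEAddressNatPool",
    "properties": {
      "frontendIPConfiguration": { "id": "[variables('lbIPConfig0')]" },
      "protocol": "tcp",
      "backendPort": "3389",
      "frontendPortRangeStart": "3389",
      "frontendPortRangeEnd": "4500",
      "idleTimeoutInMinutes": "5"
    }
  }
]
```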
If you have already deployed the template, you need to do 6ii first, redeploy, and then do 6i. Otherwise you will get an error: LoadBalancerInboundNatPoolInUseByVirtualMachineScaleSet.
7. One last thing: the Service Fabric ARM template has an option called managementEndpoint, and the best idea is to reconfigure it to a Fully Qualified Domain Name that resolves to your ILB IP address. This option is related to the aforementioned SFRP-integration issue in 5.4 and earlier.
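For example, assuming sf-internal.example.com is a placeholder for a DNS name you register that resolves to the ILB's private IP (10.0.1.10), and nt0fabricHttpGatewayPort is the quickstart variable holding 19080:

```json
"managementEndpoint": "[concat('https://sf-internal.example.com:', variables('nt0fabricHttpGatewayPort'))]"
```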
You can now freely configure all your services and decide which one is exposed on which load balancer.