Clustering the BizTalk EntSSO Master Secret Server, SQL and MSDTC services on Azure IaaS
Published Mar 04 2024 08:48 AM
Microsoft

 

 


 

 

This article applies to BizTalk Server 2020 and SQL Server 2019.

 

While most of our current BizTalk Server customers are migrating to Azure Logic Apps, some are conducting a gradual hybrid migration to Azure, running their BizTalk and Host Integration workloads on Azure Infrastructure as a Service (IaaS). For them, we have prepared this article, which shows how to cluster the Enterprise Single Sign-On Master Secret Server (EntSSO MSS) and the Microsoft Distributed Transaction Coordinator (MSDTC) service on Azure virtual machines. 

 

The following are the general steps for this scenario:

 

  1. Create 4 Virtual Machines (VMs) within the same VNET/SNET:
    • 1 AD/DNS VM.
    • 1 BizTalk Server 2020 VM.
    • 2 SQL Server 2019 VMs hosting the BizTalk DBs in a traditional Failover Cluster Instance (FCI, preferred) or Always On Availability Groups, plus the clustered MSS and MSDTC.
  2. Create an Azure Internal Load Balancer (ILB) with the Standard SKU.
  3. Create the SQL FCI with Azure shared disks on the SQL VMs.
  4. Cluster MSDTC on the SQL VMs, in the same cluster role (group) and per SQL instance if applicable.
  5. Cluster EntSSO in the same cluster role where the SSODB resides.
  6. Install and configure BizTalk Server.

 

Create the AD/DNS, BizTalk 2020 and SQL 2019 VMs within the same VNet/Subnet:

Because BizTalk Server, EntSSO and MSDTC make heavy use of RPC dynamic ports, the Standard SKU is recommended for the Azure Internal Load Balancer so that its HA ports feature can be used. You can also use the Basic SKU, but this leads to additional rules in the ILB configuration because the Basic SKU has no HA ports option. The MSDTC service works best with ping enabled, and ping is a good validation tool; the Basic SKU does not allow ping. In addition, the Basic SKU will be retired in 2025, and migrating later requires moving from a basic to a standard IP, which also requires a NIC swap. In short, avoid the Basic SKU and use the Standard SKU to avoid problems and complexity. 

 

It is recommended to fix the ports for all SQL, MSDTC and EntSSO services (we will cover this later).

 

For this scenario, we are going to implement the Standard SKU for ILB.

 

These are the general steps for configurations of the ILB:

 

  1. 3 static frontend IPs: 1 for the SQL FCI, 1 for SSO and 1 for MSDTC. 
  2. 1 backend pool: the 2 SQL FCI VMs for the BizTalk DBs with the clustered EntSSO MSS and MSDTC services.
  3. 3 health probes: 1 for the SQL FCI, 1 for SSO and 1 for MSDTC.
  4. 3 load balancing rules: 1 for the SQL FCI, 1 for SSO and 1 for MSDTC.

 

Create Internal Load Balancer (ILB) with Standard SKU.

 

Once in the Azure portal, search for “Load Balancers” resources >> click Create and complete the basic information as follows:

 

ReynaldoMSFT_14-1709558931889.png

 

Click the Review + create button and after validation passes, click the Create button and wait until the ILB is created.

 

 

For more information see Quickstart: Create an internal load balancer - Azure portal - Azure Load Balancer | Microsoft Learn

 

Add the frontend IPs:

 

On the previously created load balancer, go to Frontend IP configuration and click Add. Create the 3 frontend IPs as detailed below:

ReynaldoMSFT_15-1709558948010.png

Add the SQL Server VMs to the backend pool:

 

Now go to Load Balancer >> Backend pools and click Add to add the two SQL Server VMs:

ReynaldoMSFT_16-1709558962313.png

Now we need to add the Health probes for SQL Server instance, EntSSO and MSDTC.

 

Go to Settings >> Health probes

ReynaldoMSFT_17-1709558975383.png

 

Click the “Add” button and then complete the configuration for the health probe:

ReynaldoMSFT_18-1709558987390.png

 

Click Save

 

Repeat for MSDTCHealthProbe

ReynaldoMSFT_19-1709559013632.png

 

And for SSOMSSHealthProbe

ReynaldoMSFT_21-1709559036155.png

 

After adding all Health probes, we will have the following 3 health probes:

ReynaldoMSFT_22-1709559055765.png

Notice the ports and rules used by each Health probe.

 

Finally, we need to configure the Load balancing rules.

 

Go to Settings > Load balancing rules

 

Click the “Add” button and then complete the configuration for each of the rules:

ReynaldoMSFT_23-1709559112929.png

ReynaldoMSFT_24-1709559145709.png

This is the configuration for the SQL instance rule SQLMSSRule. Notice that the health probe SQLMSSHealthProbe is the one created in the previous step and uses port 59999. In addition, HA Ports and Floating IP are enabled. Floating IP is required for the cluster to work. 

 

ReynaldoMSFT_25-1709559171343.png

This is the configuration for the MSDTC rule MSDTCRule. Notice that the health probe MSDTCHealthProbe is the one created in the previous steps and uses port 60000. In addition, HA Ports and Floating IP are enabled.

 

ReynaldoMSFT_26-1709559190603.png

 

This is the configuration for the Master Secret Server-Single Sign On rule SSOMSSRule. Notice that the health probe SSOMSSHealthProbe is the one created in the previous steps and uses port 49999. In addition, HA Ports and Floating IP are enabled.

 

After adding all the rules, we will have the following 3 Load balancing rules:

ReynaldoMSFT_27-1709559215263.png
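
If you prefer scripting over the portal, the same frontend IP, health probe and HA-ports rule configuration can be sketched with the Az PowerShell module. This is a sketch under assumptions, not a definitive script: the load balancer name BTSILB, resource group BTS-RG, VNet/subnet names, and backend pool name SQLBackendPool are hypothetical; the frontend IP, probe port and rule names follow this article's SQL examples, and the same pattern repeats for MSDTC and SSO.

```powershell
# Sketch only: assumes the ILB, VNet and backend pool already exist.
# Names (BTSILB, BTS-RG, BTS-VNET, BTS-SNET, SQLBackendPool) are examples.
$lb     = Get-AzLoadBalancer -Name 'BTSILB' -ResourceGroupName 'BTS-RG'
$vnet   = Get-AzVirtualNetwork -Name 'BTS-VNET' -ResourceGroupName 'BTS-RG'
$subnet = Get-AzVirtualNetworkSubnetConfig -Name 'BTS-SNET' -VirtualNetwork $vnet

# Static frontend IP for the SQL FCI (repeat for MSDTC and SSO with their own IPs)
$lb | Add-AzLoadBalancerFrontendIpConfig -Name 'SQLMSSCluFIP' `
        -PrivateIpAddress '192.0.1.8' -Subnet $subnet | Out-Null

# TCP health probe on the port the clustered IP resource will answer on
$lb | Add-AzLoadBalancerProbeConfig -Name 'SQLMSSHealthProbe' `
        -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2 | Out-Null

$lb | Set-AzLoadBalancer | Out-Null    # push frontend + probe before referencing them
$lb = Get-AzLoadBalancer -Name 'BTSILB' -ResourceGroupName 'BTS-RG'

# HA-ports rule (Protocol All, frontend/backend port 0) with Floating IP enabled
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name 'SQLMSSCluFIP'
$probe = Get-AzLoadBalancerProbeConfig      -LoadBalancer $lb -Name 'SQLMSSHealthProbe'
$pool  = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name 'SQLBackendPool'
$lb | Add-AzLoadBalancerRuleConfig -Name 'SQLMSSRule' -Protocol All `
        -FrontendPort 0 -BackendPort 0 -FrontendIpConfiguration $fe `
        -BackendAddressPool $pool -Probe $probe -EnableFloatingIP | Out-Null
$lb | Set-AzLoadBalancer | Out-Null
```

Protocol All with frontend and backend port 0 is how the HA ports option is expressed in Az PowerShell; it is only available on the Standard SKU, which is one more reason to avoid the Basic SKU here.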

 

Create the SQL FCI with Azure shared disks on the SQL VMs

For this scenario, we are going to use a SQL FCI with Azure shared disks within a single subnet. We recommend following the steps in the articles Prepare virtual machines for an FCI - SQL Server on Azure VMs | Microsoft Learn and then Create an FCI with Azure shared disks - SQL Server on Azure VMs | Microsoft Learn.

Notice that, because we need to bind it, the SQL Server FCI IP must be the same as the frontend IP for SQL Server that we configured before (SQLMSSCluFIP) in the load balancer.

If you want to implement AOAG, you can follow the related process on the same link (after preparing the machines).

After creating the SQL FCI or AOAG, you need to bind the IP address of the SQL instance or SQL listener ($ListenerILBIP in the PowerShell below; which one depends on your SQL configuration) to the ILB frontend IP SQLMSSCluFIP created during the previous steps for SQL and its probe port. To do this, run the following PowerShell script:

 

 

$ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)

$IPResourceName = "SQL IP Address 1 (SQLMSSClu)" # the IP Address cluster resource name

$ListenerILBIP = "192.0.1.8" # the IP Address of the Internal Load Balancer (ILB). This is the static frontend IP address for the load balancer you configured in the Azure portal.

[int]$ListenerProbePort = 59999

 

Import-Module FailoverClusters

 

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0;"EnableNetBIOS"=1}

 

ReynaldoMSFT_29-1709559458472.png

Notice that you need to restart the cluster role so the changes take effect.
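
The restart and a quick verification of the binding can be sketched as follows; the role name 'SQL Server (MSSQLSERVER)' and the IP resource name are examples taken from this walkthrough, so adjust them to your cluster:

```powershell
# Restart the clustered role so the new Address/ProbePort values take effect
# ('SQL Server (MSSQLSERVER)' is an example role name; check Get-ClusterGroup)
Stop-ClusterGroup  -Name 'SQL Server (MSSQLSERVER)'
Start-ClusterGroup -Name 'SQL Server (MSSQLSERVER)'

# Verify the IP resource picked up the ILB frontend IP and probe port
Get-ClusterResource 'SQL IP Address 1 (SQLMSSClu)' | Get-ClusterParameter |
    Where-Object { $_.Name -in 'Address','ProbePort','Network','EnableDhcp' }
```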

 

In your Failover cluster manager, it will look like this:

 

ReynaldoMSFT_0-1709559547399.png

 

Once this step is completed we can continue with:

Clustering the SSO service and MSDTC within the same cluster group on the SQL VMs:

In this scenario, we are going to cluster the SSO and MSDTC on the same SQL FCI cluster.


Clustering MSDTC:

Important: for the MSDTC service, it is recommended to create a dedicated clustered instance per DB cluster role (group) if you have multiple clustered SQL instances; this avoids extra network round trips and spreads the load. In this case, we have all DBs in the same clustered SQL instance.

  1. As we did for SQL, we need to deploy an Azure standard shared SSD. Because the DTC service does not consume many resources, for this scenario we attached a 4 GB Standard Azure Managed Disk.  

 

Enable shared disks for Azure managed disks - Azure Virtual Machines | Microsoft Docs

After that, go to Failover Cluster Manager, create a new role, and in the “Select Role” section select Distributed Transaction Coordinator (DTC), then click Next.

ReynaldoMSFT_1-1709559547402.png

 

In the Client Access Point section, give a name for your clustered role.

ReynaldoMSFT_2-1709559572877.png

 

Select the disk configured on the previous step.

ReynaldoMSFT_3-1709559572878.png

 

Click Next on the confirmation page.

ReynaldoMSFT_4-1709559572879.png

 

Your role is successfully clustered. Click Finish.

ReynaldoMSFT_5-1709559572880.png

 

You can also refer to the following link for more information: MSDTC – How to Cluster – Raspberryfield – IT, video games and comics

 

 

We will now see something like this on our DTC Role resources tab:

ReynaldoMSFT_6-1709559589278.png

 

ReynaldoMSFT_7-1709559589280.png

 

To fix it, we need to bind the ILB frontend IP MSDTCCluFIP created during the previous steps to the MSDTC role and its probe port. To do this, run the following PowerShell script:

$ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)

$IPResourceName = "IP Address 192.0.1.0" # the IP Address resource name

$ListenerILBIP = "192.0.1.9" # the IP Address of the Internal Load Balancer (ILB). This is the static IP address for the load balancer you configured in the Azure portal.

[int]$ListenerProbePort = 60000

 

Import-Module FailoverClusters

 

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0;"EnableNetBIOS"=1}

 

ReynaldoMSFT_8-1709559602882.png

 

Notice that this time no restart of the role was needed.

ReynaldoMSFT_9-1709559616084.png

 

 

Now we can see our role running successfully. Notice the IP Address properties.

ReynaldoMSFT_10-1709559616086.png

 

Move the resources across nodes and validate functionality on both nodes.
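
Beyond moving the role between nodes, you can validate DTC connectivity with the Test-Dtc cmdlet (available in the MsDtc module on Windows Server 2012 R2 and later). MSDTCCLU below is the example client access point name from this walkthrough, and 17100 is just an example resource-manager test port:

```powershell
# Run on the BizTalk VM (or any client): checks RPC/CID settings and
# firewall configuration between the local machine and the clustered DTC.
Test-Dtc -LocalComputerName $env:COMPUTERNAME `
         -RemoteComputerName 'MSDTCCLU' `
         -ResourceManagerPort 17100 -Verbose
```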

 

  2. Create the cluster role with fixed ports for the MSDTC service. It is recommended to fix ports in order to avoid port exhaustion and to facilitate monitoring, tracing and firewall tasks.

 

Fixing the MSDTC ports:

For the local MSDTC node, we need to set it in the local registry:

HKLM\Software\Microsoft\MSDTC\[ServerTcpPort] (DWORD (32-bit) Value)

 

Details: How to Configure MSDTC to Use a Specific Port in Windows Server 2012/2012R2 | Microsoft Learn

 

How to configure the MSDTC service to listen on a specific RPC server port | Microsoft Learn

 

For clustered MSDTC, we need to set it in the cluster registry:

HKLM\Cluster\Resources\{Unique_DTC_ResourceID_GUID}\MSDTCPRIVATE\MSDTC\[ServerTcpPort] (DWORD (32-bit) Value)

 

More information:

MSDTC Supported Configurations - Microsoft Community Hub
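
The two registry settings above can be sketched in PowerShell as follows. This is a sketch under assumptions: the port 30012 and the DTC resource name 'New Distributed Transaction Coordinator' are examples, and the clustered key name is the DTC resource GUID, which we read from the resource's Id property:

```powershell
# Local MSDTC: fix the RPC endpoint port (30012 is an example port)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSDTC' `
    -Name 'ServerTcpPort' -Value 30012 -PropertyType DWord -Force

# Clustered MSDTC: same value under the clustered DTC resource's registry hive.
# The key name is the DTC resource GUID; adjust the resource name to yours.
$dtc  = Get-ClusterResource -Name 'New Distributed Transaction Coordinator'
$guid = $dtc.Id.Trim('{}')
New-ItemProperty -Path "HKLM:\Cluster\Resources\$guid\MSDTCPRIVATE\MSDTC" `
    -Name 'ServerTcpPort' -Value 30012 -PropertyType DWord -Force

# Restart the DTC role afterwards so the fixed port takes effect
```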

Clustering Master Secret Server:

  1. Install and configure Enterprise SSO on the SQL cluster nodes (this scenario). When installing it, be sure to move the MSDTC service to the node where you are installing SSO.
  2. Start the custom BizTalk Server configuration on each SQL Server node: on the first node create the SSO DB, and on the second node join the existing SSO system. As above, be sure to move the MSDTC service to the node where you are performing the configuration.
  3. Create the clustered Enterprise SSO resource and its dependent resources within the cluster group where the MSDTC role was created.

ReynaldoMSFT_11-1709559629098.png

 

After the role is created within the cluster, we need to bind the ILB frontend IP SSOMSSFIP created in the previous steps to the clustered SSO and its probe port. To do this, run the following PowerShell script:

$ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)

$IPResourceName = "IP Address 192.0.1.0 (2)" # the IP Address resource name

$ListenerILBIP = "192.0.1.11" # the IP Address of the Internal Load Balancer (ILB). This is the static IP address for the load balancer you configured in the Azure portal.

[int]$ListenerProbePort = 49999

 

Import-Module FailoverClusters

 

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0;"EnableNetBIOS"=1}

 

ReynaldoMSFT_12-1709559640342.png

 

 

If you ping the SSOCLU DNS name from the BizTalk VM, you should get a reply.

 

ReynaldoMSFT_13-1709559640348.png

 

 

 

  4. Update the master secret name in the Management database.

ReynaldoMSFT_14-1709559640349.png

 

  5. Restore the master secret on the second cluster node. If you moved the master secret server to the cluster, you must restore the master secret on the first cluster node as well.
  6. Bring the cluster group that contains the SSO service online.
  7. Fix the ports for the SSO service on all nodes by running the following command:

 

"%CommonProgramFiles%\Enterprise Single Sign-On\ssoconfig.exe" -rpcport 30000

 

Basic steps: Clustering the Master Secret Server - BizTalk Server | Microsoft Learn

Details: How to Cluster the Master Secret Server1 - BizTalk Server | Microsoft Learn
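
After fixing the SSO RPC port, you can confirm that the service is actually listening on it. A minimal sketch, assuming the default ENTSSO service name and the port 30000 used above; on the clustered nodes, only the node that currently owns the SSO role should show a listener:

```powershell
# Restart the Enterprise SSO service so the -rpcport change takes effect
Restart-Service -Name ENTSSO

# Confirm a listener on the fixed RPC port (30000 in this article's example)
Get-NetTCPConnection -LocalPort 30000 -State Listen |
    Select-Object LocalAddress, LocalPort, OwningProcess
```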

 

 


 

We can now install and configure BizTalk Server on the BizTalk VM, against the SQL Server instance created before.

  1. Install BizTalk Server on the client node. Install BizTalk Server 2020 - BizTalk Server | Microsoft Learn
  2. Configure BizTalk Server on the client node, joining it to the existing SSO system created in the previous steps. Configure using Basic or Custom configuration - BizTalk Server | Microsoft Learn

 

It is very important that each health probe port is unique and not in use on either node, and that the health probe setting matches the setting in the clustered IP address. If, for example, you set the health probe to port 135, the Azure ILB will consider both backend servers online and send traffic to both nodes, which will not work in a clustered scenario where all traffic must go to a single node. 
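
A quick way to check this behavior is to test the probe port from another VM in the VNet: the clustered IP resource opens the probe port only on the node that currently owns the role, so exactly one node should answer. The node names SQLNODE1/SQLNODE2 below are examples:

```powershell
# Only the node that owns the clustered role should report TcpTestSucceeded = True
Test-NetConnection -ComputerName SQLNODE1 -Port 59999 |
    Select-Object ComputerName, TcpTestSucceeded
Test-NetConnection -ComputerName SQLNODE2 -Port 59999 |
    Select-Object ComputerName, TcpTestSucceeded
```

If both nodes answer, the probe port is in use by something other than the cluster IP resource, and the ILB will spray traffic across both nodes.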

 

For production loads, it is recommended to use higher-performance disk SKUs for all disks (OS disk, SQL data disks, MSDTC data disks) to avoid throttling and performance issues. When you migrate from an on-premises SAN to Azure, it is essential to measure disk performance to ensure that you have the same or better disk speed. For some really high loads you may need 10+ high-performance Azure data disks in your SQL cluster. You can use tools like DiskSpd to compare your current SAN disk speed with Azure disk speed. 
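
As a sketch, a DiskSpd run like the following can be executed against both the on-premises SAN volume and the Azure data disk to compare throughput and latency; the block size, duration, thread count, read/write mix and drive letter are example values you should tune to your actual SQL I/O profile:

```powershell
# 60-second test: 64 KiB random I/O, 70/30 read/write, 4 threads with
# 8 outstanding I/Os each, 10 GiB test file, caching disabled (-Sh).
# F:\ is an example data-disk drive letter.
.\diskspd.exe -b64K -d60 -o8 -t4 -r -w30 -Sh -c10G F:\disktest.dat
```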

 

While this article discusses how to make SQL, EntSSO, MSDTC and BizTalk highly available on Azure VMs, the same applies to Message Queuing (MSMQ): you can use an Azure shared disk with an Azure ILB in the same way to get a clustered MSMQ instance. 

 

Last update: Mar 06 2024 08:00 AM