Configure LDAPS within Azure VMware Solution
Published Feb 16 2023 04:05 AM

Implementing LDAPS identity integration with Azure VMware Solution

Azure VMware Solution (AVS) offers a fully managed software-defined data center based on VMware vSphere technologies, delivered as an Azure PaaS service. The service is functionally equivalent to the on-premises VMware deployment you may have been using for years, with some specific “restrictions”: because Microsoft provides a service level agreement, Microsoft is responsible for ensuring a robust and resilient platform in deployment and operation. As a result of this PaaS nature, there are significant technical differences to properly consider.


Run-Commands for high(er) privileged operations

As a consumer of Azure VMware Solution, you are granted limited administrative privileges aligned with the PaaS nature of the service: you are assigned the CloudAdmin role, which holds a subset of the full administrator role. Because higher privileges are sometimes needed, Microsoft enables a controlled form of “privilege escalation” through a feature in the Azure portal blades for AVS called “Run-Commands”. Run-Commands allow you to perform a selected number of high-privileged actions without submitting a service request through Azure Support. More details on Run-Commands can be found on Microsoft Learn.


To integrate your Azure VMware Solution SDDC with a directory services provider, in our case Active Directory, we will use a Run-Command: New-LDAPSIdentitySource.


Our Azure VMware Solution lab environment

To make the prerequisites and parameters required to implement the LDAPS identity integration easier to follow, we created a lab environment with “understandable” values, shown in the picture below. Using this picture as a reference, you can implement LDAPS integration by replacing the prerequisite values and parameters (displayed in bold throughout this document) with the values applicable to your environment.

Our lab consists of:

Required resources:  
One Azure VMware Solution Private Cloud called avs-fta-gwc
One Active Directory forest and domain called
Two Active Directory domain controllers called avs-gwc-dc001 and avs-gwc-dc002
One Active Directory hosted enterprise root certificate authority called avsemea-avs-gwc-rootca
One Azure Virtual WAN called avs-germanywestcentral-vwan1
One jumpbox virtual machine used for management activities called avs-gwc-jumpbox
One Azure Virtual Network called avs-gwc-172_16_11_0-24
This vnet holds three separate subnets called SN-172_16_11_0-26-ADDS, AzureBastionSubnet and SN-172_16_11_128-26

All virtual machines used in the lab are joined to the above-mentioned domain. Our lab environment also holds some additional resources that are not required for the LDAPS integration; they will be used in future articles on other Azure VMware Solution related topics.

Optional resources for future use:  
Two NSX-T segments inside our AVS SDDC called NSX-SN-192-168-200-0-24 and NSX-SN-192-168-201-0-24
One virtual machine for forwarding additional metrics from our AVS SDDC to the Azure Metrics infrastructure called avs-gwc-metrics001
One Azure NetApp Files account that will be used as extensible storage for our AVS SDDC in a later article called avs-gwc-anfaccount001


A graphical representation of our lab environment is shown below:




While guiding you through the process of gathering all required details and artifacts and using them to complete the LDAPS integration for AVS, we assume that all resources listed under “Required resources” are available and that sufficient access permissions are in place.


Configure DNS forwarding prerequisite

Before we can configure integration with an external identity store (e.g. Active Directory Domain Services) we need to make sure that the AVS platform components have the ability to resolve customer DNS zones hosting the LDAPS domain records. This configuration must be made through the Azure Portal blades for Azure VMware Solution.


Open the Azure VMware Solution DNS configuration pane

The first step in configuring DNS name resolution from the Azure VMware Solution networks (management and workload segments) is to add a DNS zone for Azure VMware Solution. Log in to the Azure Portal and select the Azure Active Directory tenant and Azure subscription where you have deployed your Azure VMware Solution Private Cloud.



As depicted in the image above:

  1. Click on “Azure VMware Solution” in the main navigation pane;
  2. Click the Azure VMware Solution Private Cloud you want to configure. In our scenario walk-through we select avs-fta-gwc, as described in our lab environment;
  3. In the navigation pane that now opened, scroll down to the section “Workload networking” and select “DNS”. The Azure Portal blade for configuring the Azure VMware Solution DNS configuration will now open.

Configure the required DNS zones details

The Azure VMware Solution DNS configuration pane opens the section where “conditional DNS forwarder” zones are configured.




As shown in the image above:

  1. Click “+ Add” in the top navigation structure. A new configuration panel will appear on the right side of your browser window;
  2. Under (DNS zone) Type, select the “FQDN zone” option;
  3. Under DNS zone name, enter a descriptive name used within NSX-T to identify the DNS zone. We recommend using the DNS FQDN of the zone used by your LDAP/S identity infrastructure. In our scenario the DNS zone name we use is;
  4. Under Domain, enter the DNS zone FQDN we will be forwarding traffic for. In our scenario we use here as well;
  5. Under DNS server IP, enter the IP addresses of your DNS servers. It is recommended to use DNS servers that are situated inside of Azure. In our scenario we will use the IP addresses of our two domain controllers: and;
  6. Click the “OK” button to create the required DNS FQDN zone in NSX-T.

After a few minutes, the DNS FQDN zone will be listed in the DNS blade for your Azure VMware Solution Private Cloud as shown below:
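If you prefer scripting over the portal, the same conditional forwarder zone can be created with the Az.VMware PowerShell module. This is a sketch only: the exact parameter names of `New-AzVMwareWorkloadNetworkDnsZone` can differ between module versions, and all bracketed values are placeholders you must replace; verify with `Get-Help New-AzVMwareWorkloadNetworkDnsZone` before use.

```powershell
## Sketch: create the NSX-T conditional forwarder DNS zone via the Az.VMware module.
## Assumptions: Az.VMware is installed and Connect-AzAccount has been run;
## parameter names may differ between module versions. Bracketed values are placeholders.
Install-Module -Name Az.VMware -Scope CurrentUser -Force

New-AzVMwareWorkloadNetworkDnsZone `
    -ResourceGroupName "<avs-resource-group>" `
    -PrivateCloudName "avs-fta-gwc" `
    -DnsZoneName "<zone-name>" `
    -DisplayName "<zone-name>" `
    -Domain "<your-domain-fqdn>" `
    -DnsServerIP "<dns-server-ip>"
```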



Attach the DNS zone configuration to the NSX-T DNS forwarder service

Now that we have created the DNS conditional forwarder zone, we need to attach it to the NSX-T DNS service running in Azure VMware Solution so that NSX-T will actually use it.



In the Azure VMware Solution DNS blade:

  1. Click “DNS service” at the top of the blade;
  2. Click “Edit” to enable configuration changes to the NSX-T DNS service;
  3. In the “Edit DNS service” screen, open the “FQDN zones” drop down and select the FQDN zone you want to attach to the service. In our scenario we select the zone;
  4. Click “OK” to save the configuration change.

When the change to the NSX-T DNS service has been applied, the DNS zone will be listed in the DNS service configuration:




After this step the configuration of the Azure VMware Solution DNS service is complete. The following sections describe the detailed steps for configuring the LDAPS integration through the Azure Portal, with automation where possible.


Implement LDAPS integration

The following sections will guide you through the required process step-by-step.


Check domain controller certificates and LDAPS

It is important to validate that the identity provider (in most cases Active Directory) is configured to support LDAPS based authentication requests before continuing with the LDAPS integration setup.

For each domain controller that is designated for use with Azure VMware Solution, check if the required certificate(s) with the “server authentication” intended purpose are present in the computer certificate store. The “Certificates” snap-in for the Microsoft Management Console (mmc.exe) offers a simple way to do this.

After locally or remotely opening the computer certificate store:

  1. Expand the “Personal” certificate store and click “Certificates”;
  2. Check if the store contains certificates with the “Server Authentication” intended purpose.
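As an alternative to the MMC snap-in, the same check can be scripted against the local computer store. A small sketch using the built-in certificate provider; filtering on the Server Authentication EKU OID ( avoids depending on the English friendly name:

```powershell
## List certificates in the computer's Personal store that carry the
## "Server Authentication" enhanced key usage (OID
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.EnhancedKeyUsageList.ObjectId -contains "" } |
    Select-Object Subject, NotBefore, NotAfter, Thumbprint
```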

Available certificates for



Available certificates for


Next we need to verify whether the Active Directory domain controllers are configured to offer LDAPS services:

  1. Open a “Command prompt” window;
  2. Run the command “netstat -a”;
  3. Verify that there is an active listener for TCP port 636.
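The same verification can also be run remotely from the jumpbox with the built-in Test-NetConnection cmdlet. A sketch using our lab's domain controller host names (replace them with yours):

```powershell
## Verify that each domain controller accepts connections on the LDAPS port (TCP 636)
$domainControllers = "avs-gwc-dc001", "avs-gwc-dc002"   ## replace with your DC host names
foreach ($dc in $domainControllers) {
    $result = Test-NetConnection -ComputerName $dc -Port 636
    "{0}: LDAPS reachable = {1}" -f $dc, $result.TcpTestSucceeded
}
```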

LDAPS listener for


LDAPS listener for



Extract (correct) domain controller certificates

As a next step, the certificates used by the domain controllers for LDAPS services need to be extracted from the domain controllers. This is done by executing the script shown below, which ensures we extract the correct certificate: the one actually “attached” to the LDAPS service. When multiple certificates with the “Server Authentication” intended purpose are available, the domain controller selects one of them to be used for LDAPS.

With the following commands we connect to the required domain controllers (in our scenario, these are and) and use OpenSSL to extract the required certificates. You will need to specify the path of your openssl.exe; the currently tested version is 3.0, installed with Chocolatey, and our install path is C:\Program Files\OpenSSL-Win64\bin\openssl.exe. You will also need to update the export path to suit your needs; we chose to use c:\certTemp.

$openSSLFilePath may need to be changed, was installed using chocolatey in our example.
$remoteComputers would need to be changed to suit your environment.
$exportFolder can be changed to something more suitable for your environment.

## Get certs
$openSSLFilePath = "C:\Program Files\OpenSSL-Win64\bin\openssl.exe"
$remoteComputers = "",""   ## replace with the FQDNs of your domain controllers
$port = "636"
$exportFolder = "c:\certTemp\"

foreach ($computer in $remoteComputers) {
    ## Connect to the LDAPS port and capture the certificate chain the server presents
    $output = echo "1" | & $openSSLFilePath "s_client" "-connect" "$computer`:$port" "-showcerts" | Out-String
    $Matches = $null
    ## Capture the CN of the server (leaf) certificate
    $null = $output -match "0 s\:CN = (?<cn>.*?)\r\n"
    $cn = $
    $Matches = $null
    ## Extract the first PEM block (the server certificate) and export it
    $certs = Select-String -InputObject $output -Pattern "(?s)(?<cert>-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----)" -AllMatches
    $cert = $certs.Matches[0]
    $exportPath = $exportFolder + ($computer.Split(".")[0]) + ".cer"
    $cert.Value | Out-File $exportPath -Encoding ascii
}



Validate domain controller certificate requirements

The next step is to ensure that the certificate extraction was performed successfully. This will always be a manual step in the process.


As displayed in the image above:

  1. Open Windows Explorer (or use the window opened in the previous step, if still open) and browse to the location the certificates were extracted to. In our scenario the folder is c:\certTemp;
  2. Open the certificates by right-clicking and selecting “Open”

For each of the certificates:

  1. Select the “General” tab at the top;
  2. Verify that “Proves your identity to a remote computer” is available as an intended purpose;
  3. Verify that the domain controller's fully qualified domain name (FQDN) is present in the “Issued to” field. In our scenario: and
  4. Verify that the certificate is still valid by checking the “Valid from” field.


To complete the certificate verification process, for each certificate:

  1. Click the “Certification Path” tab at the top;
  2. Verify that the full certificate authority chain is available in the “Certification path” field. In our scenario the field contains the name of the root certification authority avsemea-avs-gwc-rootca and the full certificate name for the respective domain controller the certificate is issued to. In our scenario and
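The manual checks above can be complemented with a scripted validation of the exported files. This sketch loads each .cer from the export folder and attempts to build its chain against the trusted roots of the machine you run it on (it assumes the root CA certificate, avsemea-avs-gwc-rootca in our lab, is trusted there):

```powershell
## Validate each exported certificate: subject, validity window and chain building
Get-ChildItem -Path "c:\certTemp" -Filter *.cer | ForEach-Object {
    $cert  = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $_.FullName
    $chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
    [pscustomobject]@{
        File     = $_.Name
        Subject  = $cert.Subject
        NotAfter = $cert.NotAfter
        ChainOk  = $chain.Build($cert)   ## $true when the full CA chain resolves
    }
}
```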

Create Azure Storage account

The following content is divided into two sub-sections. One section will describe how the required procedure is performed manually through the Azure Portal and the other section describes a way to perform the required steps through automation.


Manual deployment

As part of this manual process a storage account will be created that is used to store the domain controller certificates for later use by the “New-LDAPSIdentitySource” run-command.


In the Azure Portal:

  1. Click on “Storage accounts” in the left navigation panel;
  2. Click “+ Create” at the top to create a new storage account. The “Create a storage account” blade will now be displayed.


On the “basics” tab:

  1. In the “Subscription” drop-down menu make sure you select the subscription where Azure VMware Solution has been deployed into. In our scenario this is the Azure CXP FTA Internal – AZFTA subscription;
  2. Select the resource group you want to create the storage account into. In our case avs-germanywestcentral-operational_rg.
  3. In the “Storage account name” box, enter a globally unique name for the storage account. We recommend using a descriptive name postfixed with a GUID. In our scenario we used avsgwcsa14a2c2db;
  4. In the “Region” drop-down menu be sure to select the same region as where Azure VMware Solution is deployed. In our scenario Germany West Central.
  5. As this storage account has no need for changing any of the default settings, click “Review” at the bottom of the screen.


On the review screen:

  1. Double-check the values for “Subscription”, “Resource Group”, “Location” and “Storage account name”;
  2. Click “Create”. After a few minutes the creation of the storage account should be completed:



Automated deployment

With these commands, we check whether the Az.Storage module is installed, install it if missing, and then continue. The script creates the required storage account or uses an existing one. The $storageAccountName and $resourceGroupLocation variables can be updated as needed. These scripts are designed to be run in sections, one after another, to ensure the variable names are correctly referenced.

$resourceGroupLocation will need to be updated to your desired location.
$storageRgName will need to be updated.
$storageAccountName will need to be updated.


## Do you have the Azure Module installed?
if (Get-Module -ListAvailable -Name Az.Storage) {
    Write-Output "Module exists"
} else {
    Write-Output "Module does not exist"
    Write-Output "Installing module"
    Install-Module -Name Az.Storage -Scope CurrentUser -Force -AllowClobber
}

## create storage account

$resourceGroupLocation = "germanywestcentral"
$storageRgName = "avs-$resourceGroupLocation-operational_rg"
## Storage account variables
$guid = New-Guid
$storageAccountName = "avsgwcsa"
$storageAccountNameSuffix = $guid.ToString().Split("-")[0]
$storageAccountName = (($storageAccountName.replace("-",""))+$storageAccountNameSuffix )
## Define tags to be used if needed
## tags can be modified to suit your needs, another example below.
#$tags = @{"Environment"="Development";"Owner"="FTA";"CostCenter"="123456"}
$tags = @{"deploymentMethod"="PowerShell"; "Technology"="AVS"}
## create storage account
$saCheck = Get-AzStorageAccount -ResourceGroupName $storageRgName -Name $storageAccountName -ErrorAction SilentlyContinue
if ($null -eq $saCheck) {
    New-AzStorageAccount -ResourceGroupName $storageRgName -Name $storageAccountName -Location $resourceGroupLocation -SkuName Standard_LRS -Kind StorageV2 -EnableHttpsTrafficOnly $true -Tags $tags
    Write-Output "Storage account created: $storageAccountName"
} else {
    Write-Output "Storage account already exists"
}



Create Storage Blob container

The next step is to create a blob container to help structure/organize the resources required for the LDAPS identity integration. The following content is divided into two sub-sections. One section will describe how the required procedure is performed manually through the Azure Portal and the other section describes a way to perform the required steps through automation.


Manual deployment


In the Azure Portal:

  1. In the left navigation pane, click “Storage accounts”;
  2. In the list of storage accounts, select the storage account created in the previous step. In our scenario avsgwcsa14a2c2db;
  3. Select “Containers” under the “Data storage” category;
  4. Click “+ Container” to create a new container in the storage account.


In the “New container” dialog:

  1. In the “Name” field, type a descriptive name for the new container. In our scenario ldaps-blog-post.
  2. Click “Create”. The new container should now be created and displayed in the “Containers” view for the storage account:


Automated deployment

With these commands, we will create the container to which the earlier exported certificates will be uploaded; this container is needed when creating the SAS tokens for the AVS LDAPS run-command.

$containerName will need to be updated

## create container
$containerName = "ldaps-blog-post"
$containerCheck = Get-AzStorageContainer -Name $containerName -Context (Get-AzStorageAccount -ResourceGroupName $storageRgName -Name $storageAccountName).Context -ErrorAction SilentlyContinue
if ($null -eq $containerCheck) {
    New-AzStorageContainer -Name $containerName -Context (Get-AzStorageAccount -ResourceGroupName $storageRgName -Name $storageAccountName).Context
    Write-Output "Storage container created: $containerName"
} else {
    Write-Output "Container already exists"
}



Upload domain controller certificates

The next step is to upload the domain controller certificates from the temporary folder where they were extracted into the newly created ldaps-blog-post container in the avsgwcsa14a2c2db storage account. As before, the content is divided into two sub-sections: one describes the manual procedure through the Azure Portal, the other an automated way to perform the required steps.


Manual deployment

As the first step, open the ldaps-blog-post container in the storage account:


  1. In the Azure Portal, navigate to the avsgwcsa14a2c2db storage account created earlier and select “Containers”;
  2. Click the ldaps-blog-post container.

We will now upload the certificates into the container:


  1. In the ldaps-blog-post container, select “Overview”;
  2. In the top navigation, click “Upload”;
  3. In the “Upload blob” panel, click the folder icon to select the certificates from your local hard drive.


Navigate to the folder where the certificates have been extracted to (in our scenario c:\certTemp) and:

  1. Select all the certificates;
  2. Click “Open”.

The “Open” dialog will close and return you to the Azure Portal “Upload blob” panel. Click “Upload” at the bottom of the screen:


The certificates will now be uploaded into the blob container. The process is complete when green checkmarks are shown for each certificate uploaded:



Automated deployment

With these commands, we will upload the actual certificates into the previously created container. In this example we are using “ldaps-blog-post”.

## upload certs to container
$certs = Get-ChildItem -Path $exportFolder -Filter *.cer
$storageContext = (Get-AzStorageAccount -Name $storageAccountName -ResourceGroupName $storageRgName).Context
foreach ($item in $certs) {
    $localFilePath = $item.FullName
    $azureFileName = $item.Name
    Get-AzStorageAccount -Name $storageAccountName -ResourceGroupName $storageRgName | Get-AzStorageContainer -Name $containerName | Set-AzStorageBlobContent -File $localFilePath -Blob $azureFileName
}




Generate SAS tokens for domain controller certificates

As part of this step we will generate “shared access tokens” for all the certificates uploaded into the blob container so they can be accessed through the run-command for implementing the LDAPS integration for vCenter. One section describes how the required procedure is performed manually through the Azure Portal and the other section describes a way to perform the required steps through automation.


Manual deployment

The manual deployment continues within the same blade in the Azure Portal where the previous step left off. For each of the uploaded certificates, generate a “SAS token”:


For each certificate separately:

  1. Select the checkbox in front of the certificate;
  2. Click the ellipsis at the end of the line;
  3. Select “Generate SAS”.

In the following screen:


  1. Make sure to select a proper validity period for the SAS token. By default the SAS token is valid for 8 hours, which should be sufficient when performing the configuration in one continuous session;
  2. Click “Generate SAS token and URL”. After clicking “Generate SAS token and URL” an additional section will be displayed at the bottom of the screen:


Be sure to temporarily copy the “Blob SAS URL” generated for each certificate into a text file, as the URLs will need to be concatenated into a single comma-separated string for use during the execution of the run-command, as explained in the “Execute Run-Command” step.
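The concatenation itself can be scripted. A small sketch; the two URL values are placeholders for the Blob SAS URLs you copied:

```powershell
## Join the copied Blob SAS URLs into the single comma-separated string
## expected by the New-LDAPSIdentitySource run-command
$sasUrls = @(
    "https://<storageaccount>",   ## placeholder: Blob SAS URL of first certificate
    "https://<storageaccount>"    ## placeholder: Blob SAS URL of second certificate
)
$sslCertificateSasUrl = $sasUrls -join ","
$sslCertificateSasUrl
```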


Automated deployment

With these commands, we will generate the SAS tokens needed for the next steps; please note down BOTH tokens. In this script the tokens are valid for 24 hours, which can be modified to suit your needs.

$containerName will need to be updated

## create SAS token
$containerName = "ldaps-blog-post"
$blobs = Get-AzStorageBlob -Container $containerName -Context $storageContext | Where-Object { $_.Name -match "\.cer$" }
foreach ($blob in $blobs) {
    $StartTime = Get-Date
    $EndTime = $StartTime.AddHours(24.0)
    $sasToken = New-AzStorageBlobSASToken -Container $containerName -Blob $blob.Name -Permission rwd -StartTime $StartTime -ExpiryTime $EndTime -Context $storageContext -FullUri
    Write-Host "SASToken created: $sasToken"
}



Execute Run-Command

These steps are, for now, run manually from the Azure Portal. The feature is found in “Azure VMware Solution” under Operations > Run command; there, select “New-LDAPSIdentitySource”. An automated way of executing the run-command is in the making; this article will be updated as soon as it is available. Navigate to the Azure Portal and ensure you are on the AVS Private Cloud blade:

  1. In the AVS Private Cloud blade, click “Run Command”;
  2. Ensure “Packages” is selected;
  3. Click “New-LDAPSIdentitySource”;
  4. Ensure the correct run command is open before populating the required information.


Populate the information as needed.

The SSLCertificateSasUrl parameter is a single string consisting of the SAS token URLs separated by a “,” and pasted as one long string. For example:[Removed],[Removed]

The other values need to be updated as per your environment. BaseDNGroups and BaseDNUsers are the Distinguished Names for the groups and users. These can be found using the ADSI Edit tool by navigating to the required groups within Active Directory and noting down the values. More information about Distinguished Names can be found on Microsoft Learn.

For BaseDNGroups and BaseDNUsers, pay attention to the values used, as these should be under the same tree. In this example, “OU=Corp,DC=avsemea,DC=com”.
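Rather than reading the Distinguished Names out of ADSI Edit by hand, they can also be queried. A sketch assuming the RSAT ActiveDirectory module is available on the jumpbox; the OU name “Corp” matches the example above:

```powershell
## Look up the distinguished name of the OU that holds your users and groups
Import-Module ActiveDirectory
Get-ADOrganizationalUnit -Filter 'Name -eq "Corp"' |
    Select-Object Name, DistinguishedName
```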




Once all information is entered, run the command. You can check the status of the run command within the portal. Navigate to the Azure Portal and ensure you are on the AVS Private Cloud blade:

  1. In the AVS Private Cloud blade, click “Run Command”;
  2. Ensure “Run execution status” is selected;
  3. Click the latest run execution applicable to your environment. In this case, New-LDAPSIdentitySource-Exec33;
  4. Ensure the execution name (3.) you click is in the running state.


You can check the status of the job; wait for all tasks to complete and for the output to show that your selected group was added to CloudAdmins. Once you have clicked the Execution name link:

  1. Check the information tab to see the steps being executed by the script;


You can also check the output and ensure that the correct name was added. Once you have clicked the Execution name link:

  1. Check the “Output” tab to see the results returned by the script;


You can now log on with LDAP-based credentials, using the vCenter Server web client URL found under “VMware credentials”. Navigate to the Azure Portal and ensure you are on the AVS Private Cloud blade:

  1. In the AVS Private Cloud blade, click “VMware credentials”;
  2. Click the copy icon for the “Web client URL” under “vCenter Server credentials”.



Validate LDAPS integration

Navigate to the URL with your browser of choice and use your Directory Services based credentials to ensure that login and authentication are working as expected.
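If you also want to verify the login non-interactively, a quick sketch with VMware PowerCLI (an assumption on our part: PowerCLI is not part of this article's tooling, and the server URL and credentials are placeholders):

```powershell
## Sketch: test an LDAP-based login against vCenter with VMware PowerCLI
Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Force
Connect-VIServer -Server "<vcenter-web-client-fqdn>" -User "<NetBIOS-domain>\<username>" -Password "<password>"
## A successful connection confirms that LDAPS authentication works end to end
Disconnect-VIServer -Server "<vcenter-web-client-fqdn>" -Confirm:$false
```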


Logging on to Azure VMware Solution components with LDAP-based credentials is now possible and should be working as expected.

All the PowerShell code can be found here - LDAPS code snippets 

Version history
Last update: Feb 02 2023 06:55 AM