Windows Server
Requesting and Installing an SSL Certificate for Internet Information Server (IIS)
Generate a Certificate Signing Request (CSR)

Generate the request using the Certificates snap-in in Microsoft Management Console (MMC).

Step 1: Open the Certificates Snap-In
1. Press Windows + R, type mmc, and press Enter.
2. Go to File > Add/Remove Snap-in.
3. Select Certificates and click Add.
4. Choose Computer account, then click Next.
5. Select Local computer and click Finish.
6. Click OK to close the Add/Remove window.

Step 2: Start the CSR Wizard
1. In the left pane, expand Certificates (Local Computer).
2. Right-click Personal and select All Tasks → Advanced Operations → Create Custom Request.

Step 3: Configure the Request
1. On the Certificate Enrollment page, click Next.
2. Select Proceed without enrollment policy and click Next.
3. On the "Certificate Information" page, expand Details and click Properties.
4. On the General tab, enter a friendly name, e.g., WS25-IIS Certificate.
5. On the Subject tab, under Subject name, choose Common Name, enter the fully qualified domain name (FQDN), e.g., ws25-iis.windowserver.info, and click Add. Under Alternative name, choose DNS, enter the same FQDN, and click Add.
6. On the Extensions tab, under Key Usage, ensure Digital Signature and Key Encipherment are selected. Under Extended Key Usage, add Server Authentication.
7. On the Private Key tab, under Cryptographic Provider, select RSA, Microsoft Software Key Storage Provider. Set Key size to 2048 bits. Check Make private key exportable and Allow private key to be archived.
8. Click Apply, then OK, and then Next.

Step 4: Save the Request
1. Choose a location to save the request file (e.g., C:\Temp).
2. Ensure the format is set to Base 64.
3. Provide a filename such as SSLRequest.req.
4. Click Finish.

You can open the file in Notepad to verify the Base64-encoded request text.

Submit the CSR to a Certification Authority

You can use an internal Windows CA or a public CA. The example below assumes a web enrollment interface.

Step 1: Open the CA Web Enrollment Page
Navigate to your CA's enrollment site.
If the server does not trust the CA, you may receive a warning. Acknowledge the warning or install the CA certificate as needed.

Step 2: Submit an Advanced Certificate Request
1. Select Request a certificate.
2. Choose advanced certificate request.
3. Open the CSR in Notepad, copy the Base64 text, and paste it into the request form.
4. Click Submit.

Step 3: Approve the Request (if required)
If your CA requires approval, sign in to the CA server and approve the pending request.

Step 4: Download the Issued Certificate
1. Return to the CA web enrollment page.
2. View the status of pending requests.
3. Locate your request and select it.
4. Choose the Base 64 encoded certificate format.
5. Download the certificate.
6. Save it to a known location and rename it meaningfully (e.g., WS25-IIS-Cert.cer).

Install the SSL Certificate
1. Double-click the .cer file to open it.
2. Click Install Certificate.
3. Choose Local Machine as the store location.
4. When prompted for the store, select Place all certificates in the following store, then choose Personal.
5. Click Next, then Finish.
6. Confirm the success message by clicking OK.

The certificate is now imported and available for use by IIS.

Bind the Certificate in IIS

Step 1: Open IIS Manager
1. Open Server Manager or search for IIS Manager.
2. In the left pane, expand the server and select your website (e.g., Default Web Site).

Step 2: Add an HTTPS Binding
1. In the Actions pane, click Bindings.
2. In the Site Bindings window, click Add.
3. Set Type to https, set Hostname to the FQDN used in the certificate (e.g., ws25-iis.windowserver.info), and for SSL Certificate choose the certificate you installed (e.g., WS25-IIS Certificate).
4. Click OK, then Close.

Test the HTTPS Connection
1. Open Microsoft Edge (or your preferred browser).
2. Browse to the site using https:// and the FQDN, e.g., https://ws25-iis.windowserver.info.
3. Confirm you see the IIS default page (or your site's content).
4. Click the padlock in the address bar, verify the certificate is valid, and check the certificate details if desired.
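You can also sanity-check the binding and certificate from the server itself. The following is a sketch, assuming the example FQDN and friendly name used in this article (ws25-iis.windowserver.info, WS25-IIS Certificate) and the WebAdministration module that ships with IIS:

```powershell
# Sketch: verify the HTTPS binding and the installed certificate.
Import-Module WebAdministration

# List HTTPS bindings on the site
Get-WebBinding -Name 'Default Web Site' -Protocol https

# Confirm the certificate is in the Local Machine Personal store and not expired
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.FriendlyName -eq 'WS25-IIS Certificate' } |
    Select-Object Subject, NotAfter, Thumbprint

# Request the page over HTTPS (fails if the name or chain doesn't validate)
Invoke-WebRequest -Uri 'https://ws25-iis.windowserver.info' -UseBasicParsing |
    Select-Object StatusCode
```

If Invoke-WebRequest returns a 200 status with no certificate errors, the binding is working end to end.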
If the page loads securely without warnings, the certificate is installed and bound correctly.

Strengthening Azure File Sync security with Managed Identities
Hello Folks,

As IT pros, we're always looking for ways to reduce complexity and improve security in our infrastructure. One area that's often overlooked is how our services authenticate with each other, especially when it comes to Azure File Sync. In this post, I'll walk you through how Managed Identities can simplify and secure your Azure File Sync deployments, based on my recent conversation with Grace Kim, Program Manager on the Azure Files and File Sync team.

Why Managed Identities Matter

Traditionally, Azure File Sync servers authenticate to the Storage Sync service using server certificates or shared access keys. While functional, these methods introduce operational overhead and potential security risks. Certificates expire, keys get misplaced, and rotating credentials can be a pain. Managed Identities solve this by allowing your server to authenticate securely without storing or managing credentials. Once enabled, the server uses its identity to access Azure resources, and permissions are managed through Azure Role-Based Access Control (RBAC).

Using Azure File Sync with Managed Identities provides significant security enhancements and simpler credential management for enterprises. Instead of relying on storage account keys or SAS tokens, Azure File Sync authenticates using a system-assigned Managed Identity from Microsoft Entra ID (Azure AD). This keyless approach greatly improves security by removing long-lived secrets and reducing the attack surface. Access can be controlled via fine-grained Azure role-based access control (RBAC) rather than a broadly privileged key, enforcing least-privileged permissions on file shares. I believe that Azure AD RBAC is far more secure than managing storage account keys or SAS credentials. The result is a secure-by-default setup that minimizes the risk of credential leaks while streamlining authentication management.
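To make the RBAC point concrete, here is a sketch of granting a server's system-assigned identity access to a storage account with the Az PowerShell module. All resource names below are hypothetical placeholders, and the role shown is one example; check the Azure File Sync documentation for the exact roles your deployment needs:

```powershell
# Sketch: grant a VM's system-assigned managed identity a data role on a storage account.
# 'rg-filesync', 'fs-server01', and 'fsstorage01' are placeholder names.
$vm = Get-AzVM -ResourceGroupName 'rg-filesync' -Name 'fs-server01'

New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId `
    -RoleDefinitionName 'Storage File Data SMB Share Contributor' `
    -Scope '/subscriptions/<sub-id>/resourceGroups/rg-filesync/providers/Microsoft.Storage/storageAccounts/fsstorage01'
```

Scoping the assignment to a single storage account (rather than the subscription) keeps the identity least-privileged.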
Managed Identities also improve integration with other Azure services and support enterprise-scale deployments. Because authentication is unified under Azure AD, Azure File Sync's components (the Storage Sync Service and each registered server) seamlessly obtain tokens to access Azure Files and the sync service without any embedded secrets. This design fits into common Azure security frameworks and encourages consistent identity and access policies across services. In practice, the File Sync managed identity can be granted appropriate Azure roles to interact with related services (for example, allowing Azure Backup or Azure Monitor to access file share data) without sharing separate credentials.

At scale, organizations benefit from easier administration. New servers can be onboarded by simply enabling a managed identity (on an Azure VM or an Azure Arc–connected server) and assigning the proper role, avoiding complex key management for each endpoint. Azure's logging and monitoring tools also recognize these identities, so actions taken by Azure File Sync are transparently auditable in Azure AD activity logs and storage access logs. Given these advantages, new Azure File Sync deployments now enable Managed Identity by default, underscoring a shift toward identity-based security as the standard practice for enterprise file synchronization. This approach ensures that large, distributed file sync environments remain secure, manageable, and well-integrated with the rest of the Azure ecosystem.

How It Works

When you enable Managed Identity on your Azure VM or Arc-enabled server, Azure automatically provisions an identity for that server. This identity is then used by the Storage Sync service to authenticate and communicate securely. Here's what happens under the hood:

- The server receives a system-assigned Managed Identity.
- Azure File Sync uses this identity to access the storage account.
- No certificates or access keys are required.
- Permissions are controlled via RBAC, allowing fine-grained access control.

Enabling Managed Identity: Two Scenarios

Azure VM. If your server is an Azure VM:
1. Go to the VM settings in the Azure portal.
2. Enable System Assigned Managed Identity.
3. Install Azure File Sync.
4. Register the server with the Storage Sync service.
5. Enable Managed Identity in the Storage Sync blade.

Once enabled, Azure handles the identity provisioning and permissions setup in the background.

Non-Azure VM (Arc-enabled). If your server is on-prem or in another cloud:
1. First, make the server Arc-enabled.
2. Enable System Assigned Managed Identity via Azure Arc.
3. Follow the same steps as above to install and register Azure File Sync.

This approach brings parity to hybrid environments, allowing you to use Managed Identities even outside Azure.

Next Steps

If you're managing Azure File Sync in your environment, I highly recommend transitioning to Managed Identities. It's a cleaner, more secure approach that aligns with modern identity practices.

✅ Resources
📚 https://learn.microsoft.com/azure/storage/files/storage-sync-files-planning
🔐 https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview
⚙️ https://learn.microsoft.com/azure/azure-arc/servers/overview
🎯 https://learn.microsoft.com/azure/role-based-access-control/overview

🛠️ Action Items
1. Audit your current Azure File Sync deployments.
2. Identify servers using certificates or access keys.
3. Enable Managed Identity on eligible servers.
4. Use RBAC to assign appropriate permissions.

Let me know how your transition to Managed Identities goes. If you run into any snags or have questions, drop a comment.

Cheers!
Pierre

Installing a Standalone Root Certificate Authority & Web Enrollment on Windows Server 2025
In this post learn how to deploy a standalone root Certificate Authority (CA) on a Windows Server 2025 machine that is not joined to Active Directory. Also learn how to configure the web enrollment interface so clients can request certificates using a browser.

A standalone root CA is useful when:
- You only need certificates trusted by a limited set of machines.
- You don't want to obtain certificates from a commercial provider.
- You're preparing an offline root CA scenario (covered separately).

Install Active Directory Certificate Services (Standalone Root CA)
1. Open Server Manager.
2. Select Manage then Add Roles and Features.
3. Choose Role-based or feature-based installation.
4. Select the local server.
5. Check Active Directory Certificate Services.
6. Click Add Features when prompted.
7. Click Next through the wizard until the Role Services page.
8. Select Certification Authority only.
9. Click Install and wait for completion.

Configure the Certification Authority
1. In Server Manager, click the notification flag.
2. Select Configure Active Directory Certificate Services.
3. Enter credentials.
4. On Role Services, ensure Certification Authority is selected.
5. For Setup Type, select Standalone CA.
6. Choose Root CA on the CA Type page.
7. Select Create a new private key.
8. Increase the key length to 4096 and accept the other defaults.
9. Accept the default CA name (or customize if desired).
10. Keep the default certificate validity period (5 years).
11. Accept the default database locations.
12. Confirm the configuration and allow it to complete.
13. Open the Certification Authority console from Tools to verify the CA was created.

Create an SSL Certificate for Web Enrollment

The CA certificate itself doesn't include subject alternative names (SANs), so you need a separate SSL certificate for the website; otherwise web enrollment will throw errors.

1. Open PowerShell and switch to the root directory.
2. Create and enter a temp folder.
3. Use Notepad to create servercert.inf with details such as:

[Version]
Signature="$Windows NT$"

[NewRequest]
Subject="CN=ws25-sa-ca"
KeyLength=2048
KeySpec=1
KeyUsage=0xA0
MachineKeySet=TRUE
ProviderName="Microsoft RSA SChannel Cryptographic Provider"
RequestType=PKCS10
FriendlyName="IIS Server Cert"

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication

[Extensions]
2.5.29.17 = "{text}"
_continue_ = "dns=ws25-sa-ca"
; For additional DNS names, append "&" and add further dns= entries on _continue_ lines

4. Save the file.
5. Run certreq -new specifying the INF file and output a .req file:
certreq -new C:\temp\servercert.inf C:\temp\servercert.req
6. Submit the request by running `certreq -submit` with the request file:
certreq -submit -attrib "CertificateTemplate:WebServer" C:\temp\servercert.req C:\temp\servercert.cer
Select the standalone CA when prompted. The request will show as Pending.
7. Open the Certification Authority console.
8. Under Pending Requests, right-click the request and select All Tasks → Issue.
9. Retrieve the certificate using `certreq -retrieve` with the request ID, outputting a `.cer` file:
certreq -retrieve 2 C:\temp\servercert_issued.cer
10. Install the issued certificate with `certreq -accept` or by double-clicking:
certreq -accept C:\temp\servercert_issued.cer

Install the Web Enrollment Feature
1. Open Add Roles and Features again in Server Manager.
2. Click Next until the Server Roles page.
3. Expand Active Directory Certificate Services.
4. Select Certification Authority Web Enrollment.
5. Click Next and proceed. This also installs IIS automatically.
6. When finished, click Close.
7. Run Configure Active Directory Certificate Services again.
8. Select Certification Authority Web Enrollment and click Configure.

Bind the SSL Certificate in IIS
1. Open IIS Manager.
2. Select Default Web Site.
3. In the Actions pane, choose Bindings.
4. Click Add.
5. Set Type to https.
6. Enter the server's hostname.
7. Select the SSL certificate you issued earlier (e.g., `IIS Server Cert`).
8. Click OK and close IIS Manager.

Access the Web Enrollment Page
1. Open a browser.
2. Navigate to: `https://<your-server-name>/certsrv` Example: `https://WS25-SA-CA/certsrv`
3. The Certificate Enrollment web interface should now load securely.

Hyper-V Virtual TPMs, Certificates, VM Export and Migration
Virtual Trusted Platform Modules (vTPM) in Hyper-V allow you to run guest operating systems, such as Windows 11 or Windows Server 2025, with security features enabled. One of the challenges of vTPMs is that they rely on certificates on the local Hyper-V server. Great if you're only running the VM with the vTPM on that server, but a possible cause of issues if you want to move that VM to another server. In this article I'll show you how to manage the certificates that are associated with vTPMs so that you'll be able to export or move VMs that use them, such as Windows 11 VMs, to any prepared Hyper-V host you manage.

When a vTPM is enabled on a Generation 2 virtual machine, Hyper-V automatically generates a pair of self-signed certificates on the host where the VM resides. These certificates are specifically named:

- "Shielded VM Encryption Certificate (UntrustedGuardian)(ComputerName)"
- "Shielded VM Signing Certificate (UntrustedGuardian)(ComputerName)"

These certificates are stored in a unique local certificate store on the Hyper-V host named "Shielded VM Local Certificates". By default, these certificates are provisioned with a validity period of 10 years. For a vTPM-enabled virtual machine to successfully live migrate and subsequently start on a new Hyper-V host, the "Shielded VM Local Certificates" (both the Encryption and Signing certificates) from the source host must be present and trusted on all potential destination Hyper-V hosts.

Exporting vTPM related certificates

You can transfer certificates from one Hyper-V host to another using the following procedure:
1. On the source Hyper-V host, open mmc.exe.
2. From the "File" menu, select "Add/Remove Snap-in..."
3. In the "Add or Remove Snap-ins" window, select "Certificates" and click "Add." Choose "Computer account" and then "Local Computer".
4. Navigate through the console tree to "Certificates (Local Computer) > Personal > Shielded VM Local Certificates".
5. Select both the "Shielded VM Encryption Certificate" and the "Shielded VM Signing Certificate."
6. Right-click the selected certificates, choose "All Tasks," and then click "Export".
7. In the Certificate Export Wizard, on the "Export Private Key" page, select "Yes, export the private key". The certificates are unusable for their intended purpose without their associated private keys.
8. Select "Personal Information Exchange - PKCS #12 (.PFX)" as the export file format.
9. Select "Include all certificates in the certification path if possible".
10. Provide a strong password to protect the PFX file. This password will be required during the import process.

To perform this process using the command line, display details of the certificates in the "Shielded VM Local Certificates" store, including their serial numbers:

certutil -store "Shielded VM Local Certificates"

Use the serial numbers to export each certificate, ensuring the private key is included. Replace <Serial_Number_Encryption_Cert> and <Serial_Number_Signing_Cert> with the actual serial numbers, and "YourSecurePassword" with a strong password:

certutil -exportPFX -p "YourSecurePassword" "Shielded VM Local Certificates" <Serial_Number_Encryption_Cert> C:\Temp\VMEncryption.pfx
certutil -exportPFX -p "YourSecurePassword" "Shielded VM Local Certificates" <Serial_Number_Signing_Cert> C:\Temp\VMSigning.pfx

Importing vTPM related certificates

To import these certificates on a Hyper-V host that you want to migrate a vTPM enabled VM to, perform the following steps:
1. Transfer the exported PFX files to all Hyper-V hosts that will serve as potential live migration targets.
2. On each target host, open mmc.exe and add the "Certificates" snap-in for the "Computer account" (Local Computer).
3. Navigate to "Certificates (Local Computer) > Personal."
4. Right-click the "Personal" folder, choose "All Tasks," and then click "Import".
5. Proceed through the Certificate Import Wizard.
6. Ensure the certificates are placed in the "Shielded VM Local Certificates" store.
7. After completing the wizard, verify that both the Encryption and Signing certificates now appear in the "Shielded VM Local Certificates" store on the new host.

You can accomplish the same thing using PowerShell with the following command:

Import-PfxCertificate -FilePath "C:\Backup\CertificateName.pfx" -CertStoreLocation "Cert:\LocalMachine\Shielded VM Local Certificates" -Password (ConvertTo-SecureString -String "YourPassword" -Force -AsPlainText)

Updating vTPM related certificates

Self-signed vTPM certificates automatically expire after 10 years. Resetting the key protector for a vTPM-enabled VM in Hyper-V allows you to change or renew the underlying certificates (especially if the private key changes). Here are the requirements and considerations around this process:

- The VM must be in an off state to change security settings or reset the key protector.
- The host must have the appropriate certificates (including private keys) in the "Shielded VM Local Certificates" store. If the private key is missing, the key protector cannot be set or validated.
- Always back up the VM and existing certificates before resetting the key protector, as this process can make previously encrypted data inaccessible if not performed correctly.
- The VM must be at a supported configuration version (typically version 7.0 or higher) to support vTPM and key protector features.

To save the current Key Protector: on the source Hyper-V host, retrieve the current Key Protector for the VM and save it to a file.

Get-VMKeyProtector -VMName 'VM001' | Out-File '.\VM001.kp'

To reset the key protector with a new local key protector:

Set-VMKeyProtector -VMName "<VMNAME>" -NewLocalKeyProtector

This command instructs Hyper-V to generate a new key protector using the current local certificates.
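Because migration depends on both certificates being present (and not expired) on every target host, a quick check of the store before moving a VM can save troubleshooting later. A minimal sketch using the store name described above:

```powershell
# Sketch: list the vTPM-related certificates on a host and confirm
# each has its private key and has not yet expired.
Get-ChildItem "Cert:\LocalMachine\Shielded VM Local Certificates" |
    Select-Object Subject, HasPrivateKey, NotAfter |
    Format-Table -AutoSize
```

Both the Encryption and Signing certificates should appear, each with HasPrivateKey set to True and a NotAfter date in the future.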
After resetting, enable vTPM if needed:

Enable-VMTPM -VMName "<VMNAME>"

It is important to note that if an incorrect Key Protector is applied to the VM, it may fail to start. In such cases, the Set-VMKeyProtector -RestoreLastKnownGoodKeyProtector cmdlet can be used to revert to the last known working Key Protector.

More information: Set-VMKeyProtector: https://learn.microsoft.com/en-us/powershell/module/hyper-v/set-vmkeyprotector

Install and run Azure Foundry Local LLM server & Open WebUI on Windows Server 2025
Foundry Local is an on-device AI inference solution offering performance, privacy, customization, and cost advantages. It integrates seamlessly into your existing workflows and applications through an intuitive CLI, SDK, and REST API.

Foundry Local has the following benefits:
- On-Device Inference: Run models locally on your own hardware, reducing your costs while keeping all your data on your device.
- Model Customization: Select from preset models or use your own to meet specific requirements and use cases.
- Cost Efficiency: Eliminate recurring cloud service costs by using your existing hardware, making AI more accessible.
- Seamless Integration: Connect with your applications through an SDK, API endpoints, or the CLI, with easy scaling to Azure AI Foundry as your needs grow.

Foundry Local is ideal for scenarios where:
- You want to keep sensitive data on your device.
- You need to operate in environments with limited or no internet connectivity.
- You want to reduce cloud inference costs.
- You need low-latency AI responses for real-time applications.
- You want to experiment with AI models before deploying to a cloud environment.

You can install Foundry Local by running the following command:

winget install Microsoft.FoundryLocal

Once Foundry Local is installed, you can download and interact with a model from the command line by using a command like:

foundry model run phi-4

This will download the phi-4 model and provide a text-based chat interface. If you want to interact with Foundry Local through a web chat interface, you can use the open source Open WebUI project. You can install Open WebUI on Windows Server by performing the following steps:

Download OpenWebUIInstaller.exe from https://github.com/BrainDriveAI/OpenWebUI_CondaInstaller/releases. You'll get warning messages from Windows Defender SmartScreen. Copy OpenWebUIInstaller.exe into C:\Temp.
In an elevated PowerShell session (the commands below set $env:Path, so use PowerShell rather than cmd), run the following:

winget install -e --id Anaconda.Miniconda3 --scope machine
$env:Path = 'C:\ProgramData\miniconda3;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Scripts;' + $env:Path
$env:Path = 'C:\ProgramData\miniconda3\Library\bin;' + $env:Path
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r
conda.exe tos accept --override-channels --channel https://repo.anaconda.com/pkgs/msys2
C:\Temp\OpenWebUIInstaller.exe

Then from the dialog choose to install and run Open WebUI. You then need to take several extra steps to configure Open WebUI to connect to the Foundry Local endpoint.

Enable Direct Connections in Open WebUI:
1. Select Settings and Admin Settings in the profile menu.
2. Select Connections in the navigation menu.
3. Enable Direct Connections by turning on the toggle. This allows users to connect to their own OpenAI compatible API endpoints.

Connect Open WebUI to Foundry Local:
1. Select Settings in the profile menu.
2. Select Connections in the navigation menu.
3. Select + by Manage Direct Connections.
4. For the URL, enter http://localhost:PORT/v1 where PORT is the Foundry Local endpoint port (use the CLI command foundry service status to find it). Note that Foundry Local dynamically assigns a port, so it isn't always the same.
5. For the Auth, select None.
6. Select Save.

➡️ What is Foundry Local: https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/what-is-foundry-local
➡️ Edge AI for Beginners: https://aka.ms/edgeai-for-beginners
➡️ Open WebUI: https://docs.openwebui.com/

Windows Server 2025 Hyper-V Workgroup Cluster with Certificate-Based Authentication
In this guide, we will walk through creating a 2-node or 4-node Hyper-V failover cluster where the nodes are not domain-joined, using mutual certificate-based authentication instead of NTLM or shared local accounts. Here we are going to leverage X.509 certificates for node-to-node authentication. If you don't use certificates, you can do this with NTLM, but we're avoiding that: NTLM is still supported, but the general recommendation is to deprecate it where you can. We can't use Kerberos because our nodes won't be domain joined. It's a lot easier to build Windows Server clusters if everything is domain joined, but that's not what we're doing here, because there are scenarios where people want each cluster node to be standalone (probably why you are reading this article).

Prerequisites and Environment Preparation

Before diving into configuration, ensure the following prerequisites and baseline setup:

- Server OS and roles: All cluster nodes must be running Windows Server 2025 (same edition and patch level). Install the latest updates and drivers on each node. Each node should have the Hyper-V role and Failover Clustering feature available (we will install these via PowerShell shortly).
- Workgroup configuration: Nodes must be in a workgroup, and all nodes should use the same workgroup name. All nodes should share a common DNS suffix so that they can resolve each other's FQDNs. For example, if your chosen suffix is mylocal.net, ensure each server's FQDN is NodeName.mylocal.net.
- Name resolution: Provide a way for nodes to resolve each other's names (and the cluster name). If you have no internal DNS server, use the hosts file on each node to map hostnames to IPs. At minimum, add entries for each node's name (short and FQDN) and the planned cluster name (e.g. Cluster1 and Cluster1.mylocal.net) pointing to the cluster's management IP address.
- Network configuration: Ensure a reliable, low-latency network links all nodes.
Ideally use at least two networks or VLANs: one for management/cluster communication and one dedicated for Live Migration traffic. This improves performance and security (live migration traffic can be isolated). If using a single network, ensure it is a trusted, private network since live migration data is not encrypted by default. Assign static IPs (or DHCP reservations) on the management network for each node and decide on an unused static IP for the cluster itself. Verify that necessary firewall rules for clustering are enabled on each node (Windows will add these when the Failover Clustering feature is installed, but if your network is classified Public, you may need to enable them or set the network location to Private).

- Time synchronization: Consistent time is important for certificate trust. Configure NTP on each server (e.g. pointing to a reliable internet time source or a local NTP server) so that system clocks are in sync.
- Shared storage: Prepare the shared storage that all nodes will use for Hyper-V. This can be an iSCSI target or an SMB 3.0 share accessible to all nodes. For iSCSI or SAN storage, connect each node to the iSCSI target (e.g. using the Microsoft iSCSI Initiator) and present the same LUN(s) to all nodes. Do not bring the disks online or format them on individual servers; leave them raw for the cluster to manage. For an SMB 3 file share, ensure the share is configured for continuous availability. Note: A file share witness for quorum is not supported in a workgroup cluster, so plan to use a disk witness or cloud witness instead.
- Administrative access: You will need Administrator access to each server. While we will avoid using identical local user accounts for cluster authentication, you should still have a way to log into each node (e.g. the built-in local Administrator account on each machine).
If using Remote Desktop or PowerShell Remoting for setup, ensure you can authenticate to each server (we will configure certificate-based WinRM for secure remote PowerShell). The cluster creation process can be done by running commands locally on each node to avoid passing NTLM credentials.

Obtaining and Configuring Certificates for Cluster Authentication

The core of our setup is the use of mutual certificate-based authentication between cluster nodes. Each node will need an X.509 certificate that the others trust. We will outline how to use an internal Active Directory Certificate Services (AD CS) enterprise CA to issue these certificates, and mention alternatives for test environments. We are using AD CS even though the nodes aren't domain joined. Just because the nodes aren't members of the domain doesn't mean you can't use an Enterprise CA to issue certificates; you just have to ensure the nodes are configured to trust the CA's certificates manually.

Certificate Requirements and Template Configuration

For clustering (and related features like Hyper-V live migration) to authenticate using certificates, the certificates must meet specific requirements:

- Key Usage: The certificate should support digital signature and key encipherment (these are typically enabled by default for SSL certificates).
- Enhanced Key Usage (EKU): It must include both Client Authentication and Server Authentication EKUs. Having both allows the certificate to be presented by a node as a client (when initiating a connection to another node) and as a server (when accepting a connection). For example, in the certificate's properties you should see Client Authentication (1.3.6.1.5.5.7.3.2) and Server Authentication (1.3.6.1.5.5.7.3.1) listed under "Enhanced Key Usage".
- Subject Name and SAN: The certificate's subject or Subject Alternative Name should include the node's DNS name. It is recommended that the Subject Common Name (CN) be set to the server's fully qualified DNS name (e.g. Node1.mylocal.net).
Also include the short hostname (e.g. Node1) in the Subject Alternative Name (SAN) extension (DNS entries). If you have already chosen a cluster name (e.g. Cluster1), include the cluster's DNS name in the SAN as well. This ensures that any node's certificate can be used to authenticate connections addressed to the cluster's name or the node's name. (Including the cluster name in all node certificates is optional but can facilitate management access via the cluster name over HTTPS, since whichever node responds will present a certificate that matches the cluster name in SAN.)

- Trust: All cluster nodes must trust the issuer of the certificates. If using an internal enterprise CA, this means each node should have the CA's root certificate in its Trusted Root Certification Authorities store. If you are using a standalone or third-party CA, similarly ensure the root (and any intermediate CA) is imported into each node's Trusted Root store.

Next, on your enterprise CA, create a certificate template for the cluster node certificates (or use an appropriate existing template):

- Template basis: A good starting point is the built-in "Computer" or "Web Server" template. Duplicate the template so you can modify settings without affecting defaults.
- General settings: Give the new template a descriptive name (e.g. "Workgroup Cluster Node"). Set the validity period (e.g. 1 or 2 years; plan a manageable renewal schedule since these certs will need renewal in the future).
- Compatibility: Ensure it's set for at least Windows Server 2016 or higher for both Certification Authority and Certificate Recipient to support modern cryptography.
- Subject Name: Since our servers are not domain-joined (and thus cannot auto-enroll with their AD computer name), configure the template to allow subject name supply in the request. In the template's Subject Name tab, choose "Supply in request" (this allows us to specify the SAN and CN when we request the cert on each node).
Alternatively, use the SAN field in the request; modern certificate requests will typically put the FQDN in the SAN.

- Extensions: In the Extensions tab, edit Key Usage to ensure it includes Digital Signature and Key Encipherment (these should already be selected by default for Computer templates). Then edit Extended Key Usage and make sure Client Authentication and Server Authentication are present. If using a duplicated Web Server template, add the Client Authentication EKU; if using the Computer template, both EKUs should already be there. Also enable private key export if your policy requires it (though generally private keys should not be exported; here each node will have its own cert, so export is not necessary except for backup purposes).
- Security: Allow the account that will be requesting the certificate to enroll. Since the nodes are not in AD, you might generate the CSR on each node and then submit it via an admin account. One approach is to use a domain-joined management PC or the CA server itself to submit the CSR, so ensure domain users (or a specific user) have Enroll permission on the template.
- Publish the template: On the CA, publish the new template so it is available for issuing.

Obtaining Certificates from the Enterprise CA

Now for each cluster node, request a certificate from the CA using the new template. To do this, on each node, create an INF file describing the certificate request. For example, Node1.inf might specify the Subject as CN=Node1.mylocal.net and include SANs for Node1.mylocal.net, Node1, Cluster1.mylocal.net, Cluster1. Also specify in the INF that you want the Client and Server Auth EKUs (since the template has them by default, it might not be necessary to list them explicitly). Then run:

certreq -new Node1.inf Node1.req

This generates a CSR file (Node1.req). Transfer this request to a machine where you can reach the CA (or use the CA web enrollment). Submit the request to your CA, specifying the custom template.
For example:

certreq -submit -attrib "CertificateTemplate:Workgroup Cluster Node" Node1.req Node1.cer

(Or use the Certification Authority MMC to approve the pending request.) This yields Node1.cer. Finally, import the issued certificate on Node1:

certreq -accept Node1.cer

This automatically places the certificate in the Local Machine Personal store along with its private key.

Using the Certificates MMC (if the CA web portal is available): On each node, open the Certificates (Local Computer) MMC and, under Personal > Certificates, initiate a new certificate request. Use the Active Directory Enrollment Policy if the node can reach the CA's web enrollment (even if not domain-joined, you can often authenticate with a domain user account for enrollment). Select the custom template, supply the DNS names, and complete the enrollment to obtain the certificate in the Personal store.

On a domain-joined helper system: Alternatively, use a domain-joined machine to request on behalf of the node (using the "Enroll on behalf" feature with an Enrollment Agent certificate, or simply request and then export/import). This is more complex and usually not needed unless policy restricts direct enrollment.

After obtaining each certificate, verify on the node that it appears in Certificates (Local Computer) > Personal > Certificates. The Issued To field should be the node's FQDN, and on the Details tab you should see the required EKUs and SAN entries. Also import the CA's root certificate into Trusted Root Certification Authorities on each node (the certreq -accept step may do this automatically if the chain is provided; if not, import the CA root manually). A quick check using the Certificates MMC or PowerShell can confirm trust.
For example, to check via PowerShell:

Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.Subject -like "*Node1*"} | Select-Object Subject, EnhancedKeyUsageList, NotAfter

Make sure the EnhancedKeyUsageList shows both Client and Server Authentication and that NotAfter (expiry) is a reasonable date. Also ensure there are no errors about an untrusted issuer; the certificate status should show "This certificate is OK".

Option: Self-Signed Certificates for Testing

For a lab or proof of concept (where an enterprise CA is not available), you can use self-signed certificates. The key is to create a self-signed certificate that includes the proper names and EKUs, and then trust that certificate across all nodes. Use PowerShell's New-SelfSignedCertificate with appropriate parameters. For example, on Node1:

$cert = New-SelfSignedCertificate -DnsName "Node1.mylocal.net", "Node1", "Cluster1.mylocal.net", "Cluster1" `
    -CertStoreLocation Cert:\LocalMachine\My `
    -KeyUsage DigitalSignature, KeyEncipherment `
    -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2")

This creates a certificate for Node1 with the specified DNS names and both Server Authentication and Client Authentication EKUs. Repeat on Node2 (adjusting the names accordingly). Alternatively, generate a temporary root CA certificate and issue child certificates to each node (PowerShell's -TestRoot switch simplifies this by generating a root and end-entity certificate together).

If you created individual self-signed certificates per node, export each node's certificate (without the private key) and import it into the Trusted People or Trusted Root store of the other nodes. (Trusted People works for peer trust of specific certificates; Trusted Root works if you created a root CA and issued from it.) For example, if Node1 and Node2 each have self-signed certificates, import Node1's certificate as a Trusted Root on Node2 and vice versa. This is required because self-signed certificates are not automatically trusted. Using CA-issued certificates is strongly recommended for production.
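The cross-node trust step above can be scripted. A minimal sketch, assuming $cert holds the certificate created by the New-SelfSignedCertificate example and that C:\Temp exists on both nodes:

```powershell
# On Node1: export the public certificate only (no private key leaves the node)
Export-Certificate -Cert $cert -FilePath C:\Temp\Node1.cer

# Copy Node1.cer to Node2, then on Node2 import it into the trusted store.
# Use Root if you treat the peer cert as a trust anchor:
Import-Certificate -FilePath C:\Temp\Node1.cer -CertStoreLocation Cert:\LocalMachine\Root
# ...or TrustedPeople for peer trust of this specific certificate:
# Import-Certificate -FilePath C:\Temp\Node1.cer -CertStoreLocation Cert:\LocalMachine\TrustedPeople
```

Repeat in the other direction (Node2's certificate onto Node1) so each node trusts its peer.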
Self-signed certificates should only be used in test environments, and if used, monitor and manually renew them before expiration (since there is no CA to do it for you). Many production outages have been caused by self-signed certificates that everyone forgot would expire.

Setting Up WinRM over HTTPS for Remote Management

With certificates in place, we can configure Windows Remote Management (WinRM) to use them. WinRM is the service behind PowerShell Remoting and many remote management tools. By default, WinRM uses HTTP (port 5985) and authenticates via Kerberos or NTLM. In a workgroup scenario, NTLM over HTTP would be used, which we want to avoid. Instead, we will enable WinRM over HTTPS (port 5986) with our certificates, providing encryption and the ability to use certificate-based authentication for management sessions. Perform these steps on each cluster node:

Verify the certificate for WinRM: WinRM requires a certificate in the Local Computer Personal store that has a Server Authentication EKU and whose Subject or SAN matches the hostname. We have already enrolled such a certificate for each node. Double-check that the certificate's Issued To (CN or one of the SAN entries) exactly matches the hostname that clients will use (e.g. the FQDN). If you plan to manage via the short name, ensure the short name is in the SAN; the FQDN is covered by the CN or SAN. The certificate must not be expired or revoked, and it should be issued by a CA that the clients trust (not self-signed unless the client trusts it).

Enable the HTTPS listener: Open an elevated PowerShell on the node and run:

winrm quickconfig -transport:https

This command creates a WinRM listener on TCP 5986 bound to the certificate. If it reports that no certificate was found, you may need to specify the certificate manually.
You can do so with:

# Find the certificate thumbprint (assuming only one with Server Auth)
$thumb = (Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.EnhancedKeyUsageList -match "Server Authentication"} | Select-Object -First 1 -ExpandProperty Thumbprint)
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbprint $thumb -Force

Verify the listeners with:

winrm enumerate winrm/config/listener

You should see an HTTPS listener bound to the hostname, listening on 5986, with the certificate's thumbprint. WinRM automatically chooses a certificate that meets the criteria (if multiple are present, it picks the one whose CN matches the machine name, so ideally use a unique certificate to avoid ambiguity).

Disable unencrypted/HTTP access (optional but recommended): Since we want all remote management encrypted and want to eliminate NTLM, you can disable the HTTP listener. Run:

Remove-WSManInstance -ResourceURI winrm/config/Listener -SelectorSet @{Address="*"; Transport="HTTP"}

This ensures WinRM listens only on HTTPS. You may also configure the WinRM service to reject unencrypted traffic and disallow Basic authentication, preventing any fallback to insecure methods:

winrm set winrm/config/service '@{AllowUnencrypted="false"}'
winrm set winrm/config/service/auth '@{Basic="false"}'

(By default, AllowUnencrypted is false when HTTPS is used, and Basic is false unless explicitly enabled.)

TrustedHosts (if needed): In a workgroup, WinRM won't automatically trust hostnames for authentication. When using certificate authentication, however, the usual TrustedHosts requirement may not apply in the same way as for NTLM/Negotiate. If you plan to authenticate with a username/password over HTTPS (e.g. using Basic or default CredSSP), you will need to add the other nodes (or the management station) to the TrustedHosts list on each node.
This isn't needed for the cluster's internal communication (which uses certificates via clustering, not WinRM), but it might be needed for your remote PowerShell sessions depending on the method. To allow all hosts (not recommended for security):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*"

Or specify each host:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "Node1,Node2,Cluster1"

This setting allows the local WinRM client to talk to those remote names without Kerberos. If you will use certificate-based authentication for WinRM (where the client presents a certificate instead of a username/password), TrustedHosts is not required; certificate auth does not rely on host trust in the same way.

(Optional) Configure certificate authentication for admin access: One benefit of the HTTPS listener is that you can use certificate mapping to log in without a password. For advanced users: issue a client certificate for yourself (with the Client Authentication EKU), then configure each server to map that certificate to a user (for example, the local Administrator account). This involves creating a mapping entry in winrm/config/service/certmapping. For instance:

# Example: map a client cert by its subject to a local account
winrm create winrm/config/service/certmapping @{CertificateIssuer="CN=YourCA"; Subject="CN=AdminUserCert"; Username="Administrator"; Password="<adminPassword>"; Enabled="true"}

Then, from your management machine, you can use that certificate to authenticate. While powerful, this goes beyond the core cluster setup, so we won't detail it further. Without it, you can still connect to the nodes using Enter-PSSession -ComputerName Node1 -UseSSL -Credential Node1\Administrator (which prompts for the password but sends it safely over the encrypted channel).

At this point, each node is prepared with a trusted certificate and WinRM listening securely.
Test the connectivity: from one node, try to start a PowerShell remote session to the other over HTTPS. For example, on Node1 run:

Test-WsMan Node2 -UseSSL
Enter-PSSession -ComputerName Node2 -UseSSL -Credential Node2\Administrator

You should connect without credential errors or warnings (you may get a certificate trust prompt if the client machine doesn't trust the server certificate; make sure the CA root is in the client's trust store as well). Once you can manage the nodes remotely over HTTPS, you're ready to create the cluster.

Installing the Hyper-V and Failover Clustering Roles

All cluster nodes need the Hyper-V role (for running VMs) and the Failover Clustering feature. We will use PowerShell to install both at once on each server. On each node, open an elevated PowerShell (locally or via your new WinRM setup) and run:

Install-WindowsFeature -Name Failover-Clustering, Hyper-V -IncludeManagementTools -Restart

This installs the Hyper-V hypervisor, the clustering feature, and the management tools (the Failover Cluster Manager and Hyper-V Manager GUIs, plus the PowerShell modules). The server will restart if Hyper-V was not previously enabled (-Restart is included for convenience). After the reboot, run the command on the next node (if doing it remotely, do one node at a time). Alternatively, use the Server Manager GUI, or run Install-WindowsFeature without -Restart and reboot manually.

After all nodes are back up, verify the features:

Get-WindowsFeature -Name Hyper-V, Failover-Clustering

Both should show as Installed. Also confirm that the Failover Clustering PowerShell module is available (Get-Module -ListAvailable FailoverClusters) and that the Cluster service is installed (though not yet configured).

Cluster service account: Windows Server 2016+ automatically creates a local account called CLIUSR, used by the cluster service for internal communication. Ensure this account was created (Computer Management > Users). We won't interact with it directly, but be aware it exists.
Do not delete or disable CLIUSR; the cluster uses it alongside certificates for bootstrapping. (All cluster node communications will now use either Kerberos or certificate auth; NTLM is not needed in WS2019+ clusters.) With the certificate acrobatics out of the way, you can finally get around to building the cluster.

Creating the Failover Cluster (Using DNS as the Access Point)

Here we create the cluster and add nodes to it using PowerShell. The cluster will use a DNS name for its administrative access point (since there is no Active Directory for a traditional cluster computer object). The basic steps are:

Validate the configuration (optional but recommended).
Create the cluster (initially with one node, to avoid cross-node authentication issues).
Join the additional node(s) to the cluster.
Configure cluster networking, quorum, and storage (CSV).

Validate the Configuration (Cluster Validation)

It's good practice to run the cluster validation tests to catch misconfiguration or hardware issues before creating the cluster. Microsoft supports a cluster only if it passes validation or if any errors are acknowledged as non-critical. Run the following from one of the nodes (it will reach out to all nodes):

Test-Cluster -Node Node1.mylocal.net, Node2.mylocal.net

Replace these with your actual node names (include all 2 or 4 nodes). The cmdlet runs a series of tests (network, storage, system settings). Ensure that all tests either pass or produce only warnings that you understand. For example, warnings about "no storage is shared among all nodes" are expected if you haven't yet configured iSCSI or if you are using SMB (you can skip the storage tests with -Skip Storage if needed). If critical tests fail, resolve those issues (networking, disk visibility, etc.) before proceeding.

Create the Cluster (with the First Node)

On one node (say Node1), use the New-Cluster cmdlet to create the cluster with that node as the first member.
By creating it with a single node initially, we avoid remote authentication at cluster creation time (Node1 does not yet need to authenticate to Node2):

New-Cluster -Name "Cluster1" -Node Node1 -StaticAddress "10.0.0.100" -AdministrativeAccessPoint DNS

Here:

-Name is the intended cluster name (the name clients use to connect to the cluster, e.g. for management or as a CSV namespace prefix). We use "Cluster1" as an example.
-Node Node1 specifies which server to include initially (Node1's name).
-StaticAddress sets the cluster's IP address (choose one in the same subnet that is not in use; this IP is brought online as the "Cluster Name" resource). In this example, 10.0.0.100 is the cluster IP.
-AdministrativeAccessPoint DNS indicates we're creating a DNS-only cluster (no AD computer object). This is the default for workgroup clusters, but we specify it explicitly for clarity.

The command creates the cluster service, registers the cluster name in DNS (if DNS is configured and dynamic updates are allowed), and brings the core cluster resources online. It also creates a cluster-specific self-signed certificate for internal use if needed, but since we have CA-issued certificates in place, the cluster may use those for node authentication.

Note: If New-Cluster fails to register the cluster name in DNS (common in workgroup setups), you may need to create a manual DNS A record for "Cluster1" pointing to 10.0.0.100 on whatever DNS server the nodes use. Alternatively, add "Cluster1" to each node's hosts file (as we did in the prerequisites). This ensures the cluster name is resolvable. The cluster functions without AD, but it still relies on DNS for name resolution of the cluster name and node names.

At this point, the cluster exists with one node (Node1). You can verify by running cluster cmdlets on Node1, for example: Get-Cluster (should list "Cluster1") and Get-ClusterNode (should list Node1 as Up).
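The manual name registration mentioned above can be scripted. A sketch, assuming a Windows DNS server hosting a mylocal.net zone (zone and server names are placeholders for your environment); the hosts-file variant works when you don't control the DNS server:

```powershell
# On the DNS server: create a static A record for the cluster name
Add-DnsServerResourceRecordA -ZoneName "mylocal.net" -Name "Cluster1" -IPv4Address "10.0.0.100"

# Or, on each node, append a hosts-file entry instead (requires elevation)
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" `
    -Value "10.0.0.100`tCluster1 Cluster1.mylocal.net"

# Confirm resolution afterwards
Resolve-DnsName Cluster1.mylocal.net
```

Pick one mechanism; mixing stale hosts entries with live DNS records is a common source of confusing resolution bugs.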
In Failover Cluster Manager, you could also connect to "Cluster1" (or to Node1) and see the cluster.

Add Additional Nodes to the Cluster

Now add the remaining node(s) to the cluster. On each additional node, run the following (replace "Node2" with the name of that node, and adjust the cluster name accordingly):

Add-ClusterNode -Cluster Cluster1 -Name Node2

Run this on Node2 itself (locally). It instructs Node2 to join the cluster named Cluster1. Because Node2 can authenticate the cluster (Node1) via the cluster's certificate and vice versa, the join should succeed without prompting for credentials. Under the hood, the cluster service on Node2 uses the certificate (and the CLIUSR account) to establish trust with Node1's cluster service. Repeat the Add-ClusterNode command on each additional node (Node3, Node4, etc., one at a time). After each join, verify by running Get-ClusterNode on any cluster member; the new node should appear with status Up.

If you prefer a single command from Node1 to add the others, you could use:

# Run on Node1:
Add-ClusterNode -Name Node2, Node3 -Cluster Cluster1

This attempts to add Node2 and Node3 from Node1. It may prompt for credentials or require TrustedHosts if no common authentication is present. Running Add-ClusterNode locally on each node avoids those issues by performing the action locally. Either way, at the end all nodes should be members of Cluster1.

Configure Quorum (Witness)

Quorum configuration is critical, especially with an even number of nodes. The cluster defaults to Node Majority (no witness), or may try to assign a witness if it finds eligible storage. Use a witness to avoid a split-brain scenario. If you have a small shared disk (LUN) visible to both nodes, it can serve as a Disk Witness. Alternatively, use a Cloud Witness (Azure).
To configure a disk witness, first make sure the disk appears as Available Storage in the cluster, then run:

Get-ClusterAvailableDisk | Add-ClusterDisk
Set-ClusterQuorum -Cluster Cluster1 -NodeAndDiskMajority "<DiskResourceName>"

(Replace <DiskResourceName> with the name of the disk resource shown by Get-ClusterResource.) In Failover Cluster Manager, you can instead run the Configure Cluster Quorum wizard and select "Add a disk witness". If no shared disk is available, a Cloud Witness is an easy option (it requires an Azure storage account and key):

Set-ClusterQuorum -Cluster Cluster1 -CloudWitness -AccountName "<StorageAccount>" -AccessKey "<Key>"

Do not use a File Share Witness: as noted earlier, file share witnesses are not supported in workgroup clusters because the cluster cannot authenticate to a remote share without AD.

A 4-node cluster can sustain two node failures if properly configured. It's recommended to configure a witness for even-numbered clusters as well, to avoid a tie (2–2) during a dual-node failure scenario. A disk or cloud witness is recommended (same process as above). With 4 nodes, you would typically use Node Majority + Witness. The quorum wizard can automatically choose the best configuration (typically Node Majority + Witness, if you run the wizard and a witness is available). Verify the quorum configuration with Get-ClusterQuorum; make sure it lists the witness you configured (if any) and that the cluster core resources show the witness online.

Add Cluster Shared Volumes (CSV) or Configure VM Storage

Next, prepare storage for the Hyper-V VMs. If using shared disks (block storage such as iSCSI/SAN), after adding the disks to the cluster (they should appear under Storage > Disks in Failover Cluster Manager), you can enable Cluster Shared Volumes (CSV). CSV allows all nodes to concurrently access the NTFS/ReFS volume, simplifying VM placement and live migration.
To add available cluster disks as CSV volumes:

Get-ClusterGroup "Available Storage" | Get-ClusterResource | Add-ClusterSharedVolume

This mounts each clustered disk as a CSV under C:\ClusterStorage\ on all nodes. Alternatively, right-click the disk in Failover Cluster Manager and choose Add to Cluster Shared Volumes. Once done, format the volume (if not already formatted) with NTFS or ReFS from any node (it will be accessible as C:\ClusterStorage\Volume1\ etc. on all nodes). The shared volume can now store all VM files, and any node can run any VM from that storage.

If using an SMB 3 share (NAS or file server), you won't add it to cluster storage; instead, each Hyper-V host connects to the SMB share directly. Ensure each node has access credentials for the share. In a workgroup, that typically means the NAS is also in a workgroup and you've created a local user on the NAS that each node uses (via stored credentials); this is outside the cluster's control. Each node should be able to run New-SmbMapping or simply access the UNC path. Test access from each node (e.g. Dir \\NAS\HyperVShare). In Hyper-V settings, you can set the default virtual hard disk path to the UNC path, or just specify the UNC path when creating VMs.

Note: Hyper-V supports storing VMs on SMB 3.0 shares with Kerberos or certificate-based authentication, but in a workgroup you will likely rely on a username/password for the share (a form of local account usage at the NAS). This doesn't affect cluster node-to-node auth, but it is a consideration for securing the NAS.

Verify Cluster Status

At this stage, run some quick checks to ensure the cluster is healthy:

Get-Cluster – should show the cluster name, IP, and core resources online.
Get-ClusterNode – all nodes should be Up.
Get-ClusterResource – should list the resources (Cluster Name, IP Address, any witness, any disks) and their state (Online).
The Cluster Name resource will be of type "Distributed Network Name", since this is a DNS-only cluster. Use Failover Cluster Manager (launched on one of the nodes, or from RSAT on a client) to connect to "Cluster1" and ensure you can see all nodes and storage. When prompted to connect, use the cluster name or cluster IP; with our certificate setup, it is best to connect by cluster name (make sure DNS or the hosts file resolves it to the cluster IP). If a certificate trust warning appears, it may be because the management station doesn't trust the cluster node's certificate, or because you connected with a name that is not in the SAN. As a workaround, connect directly to a node in Failover Cluster Manager (e.g. Node1), which then enumerates the cluster.

You now have a functioning cluster ready for Hyper-V workloads, with secure authentication between nodes. Next, we configure Hyper-V specific settings such as Live Migration.

Configuring Hyper-V for Live Migration in the Workgroup Cluster

One major benefit introduced in Windows Server 2025 is support for live migration in workgroup clusters (previously, live migration required Kerberos, and thus a domain). In WS2025, cluster nodes use certificates to mutually authenticate live migration traffic. This allows VMs to move between hosts with no downtime, even in the absence of AD. We will enable and tune live migration for our cluster.

By default, the Hyper-V role may have live migration disabled (for non-clustered hosts). In a cluster it may be auto-enabled when both the Failover Clustering and Hyper-V roles are present, but to make sure it is enabled, run:

Enable-VMMigration

This enables the host to send and receive live migrations. In PowerShell, no output means success. (In the Hyper-V Manager UI, this corresponds to ticking "Enable incoming and outgoing live migrations" in the Live Migrations settings.) In a workgroup, the only authentication choice in the UI is CredSSP (since Kerberos requires a domain).
CredSSP means you must initiate the migration from a session where you are logged on to the source host, so that your credentials can be delegated. We cannot use Kerberos here, but the cluster's internal PKU2U certificate mechanism handles node-to-node auth for us when the migration is orchestrated via Failover Cluster Manager. No explicit setting is needed for cluster-internal certificate usage; Windows uses it automatically for the actual live migration operation. In PowerShell, the default MigrationAuthenticationType is CredSSP for workgroup hosts. You can confirm (or set explicitly, though this is not strictly required):

Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP

(Run this on each node; it ensures the Hyper-V service uses CredSSP, which aligns with our need to initiate migrations from an authenticated context.)

If your cluster nodes were domain-joined, Windows Server 2025 would enable Credential Guard, which blocks CredSSP by default. In our case (workgroup), Credential Guard is not enabled by default, so CredSSP will function. Just be aware that if you ever join these servers to a domain (or they were once domain-joined before being moved to a workgroup), you would need to configure Kerberos constrained delegation or disable Credential Guard to use live migration.

For security and performance, do not use the management network for VM migration if you have other NICs. We will designate the dedicated network (e.g. "LMNet" or a specific subnet) for migrations. You can configure this via PowerShell or Failover Cluster Manager. Using PowerShell, run the following on each node:

# Example: allow LM only on the 10.0.1.0/24 network (where 10.0.1.5 is this node's IP on that network)
Set-VMMigrationNetwork 10.0.1.5
Set-VMHost -UseAnyNetworkForMigration $false

The Set-VMMigrationNetwork cmdlet adds the network associated with the given IP to the allowed list for migrations. The second cmdlet ensures only the designated networks are used.
Alternatively, in the Hyper-V Manager UI, under each host's Hyper-V Settings > Live Migrations > Advanced Features, select Use these IP addresses for Live Migration and add the IP of the LM network interface. In a cluster these settings are per-host, so configure them identically on all nodes. Verify the network selection by running:

Get-VMHost | Select -ExpandProperty MigrationNetworks

It should list the subnet or network you allowed, and UseAnyNetworkForMigration should be False.

Windows can send VM memory over TCP, compress it, or use SMB Direct (if RDMA is available) for live migration. By default in newer Windows versions, compression is used, as it balances speed without requiring special hardware. If you have a very fast dedicated network (10 Gbps+ or RDMA), you might choose SMB to leverage SMB Multichannel/RDMA for the highest throughput. To set this:

# Options: TCPIP, Compression, SMB
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

(Do this on each node; Compression is usually the default on 2022/2025 Hyper-V.) If you select SMB, ensure your cluster network allows SMB traffic, and consider enabling SMB encryption if security is a concern (SMB encryption will encrypt the live migration data stream). Note that enabling SMB encryption or cluster-level encryption can disable RDMA for that traffic, so only enable it if needed, or rely on network isolation as the primary protection.

Depending on your hardware, you may allow multiple VMs to migrate at once. The default is usually 2 simultaneous live migrations. You can increase this if you have the capacity:

Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 2

Adjust the numbers as appropriate (and note that the cluster-level property (Get-Cluster).MaximumParallelMigrations may override the host setting in a cluster).
This setting is also available in the Hyper-V Settings UI under Live Migrations. With these configured, live migration is enabled.

Test a live migration: create a test VM (or pick an existing one) and move it from one node to another using Failover Cluster Manager or PowerShell. In Failover Cluster Manager, under Roles, right-click a virtual machine, choose Live Migrate > Select Node… and pick another node. The VM should migrate with zero downtime. If it fails, check for error messages regarding authentication, and ensure you initiated the move from a node where you're an admin (or via cluster manager connected to the cluster with appropriate credentials). The cluster handles the mutual auth using the certificates (this is transparent: behind the scenes, the nodes use the self-created PKU2U certificate or our installed certificates to establish a secure connection for the VM memory transfer). Alternatively, use PowerShell:

Move-ClusterVirtualMachineRole -Name "<VM resource name>" -Node <TargetNode>

This cmdlet triggers a cluster-coordinated live migration (the cluster's Move operation uses the appropriate auth). If the migration succeeds, congratulations: you have a fully functional Hyper-V cluster without AD!

Security Best Practices Recap and Additional Hardening

Additional best practices for securing a workgroup Hyper-V cluster include:

Certificate security: The private keys of your node certificates are powerful; protect them. They are stored in the machine store (and likely marked non-exportable). Only admins can access them; ensure no unauthorized users are in the local Administrators group. Plan a process for certificate renewal before expiration. If using an enterprise CA, you might issue certificates from a template that allows auto-renewal via scripts, or at least track their expiry so you can re-issue and install new certificates on each node in time.
The Failover Cluster service auto-generates its own certificates (for CLIUSR/PKU2U) and auto-renews them, but since we provided our own, we must manage those ourselves. Stagger renewals so all nodes don't swap certificates at once (the cluster should still trust old and new certificates if the CA is the same). It may be wise to overlap: install the new certificates on all nodes and only then remove the old ones, so that at no point is a node presenting a certificate the others don't accept (relevant if you change CA or template).

Trusted root and revocation: All nodes trust the CA, so maintain the security of that CA. Do not include unnecessary trust (e.g., avoid having nodes trust public CAs they don't need). If possible, use an internal CA that is used only for these infrastructure certificates. Keep CRLs (Certificate Revocation Lists) accessible if your cluster nodes need to check revocation of each other's certificates (though cluster auth may not strictly require online revocation checking if the certificates are directly trusted). This is another reason to have a reasonably long-lived internal CA or an offline root.

Disable NTLM: Since clustering no longer needs NTLM as of Windows Server 2019+, consider disabling NTLM fallback on these servers entirely for added security (via the Group Policy setting "Network Security: Restrict NTLM: Deny on this server", etc.). However, be cautious: some processes (including cluster formation in older versions, or other services) might break. In our configuration, cluster communications should use Kerberos or certificates. If these servers have no need for NTLM (no legacy apps), disabling it eliminates a whole class of attacks. Monitor the event logs (Security log events for NTLM usage) if you attempt this. Discussion in the Microsoft Tech Community indicates that by WS2022 the cluster should function with NTLM disabled, though one user observed issues when the CLIUSR password rotated while NTLM was blocked. WS2025 should further reduce any NTLM dependency.
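Returning to certificate renewal: the expiry tracking recommended above is easy to script across nodes over the WinRM/HTTPS channel we configured. A sketch, where the node names and the 30-day warning threshold are assumptions to adjust:

```powershell
# Flag machine certificates that expire within 30 days, on every node
$nodes = "Node1.mylocal.net", "Node2.mylocal.net"

$expiring = Invoke-Command -ComputerName $nodes -UseSSL -ScriptBlock {
    Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.NotAfter -lt (Get-Date).AddDays(30) } |
        Select-Object Subject, Thumbprint, NotAfter
}

# PSComputerName is added by remoting, so you can see which node each cert came from
$expiring | Select-Object PSComputerName, Subject, NotAfter
```

Run this on a schedule (e.g. a scheduled task that emails the result) so a renewal never sneaks up on you.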
PKU2U policy: The cluster uses the PKU2U security provider for peer authentication with certificates. The local security policy "Network security: Allow PKU2U authentication requests to this computer to use online identities" must be enabled (it is by default) for clustering to function properly. Some security guides recommend disabling PKU2U; do not disable it on cluster nodes (or, if your organization's baseline GPO disables it, create an exception for these servers). Disabling PKU2U breaks the certificate-based node authentication and causes cluster communication failures.

Firewall: We opened WinRM over 5986, so ensure Windows Firewall has the Windows Remote Management (HTTPS-In) rule enabled. The Failover Clustering feature should have added rules for cluster heartbeats (UDP 3343, etc.) and SMB (445) if needed. Double-check that the Failover Cluster group of firewall rules is enabled on each node for the relevant profiles (if your network is Public, you may need to enable the rules for the Public profile manually, or set the network as Private). For live migration over the SMB transport, also enable the SMB-In rules. If you enabled SMB encryption, it uses the same port 445 but encrypts the payloads.

Secure the live migration network: Ideally, the network carrying live migration is isolated (not routed outside the cluster environment). For belt-and-suspenders security, you could implement IPsec encryption on live migration traffic, for example requiring IPsec (with certificates) between the cluster nodes on the LM subnet. However, this can be complex and may conflict with SMB Direct/RDMA. A simpler approach: since certificate mutual auth already prevents unauthorized node communication, focus on isolating that traffic, and optionally turn on SMB encryption for live migration (when using the SMB transport), which encrypts the VM memory stream.
At minimum, treat the LM network as sensitive, as it carries VM memory contents in clear text if not otherwise encrypted.

Secure WinRM/management access: We configured WinRM for HTTPS. Make sure to limit who can log in via WinRM. By default, members of the Administrators group have access. Do not add unnecessary users to Administrators. You can also use Local Group Policy to restrict the WinRM service to only allow certain users or certificate mappings. Since this is a workgroup, there is no central AD group; you might create a local group for "Remote Management Users" and configure WSMan to allow members of that group (and only put specific admin accounts in it). Also consider enabling PowerShell Just Enough Administration (JEA) if you want to delegate specific tasks without full admin rights, though that's advanced.

Hyper-V host security: Apply standard Hyper-V best practices: enable Secure Boot for Gen2 VMs, keep the host OS minimal (consider using Windows Server Core for a smaller attack surface, if feasible), and ensure only trusted administrators can create or manage VMs. Since this cluster is not in a domain, you won't have AD group-based access control; consider using a tool like Windows LAPS to set unique local admin passwords per node.

Monitor cluster events: Monitor the System event log for any cluster-related errors (clustering will log events if authentication fails or if there are connectivity issues). Also monitor the FailoverClustering event log channel. Any errors about "unable to authenticate" or "No logon servers", etc., would indicate certificate or connectivity problems.

Test failover and failback: After configuration, test that VMs can fail over properly. Shut down one node and ensure VMs move to the other node automatically. When the node comes back, you can live migrate them back. This will give confidence that the cluster's certificate-based auth holds up under real failover conditions.
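The event-log monitoring above can be done quickly with Get-WinEvent; a sketch pulling recent problems from the clustering operational channel on the local node:

```powershell
# Recent critical/error/warning events from the Failover Clustering channel.
Get-WinEvent -LogName 'Microsoft-Windows-FailoverClustering/Operational' -MaxEvents 200 |
    Where-Object { $_.Level -ge 1 -and $_.Level -le 3 } |  # 1=Critical, 2=Error, 3=Warning
    Select-Object TimeCreated, Id, LevelDisplayName, Message |
    Format-Table -AutoSize -Wrap
```

Run the same query against the System log (swap the -LogName) to catch the "unable to authenticate" and "No logon servers" errors mentioned above.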
Consider Management Tools: Tools like Windows Admin Center (WAC) can manage Hyper-V clusters. WAC can be configured to use the certificate for connecting to the nodes (it will prompt to trust the certificate if self-signed). Using WAC or Failover Cluster Manager with our setup might require launching the console from a machine that trusts the cluster’s cert and using the cluster DNS name. Always ensure management traffic is also encrypted (WAC uses HTTPS and our WinRM is HTTPS so it is).3.8KViews4likes7CommentsEnable Certificate-Based Authentication for Windows Admin Center Gateway Servers with AD CS
Implementing certificate-based authentication for Windows Admin Center (WAC) involves leveraging smart card logon (user certificates) in Active Directory. In a production Active Directory environment, you can require administrators to authenticate with a client certificate, typically stored on a smart card or virtual smart card, before they can access the WAC gateway. This is achieved by using Active Directory Certificate Services (AD CS) to issue logon certificates to users and configuring Authentication Mechanism Assurance (AMA) in Active Directory to tie those certificates to a security group. WAC is then configured to allow access only to users who present the approved certificate (via membership in the special group). The result is that only users who have authenticated with a valid smart card certificate can access WAC, adding a strong second factor beyond passwords.

Prerequisites and Environment Setup

Before configuring certificate-based auth for WAC, ensure the following prerequisites are in place:

Active Directory Domain: WAC and users must reside in an AD domain.

AD CS (PKI) Deployment: An enterprise Active Directory Certificate Services Certification Authority should be installed and trusted by the domain.

Smart Card Infrastructure: Users will need smart card devices or virtual smart cards. This could be a physical smart card + reader for each admin, or a TPM-backed virtual smart card (VSC) on their device. Each user must have a personal certificate that will be used for logon.

Windows Admin Center: WAC should be installed in gateway mode on a domain-joined Windows Server. For production, replace the default self-signed certificate WAC generates with an SSL certificate issued by your CA that matches the WAC gateway's DNS name.

WAC Gateway Access Groups: Decide which AD security group(s) will be allowed as gateway users in WAC. Also create or identify a group to use for the smartcard enforcement.
For example, create a group called "WAC-CertAuth-Required" (Universal scope recommended). No members will be directly added to this group; membership will be assigned dynamically via AMA based on logon method.

Domain Controller Certificates: Ensure your domain controllers have valid certificates for Kerberos PKINIT (Domain Controller Authentication certificates). Enterprise CAs usually auto-enroll these. This ensures DCs can accept smart card logons. Also verify DCs can reach the CRL distribution points for your CA certificates to check revocation.

Group Policy for Smart Cards: It's recommended to enforce certain policies: e.g., enable "Interactive logon: Require smart card" on accounts or systems if you want to prevent password logon entirely for those accounts, and enable "Smart card removal behavior: Lock workstation" on client PCs to auto-lock when a smart card is removed. Also consider enabling "Always wait for the network at computer startup and logon" to avoid cached logons interfering with AMA group assignment.

Step 1: Configure AD CS Certificate Template for Smart Card Logon

First, set up a certificate template in AD CS for your administrators' logon certificates. You can either use the built-in Smartcard Logon template or create a dedicated one.

Create a Dedicated Template: On your CA, open the Certificate Templates console. Duplicate the Smartcard Logon template (or the User template with adjustments) so you can customize it. Give it a name like "IT Admin Smartcard Logon". In the template's properties, configure the following key settings:

Compatibility: Ensure it's set for at least Windows Server 2008 R2 / Windows 7 for full smart card support.

Cryptography: Choose a strong key length (2048 or higher) and a CSP/KSP supporting your smart cards. Enable "Prompt for PIN on use" if available.

Subject Name: Set to "Build from this AD information" using the user's User principal name (UPN). The UPN will be included in the certificate's subject alternative name.
This is critical, as the domain controller uses the certificate's UPN to map to the user account during logon.

Extensions: Under Application Policies (Extended Key Usage), ensure Smart Card Logon (OID 1.3.6.1.4.1.311.20.2.2) is present. You may also include Client Authentication (1.3.6.1.5.5.7.3.2) if users might authenticate to other services. Remove any EKUs not needed. Also, ensure "Signature and Smartcard Logon" or similar is selected as the issuance policy if relevant.

Security: Assign Enroll (and Read) permissions to the user group that will receive these certificates (e.g. your IT admins group), and to the enrollment agents if using one.

Expiration: Set an appropriate validity period (e.g. 1 or 2 years) and publish timely CRLs so expired/revoked certs are recognized.

This process will generate a unique Object Identifier (OID) for the new template (visible on the General tab or via certutil -template). Take note of this template OID, as we'll use it for AMA mapping. (If using the built-in Smartcard Logon template, it has a default OID you can obtain similarly.)

Publish the Template: If you created a new template, publish it on the CA (so it's available for enrollment). In the Certification Authority MMC, right-click Certificate Templates > New > Certificate Template to Issue, and select your template.

Enroll Certificates to Admins: Enroll each administrator for a smart card certificate using this template. Typically, this is done by using the Certificates MMC on a client with a smart card reader:

- Have the user insert their smart card and open certmgr.msc (or use a dedicated smart card enrollment tool if available).
- Enroll for the "IT Admin Smartcard Logon" certificate. This will generate a private key on the card and issue the certificate to the card. The certificate should now reside in the user's Personal store and on the card.
- Ensure the certificate shows the correct UPN in the Subject Alternative Name and the Smart Card Logon policy in the Application Policies.

Verify AD Trust of the Certificate: Because this is an enterprise CA, the issued certificates will automatically be trusted by Active Directory for logon (the CA's root is in the NTAuth store). Just to be safe, confirm that the CA's root cert is present in the NTAuthCertificates container in AD (use certutil -viewstore -enterprise NTAuth). If not, publish it using certutil -dspublish -f rootcert.cer NTAuth. This ensures domain controllers trust certificates from this CA for authentication.

At this stage, each admin user should have a valid smart card logon certificate issued by AD CS, which includes an OID identifying the template. Next, we'll configure Active Directory to recognize this OID and link it to a security group via Authentication Mechanism Assurance.

Step 2: Enable Authentication Mechanism Assurance (AMA) in Active Directory

Authentication Mechanism Assurance (AMA) is an Active Directory feature that adds a user to a security group dynamically when they log on with a certificate that contains a specific issuer policy or template OID. We will use AMA to flag users who authenticated with our smart card certificates. The plan is to map the OID of our "IT Admin Smartcard Logon" certificate template to a special security group (e.g. "WAC-CertAuth-Required"). When a user logs on with that certificate, domain controllers will automatically include this group in the user's Kerberos token; if they log on with a password or other method, they won't have this group.

Follow these steps to configure AMA:

Create a Universal Security Group: If not already created, make a new security group in AD (preferably in the Users container or a dedicated OU) named, for example, "WAC-CertAuth-Required". Make it a universal group (recommended for AMA) with the Security type. Do not add any members to it, as AMA will control membership.
Also, do not use this group for any other assignments except this purpose.

Find the Certificate Template OID: Locate the OID of the certificate template you are using:

- Open the properties of the certificate template in the Certificate Templates console. On the General tab, note the Template OID (e.g. 1.3.6.1.4.1.311.x.x.xxxxx.xxxx...). Alternatively, use Get-CATemplate <TemplateName> in PowerShell or certutil -v -dstemplate <TemplateName> to get the OID.
- If you used the built-in Smartcard Logon template, its OID can be found similarly (each template has a unique OID).

Map the OID to the Group in AD: This step requires editing the AD Configuration partition using ADSI Edit or PowerShell:

- Open ADSI Edit (adsiedit.msc) as an enterprise admin.
- Right-click ADSI Edit > Connect to.... Select the Configuration well-known naming context.
- Navigate to CN=Public Key Services,CN=Services,CN=Configuration,<forest DN>. Under this, find CN=OID (Object Identifiers). This container holds objects for certificate template OIDs and issuance policy OIDs.
- Look for an object whose msPKI-Cert-Template-OID attribute matches the OID of your certificate template. The objects are often named after the template or have a GUID. You may need to inspect each until you find the matching OID value.
- Once found, open the properties of that OID object. There will be an attribute msDS-OIDToGroupLink. This is where we link the OID to a group.
- Copy the distinguishedName of the "WAC-CertAuth-Required" group you created (you can find it by connecting ADSI Edit to the Default naming context, locating the group, and copying the DN).
- In the OID object's properties, set msDS-OIDToGroupLink to the DN of your group. Apply the change.

This mapping tells AD: for any user logging in with a certificate issued from this template OID, include the specified group in their token.
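The ADSI Edit steps above can also be scripted with the ActiveDirectory module. A hedged sketch (run as an enterprise admin; the template OID below is a placeholder, and the group name is the example used in this guide):

```powershell
Import-Module ActiveDirectory

# Locate the OID object whose msPKI-Cert-Template-OID matches our template.
# Replace $templateOid with your template's actual OID (see the step above).
$configNC    = (Get-ADRootDSE).configurationNamingContext
$templateOid = '1.3.6.1.4.1.311.21.8.1234567.1234567.1.2.3'   # placeholder
$oidObject = Get-ADObject -SearchBase "CN=OID,CN=Public Key Services,CN=Services,$configNC" `
    -LDAPFilter "(msPKI-Cert-Template-OID=$templateOid)" `
    -Properties 'msDS-OIDToGroupLink'

# Link the OID object to the AMA group by writing the group's DN.
$group = Get-ADGroup 'WAC-CertAuth-Required'
Set-ADObject -Identity $oidObject -Replace @{ 'msDS-OIDToGroupLink' = $group.DistinguishedName }
```

This performs the same write as setting msDS-OIDToGroupLink in ADSI Edit, just without the manual browsing.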
A quick way to confirm the mapping is working is to try adding a member to the "WAC-CertAuth-Required" group in AD Users & Computers. It should now prevent you from manually adding any members, giving an error like "OID mapped groups cannot have members." This is expected, as the group is now controlled by AMA.

Now AMA is configured. When a user authenticates with our smart card cert, the domain controller will evaluate the certificate, see the template OID, and, if it matches the mapped OID, add the "WAC-CertAuth-Required" group SID to the user's Kerberos token. If the user logs on with username/password, that group will not be present. AMA triggers only during interactive logon (or unlock) when the user actually uses the certificate to log on to Windows. It does not dynamically add/remove groups in the middle of a session. This means the user must log onto their machine with the smart card certificate to get the group.

Step 3: Configure Windows Admin Center to Require Certificate Authentication

WAC supports two identity providers for gateway access: Active Directory (default) or Microsoft Entra ID. We are using AD with an added smart card requirement. WAC provides a setting to require membership in a "smartcard authentication group" in addition to the normal user group. Do the following on the WAC gateway server (while logged in as a WAC gateway administrator or local admin):

Open WAC Access Settings: In a web browser, access the Windows Admin Center portal (e.g. https://<WACServer_FQDN>). Go to the Settings (gear icon) > Access panel. Ensure "Use Active Directory" (or "Use Windows Access Control") is selected as the identity provider, since we are using AD groups.

Configure Gateway Users Group(s): Under User Access, you should see an option to specify who can access the WAC gateway ("Gateway users"). By default, if no group is listed, any authenticated user can access. Add your administrators group (or groups) here to restrict WAC access to only those users.
For example, add "IT Admins" or whatever AD group contains the admins that should use WAC. After adding, it will show up in the list of allowed user groups.

Enable Smartcard Enforcement: Still in the Access settings, look for the smartcard authentication option when you add a group. WAC allows specifying an additional required group that indicates smart card usage. Add the "WAC-CertAuth-Required" group (the AMA-linked group) here as the smartcard-required group. In the WAC UI, this might be done by clicking "+ Add smartcard group" or marking one of the added groups as a smartcard group. (In some versions, you first add the group under Users, then check a box to designate it as a smartcard-enforced group.)

- After this configuration, WAC's effective access check becomes: a user's AD account must be a member of at least one allowed group AND a member of the specified smartcard group. This corresponds exactly to requiring certificate logon. According to Microsoft's documentation: "Once you have added a smartcard-based security group, a user can only access the WAC service if they are a member of any security group AND a smartcard group included in the users list." In our case, that means the user must be in (for example) "IT Admins" and in "WAC-CertAuth-Required". The latter only happens when they've logged on with the certificate, so effectively the user must be using their smart card.

Configure Gateway Administrators (if needed): If there are others who will administer the WAC gateway settings, you can also add groups/users under the Administrators tab. You can enforce a smartcard group on administrators similarly. Typically, local Administrators on the server already have admin access to WAC by default. Make sure those accounts also use smart cards, or exclude accounts accordingly for security.

Save Settings: Save or apply the Access settings. The WAC gateway service may restart to apply changes.
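If you prefer scripting, the same settings can be applied from PowerShell on the gateway. This is a sketch only: the cmdlet and parameter names below are the ones referenced in the WAC documentation for this feature, and the group names are the examples from this guide; verify both against your installed WAC version, as the module has changed between releases.

```powershell
# Cmdlet/parameter names as referenced in the WAC docs - verify against
# your WAC version before relying on this. Group names are examples.
Set-SMEAuthorization -RequiredGroups 'CONTOSO\IT Admins' `
                     -RequiredSmartCardGroups 'CONTOSO\WAC-CertAuth-Required'

# Read back the effective access configuration.
Get-SMEAuthorization
```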
You can verify WAC access settings via PowerShell on the WAC server: open PowerShell and use Get-SMEAuthorization (if available) or check the configuration file. WAC stores the allowed groups and the smartcard-required group; ensure the output lists your groups correctly. There is also a PowerShell cmdlet (Set-SMEAuthorization) to configure these settings if you prefer scripting (the documentation covers using the -RequiredGroups and -RequiredSmartCardGroups parameters for WAC).

At this point, WAC is configured to require certificate-based authentication. The gateway will perform Windows Integrated Authentication (Kerberos/NTLM) as usual, but it will only authorize the session if the user's token contains the smartcard group SID in addition to an allowed group SID. If the user logged in with a password, the smartcard group SID is missing and WAC will deny access (HTTP 401/403).

Step 4: Testing and Validation

It's crucial to test the setup end-to-end to confirm the configuration functions as expected.

Test Case 1. Password login (should be denied): Have an admin user attempt to access WAC without using their smart card. For example, the user can sign out and log on to Windows with just username/password (or disable their smart card login temporarily). Then navigate to the WAC URL. The WAC site will prompt for authentication (the browser will try Integrated Windows Auth). The user may be prompted to authenticate; if so, even entering correct AD credentials should result in access denied at the gateway. The user will see a 401 Unauthorized error from WAC after login, or WAC will keep prompting for credentials. This is expected because although the user is in the allowed admin group, they are not in the AMA smartcard group (since they logged on with a password). WAC will refuse access since the AND condition is not met. This confirms that a password-only login is insufficient.

Test Case 2.
Smart card login (should be allowed): Now have the user log off and log on to Windows using the smart card. (On the Windows login screen, they should insert the card, choose the smart card login option, and enter the PIN. This uses their certificate to authenticate to AD.) After interactive logon with the smart card, the user’s Kerberos ticket now includes the “WAC-CertAuth-Required” group, courtesy of AMA. Now access the WAC portal again (e.g. via Microsoft Edge or Chrome). The browser will perform Integrated Auth (which will use the logged-on user’s credentials/ticket). The user should be granted access to WAC this time and see the usual WAC interface. No additional prompts occur. WAC sees the user is in both required groups and permits the connection. Confirm Group Presence: On the user’s machine, you can run whoami /groups in a command prompt after logging in with the smart card. You should see the “WAC-CertAuth-Required” group listed in the groups. If you log in with password, that group will not be listed. This is a quick way to verify AMA is working as intended. WAC Logging: In the Windows Admin Center server, check the event log “Microsoft-ServerManagementExperience” (under Applications and Services Logs) for any relevant warnings or errors. When a user is denied due to not meeting group requirements, WAC will often log an event indicating the user’s identity was not authorized. This can help confirm that the smartcard requirement was the reason (versus other failures). Edge/Browser Behavior: If the browser pops up a Windows Security login dialog repeatedly even after using the smart card, make sure the site is in Intranet Zone or Trusted Sites so that Integrated Auth is seamless. Also ensure the user’s certificate authentication to the domain is functioning (they have a Kerberos TGT). In most cases, after a smart card desktop login, the browser should not prompt at all. It should silently use the existing Kerberos ticket. 
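The group-presence check above can be reduced to a quick pass/fail one-liner on the user's machine:

```powershell
# After a smart card logon this prints the AMA group line; after a
# password-only logon it prints nothing.
whoami /groups | Select-String 'WAC-CertAuth-Required'
```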
By completing these tests, you validate that the system is correctly distinguishing certificate-based logons from password logons when gating WAC access.

Troubleshooting Tips

Despite careful setup, you might encounter issues. Here are common problems and their solutions:

User not being added to AMA Group: After logging on with a smart card, if whoami /groups does not show the "WAC-CertAuth-Required" group:

- Verify the certificate was issued from the correct template (check the certificate's details: under Details, Certificate Template Information should show your template name/OID).
- Verify the OID mapping in ADSI Edit is correct (no typos in the DN, and it's in the right OID object).
- The group must be universal scope if in a multi-domain forest. If it's global and the user/DC are in another domain, it might not be assigned. Use Universal as recommended.
- Ensure the domain functional level is 2008 R2 or higher; AMA won't work below that.
- If the user is logging on to a machine that is offline (no DC contact) and using cached credentials, AMA won't apply, since the DC can't evaluate the certificate. The "Always wait for the network at computer startup and logon" GPO setting (Computer Configuration → System → Logon) should be enabled to force online logon. If the user must log on cached (like a laptop off VPN), they won't get the AMA group until they can contact a DC (which would then happen when they access domain resources).
- Check the Event Log on the Domain Controller handling the logon (Security log). Look for event 4768 or 4771 around the logon time: 4771 with Failure Code 0x12 or text about "Encryption type not supported" might indicate a missing DC certificate or a Kerberos settings issue. Errors about "The certification authority is not trusted" or "Smartcard logon is not supported for user" indicate trust problems. Make sure the CA cert is in NTAuth and the user cert has the proper UPN.
If you see Event 19 in the System log on the DC (KDC event for a failed smart card logon), it often gives a reason code, for example "KDC certificate missing" or "No valid CRL".

- One quick check: on a DC, run certutil -verify -urlfetch <UserCert.cer> using the exported user certificate. This tests whether the DC (or whichever machine you run it on) can validate the cert chain and CRLs. Any errors here need addressing (trust chain, CRL, or missing template OID mapping).
- If the user's certificate does not have the Smart Card Logon EKU and you instead tried using just Client Authentication: domain controllers by default require the specific Smartcard EKU (or the new "Kerberos Authentication" EKU in newer domains). Make sure the template included the correct EKU for smart card logon; otherwise the DC may not treat it as a smart card login attempt at all.

User can log in to WAC with password (not expected): If somehow a user was able to access WAC without using the smart card:

- Double-check WAC's Access settings. Perhaps the smartcard-required group wasn't properly added. On the WAC server, run Get-SMEAcls or check the config to ensure the RequiredSmartcardGroups attribute includes the correct group SID.
- Confirm the user's account isn't in that smartcard group permanently (no one should be a direct member; AMA groups should have no static members). Use ADUC or PowerShell to ensure the group's members attribute is empty. If someone manually added a user to that group, that user will bypass the need for a cert (they always have the group). Remove any unintended members. The "OID mapped groups cannot have members" enforcement should prevent this, but if the mapping was wrong and not actually applied, someone might have populated the group. Fix the mapping and clear the members.
- Ensure the user didn't somehow have the AMA group from a previous smart card logon cached.
A known caveat: if a user previously logged on with a smart card and then logs off and back on with a password on the same machine without a reboot, Windows might cache the group in the token (due to an optimization). This can happen with "fast logon" or unlock scenarios. The fix is the GPO mentioned above (disable fast logon). In practice, a fresh reboot + password logon should drop the group. Warn users that switching from smartcard to password login on a machine without a reboot could be inconsistent. It's safest to always use the smart card, or reboot if they must log in with a password for some reason.

- If using remote desktop to the WAC server or a jump box, ensure the same certificate enforcement is considered there. If someone logs into the jump box with a password and then tries to use WAC, they'll fail. That's expected. They should RDP with the smart card as well (RDP supports smart card logon pass-through).

Repeated credential prompts when accessing WAC: If a user who logged in with a smart card still gets prompted for credentials in the browser:

- Ensure the browser is configured for integrated authentication. For Internet Explorer/Edge (IE mode), the WAC URL should be in the Local Intranet zone (which usually allows automatic Windows auth). Modern Edge/Chrome typically attempt desktop credentials automatically, but if not, you can go to edge://settings -> Automatic profile switching or edge://flags for integrated auth, or use the "Integrated Windows Authentication" group policy to allow the WAC URL. In Chrome, you can run it with --auth-server-whitelist="wacservername.domain.com".
- If the browser prompts for a certificate selection (some configurations might cause the site to request a client cert at the TLS level), that's not default for WAC. WAC by itself doesn't use TLS client-cert authentication, so you shouldn't see a certificate selection popup. If you do, perhaps you or someone configured the HTTP.sys binding on the WAC server to require client certificates.
That is not necessary for this solution (and would interfere, as WAC isn't expecting to parse client certs itself). If enabled, consider disabling that requirement, as our approach uses Kerberos group membership instead. Remove any manual netsh http client cert negotiation settings unless you have a special reason.

- Check that the user's smart card credential was cached in Windows properly. Sometimes after a fresh logon, the first hit to a secure website might trigger a PIN prompt if the browser tries to use the certificate for TLS or similar. Ensure the PIN was entered during login and is still valid (some smart cards might require PIN re-entry for signing, but usually not for Kerberos, since the Kerberos ticket was already obtained at logon).
- Lastly, confirm that the user's Windows session indeed has the AMA group. If not, WAC will keep prompting because it sees the user in the allowed group but not in the smartcard group, and might treat them as unauthorized (causing the browser to prompt again). This will result in a 401. You might see the prompt come up repeatedly and then a blank page. In WAC's log, an event or error saying the user is not authorized will confirm it. The solution is to get the AMA group into the token (log in with the card properly, and fix AMA if it's broken).

Smart card login fails on Windows: This is more of a PKI/AD issue than a WAC issue:

- If, when inserting the card at logon, you get messages like "The system could not log you on", "No valid logon servers", or "certificate not recognized", debug the smart card logon itself. Common causes: the user certificate is missing the UPN or has a UPN that doesn't match the account, the CA that issued it isn't in NTAuth or isn't trusted by the client or DC, or the DC's own certificate is missing (check that the DC has a cert in its personal store issued by your CA for domain controller authentication).
- On the client, when the logon fails, you can sometimes hit "Switch User -> Smart card logon" and see if it lists the certificate.
If not, the card middleware might not be installed or working. If it lists the certificate but errors after the PIN, it is likely an AD trust issue. The domain controller security log will have details.

Certificate Revocation issues: If a user's certificate was revoked or expired, obviously they won't be able to authenticate with it. The DC will deny the smart card logon (the event will indicate a revoked or expired cert). The user would fall back to password (if allowed), which then won't grant WAC access. The fix is to renew their certificate in advance. Always keep track of expiry dates and set reminders.

Updating Certificates: When an admin gets issued a new smart card or cert (or their cert is renewed with a new OID template), ensure your AMA mapping covers it. If you created a new template (with a new OID) for any reason, you must map that OID as well. AMA can map multiple OIDs, linking them to possibly different groups. WAC only supports one smartcard group in settings, so ideally you'd keep using the same template OID for all admin certs. If a new OID is needed (say you have multiple CAs or different templates), you could map it to the same group or include multiple groups in WAC (though the UI supports one, you might work around this by nesting groups or adding multiple allowed combos). Simpler is to stick to one cert template for this purpose.

Group Policy caching: The AMA group inclusion happens at the Kerberos TGT level. If a user logs on with a smart card, gets the group, and later the group mapping is removed or changed, an existing TGT might still carry the group until it expires (~10 hours by default). Clearing the Kerberos tickets (with klist purge or a logoff) removes it. Keep this in mind during changes: if you remove the mapping or change the group, there could be latency until all tickets expire or users log off.
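Per the ticket-caching note above, a stale TGT can carry an old group mapping for up to its lifetime; forcing a refresh on a test machine is a one-liner:

```powershell
# Discard all cached Kerberos tickets for the current logon session; new
# tickets (reflecting the current AMA mapping) are requested on next access.
klist purge

# Confirm the ticket cache was cleared and is repopulating.
klist
```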
Alternate access methods: If someone tries to use PowerShell Remoting (Enter-PSSession) or other tools to connect to the WAC gateway, they will still undergo the same check. Typically WAC is accessed via the web, but know that Windows auth is at play regardless of interface.

Known Limitations and Compatibility

When using certificate-based authentication for WAC via this method, be aware of the following limitations and considerations:

Domain-Joined Clients Required: This solution assumes admins are using domain-joined Windows machines for WAC access (so that their smart card logon yields a Kerberos token with the group). If an admin tries to access WAC from a non-domain system (where they can't do a Windows integrated logon), they would be prompted for credentials. They could technically insert their smart card and select it in the browser when prompted, but that would attempt a certificate mapping at WAC, which is not configured. WAC does not natively support direct client certificate mapping at the web application layer. The only supported way is via an AD group, as we've done. So in practice, non-domain or external access should go through a secure method (e.g. VPN into the domain or the Azure AD integration mentioned earlier). This is by design, as WAC relies on Windows Authentication, not forms or client-cert web auth.

No Native OTP/MFA Prompt: Unlike some web apps, WAC itself doesn't have a secondary prompt for OTP or similar. The smart card enforcement leverages the Windows login, so there's no separate UI in WAC for "insert your certificate". It's all transparent once set up. As such, you can't mix password + cert in a single login to WAC; it's one or the other, determined by how the user logged into Windows.

Single Smartcard Group Limit: WAC's configuration allows only one "smartcard-required" group to be set.
If you had different levels of assurance or multiple certificate profiles, you might need to create a common group that all certificate-authenticated users get. For example, if you issue different certs (say some with higher assurance), you may map multiple OIDs to the same AMA group so that any of them will satisfy the WAC check. Plan your AMA mappings accordingly (you can map multiple OIDs to one group by having each template's OID object point to the same group DN). 

Auditing: Note that when users access WAC with this setup, the logon audit on the WAC server will show a normal Kerberos login by the user. There isn't an explicit event on the WAC server saying "used certificate". The evidence of certificate use is in the DC's logs (the Kerberos AS ticket was obtained via smart card). So, auditing-wise, you can correlate: if a user accessed WAC and had the AMA group, they used a smart card. If auditing this is important, ensure you retain domain security logs. You could also set up a scheduled task and script to log an event on the WAC server when a user lacking the group tries to connect (e.g., monitor WAC error events for unauthorized access).
This post distills that 10‑minute drill into a step‑by‑step, battle‑tested playbook you can run in your own environment, complete with the "gotchas" that trip folks up, why they happen, and how to avoid them. But first... Why Use Azure File Sync?

Hybrid File Services: Cloud Meets On-Prem

Azure File Sync lets you centralize your organization's file shares in Azure Files while keeping the flexibility, performance, and compatibility of your existing Windows file servers. You can keep a full copy of your data locally or use your Windows Server as a fast cache for your Azure file share. This means you get cloud scalability and resilience, but users still enjoy local performance and familiar protocols (SMB, NFS, FTPS).

Cloud Tiering: Optimize Storage Costs

With cloud tiering, your most frequently accessed files are cached locally, while less-used files are tiered to the cloud. You control how much disk space is used for caching, and tiered files can be recalled on demand. This lets you reduce on-prem storage costs without sacrificing user experience.

Multi-Site Sync: Global Collaboration

Azure File Sync is ideal for distributed organizations. You can provision local Windows Servers in each office, and changes made in one location automatically sync to all others. This simplifies file management and enables faster access for cloud-based apps and services.

Business Continuity and Disaster Recovery

Azure Files provides resilient, redundant storage, so your local server becomes a disposable cache. If a server fails, you simply add a new server to your Azure File Sync deployment, install the agent, and sync. Your file namespace is downloaded first, so users can get back to work quickly. You can also use warm standby servers or Windows Clustering for even faster recovery.

Cloud-Side Backup

Note: Azure File Sync is NOT a backup solution... but you can reduce on-prem backup costs by taking centralized backups in the cloud using Azure Backup.
Azure file shares have native snapshot capabilities, and Azure Backup can automate scheduling and retention. Restores to the cloud are automatically downloaded to your Windows Servers.

Seamless Migration

Azure File Sync enables seamless migration of on-prem file data to Azure Files. You can sync existing file servers with Azure Files in the background, moving data without disrupting users or changing access patterns. File structure and permissions remain intact, and apps continue to work as expected.

Performance, Security, and Compatibility

Recent improvements have boosted Azure File Sync's performance (up to 200 items/sec), and it now supports Windows Server 2025 and integrates with Windows Admin Center for unified management. Managed identities and Active Directory-based authentication are supported for secure, keyless access.

Real-World Use Cases

Branch Office Consolidation: Multiple sites, each with its own file server, can be consolidated into a central Azure file share while maintaining local performance.
Business Continuity: Companies facing threats like natural disasters use Azure File Sync to improve server recovery times and ensure uninterrupted work.
Collaboration: Organizations leverage Azure File Sync for fast, secure collaboration across locations, reducing latency and simplifying IT management.

The Quick Troubleshooting TL;DR

Insufficient permissions during cloud endpoint creation → "Role assignment creation failed." You need Owner or the Azure File Sync Administrator built‑in role; Contributor isn't enough because the workflow must create role assignments.
Region mismatches → Your file share and Storage Sync Service must live in the same region as the deployment target.
Wrong identity/account → If you're signed into the wrong tenant or account mid‑portal (easy to do), the wizard fails when it tries to create the cloud endpoint. Switch to the account that actually has the required role and retry.
Agent/version issues → An old agent on your Windows Server will cause registration or enumeration problems. Use the latest agent and consider auto‑upgrade to stay current.
Networking & access keys → Ensure access keys are enabled on the storage account and required outbound URLs/ports are allowed.
Operational expectations → Azure File Sync runs on a roughly 24‑hour change detection cycle by default; for DR drills or immediate needs, trigger change detection via PowerShell.
And remember: File Sync is not a backup. Back up the storage account.

End‑to‑End Deployment Playbook

1) Prerequisites (don't skip these)

A storage account supporting SMB 3.1.1 (and required authentication settings), with access keys enabled.
An Azure file share created in the same region as your File Sync deployment.
A clear naming convention.
A Windows Server for the File Sync agent (example: Windows Server 2019).
Identity & Access: Assign either Owner or Azure File Sync Administrator (a least‑privilege built‑in role designed specifically for this scenario). Contributor will let you get partway (storage account, Storage Sync Service) but will fail when creating the cloud endpoint because it can't create role assignments.

2) Lay down the cloud side

In the Azure portal, create the file share in your chosen storage account/region.
Create a Storage Sync Service (ideally in a dedicated resource group), again ensuring the region is correct and supported for your needs.

3) Prep the server

On your Windows Server, install the Azure File Sync agent (latest version). During setup, consider enabling auto‑upgrade; if the server is down during a scheduled upgrade, it catches up on the next boot, keeping you current with security and bug fixes.
Register the server to your Storage Sync Service (select subscription, resource group, and service). If you have multiple subscriptions, the portal can occasionally hide one; PowerShell is an alternative path if needed.
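If the portal does hide one of your subscriptions, registration can be completed from PowerShell on the server itself. A hedged sketch, assuming the File Sync agent and the Az.StorageSync module are installed; the subscription and resource names are examples:

```powershell
# Hedged sketch: register this server with a Storage Sync Service from
# PowerShell. Run on the server after installing the agent; names are examples.
Connect-AzAccount
Select-AzSubscription -SubscriptionName "Contoso-Infra"   # pick the subscription the portal hid

Register-AzStorageSyncServer `
    -ResourceGroupName "rg-filesync" `
    -StorageSyncServiceName "sss-contoso"
```

Once registered, the server should appear in the server endpoint drop-down in the portal.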
4) Create the sync topology

In the Storage Sync Service, create a Sync Group. This is the container for both cloud and server endpoints.
Under normal conditions, the cloud endpoint is created automatically when you select the storage account + file share. If you hit "role assignment creation failed" here, verify your signed‑in account and role. Switching back to the account with the proper role resolves it; you can then recreate the cloud endpoint inside the existing Sync Group.
Add a server endpoint: pick the registered server (it must show up in the drop‑down; if it doesn't, registration isn't complete) and the local path to sync.

5) Cloud tiering & initial sync behavior

Cloud tiering keeps hot data locally and stubs colder data to conserve space. If you disable cloud tiering, you'll maintain a full local copy of all files.
If enabled, set the Volume Free Space Policy (how much free space to preserve on the volume) and review recall policy implications.
Choose the initial sync mode: merge existing content or overwrite.

6) Ops, monitoring, and DR notes

Change detection cadence is approximately 24 hours. For DR tests or urgent cutovers, run the change detection PowerShell command to accelerate discovery of changes.
Backups: Azure File Sync is not a backup. Protect your storage account using your standard backup strategy.
Networking: Allow required outbound ports/URLs; validate corporate proxies/firewalls.
Monitoring: Turn on the logging and monitoring you need for telemetry and auditing.

7) Performance & cost planning

Evaluate Provisioned v2 storage accounts to dial in IOPS/throughput to your business needs and gain better pricing predictability. It's a smart time to decide this up front during a new deployment.

8) Identity options & least privilege

You can also set up managed identities for File Sync to reduce reliance on user principals. If you do use user accounts, ensure they carry the Azure File Sync Administrator role or Owner.
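The topology steps above can also be driven from the Az.StorageSync module, which is handy for repeatable lab builds. A hedged sketch; every name, ID, and path below is an example, not a prescribed value:

```powershell
# Hedged sketch of the portal topology steps via Az.StorageSync;
# all names, resource IDs, and paths are lab examples.
$rg  = "rg-filesync"
$sss = "sss-contoso"

# Sync Group: the container for the cloud endpoint and server endpoints
New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $sss -Name "sg-data"

# Cloud endpoint: the storage account + file share (same region as the service)
New-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName "sg-data" -Name "cloud-data" `
    -StorageAccountResourceId "/subscriptions/<sub-id>/resourceGroups/rg-filesync/providers/Microsoft.Storage/storageAccounts/stcontoso" `
    -AzureFileShareName "share01"

# Server endpoint: a registered server plus its local path, with cloud
# tiering on and 20% volume free space preserved
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $sss |
    Where-Object { $_.FriendlyName -eq "FS01" }

New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName "sg-data" -Name "fs01-data" `
    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\Data" `
    -CloudTiering -VolumeFreeSpacePercent 20

# For DR drills: force change detection instead of waiting on the ~24-hour cycle
Invoke-AzStorageSyncChangeDetection -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName "sg-data" -CloudEndpointName "cloud-data"
```

The final command is the "trigger change detection via PowerShell" step referenced in the ops notes; scope it with -DirectoryPath if only part of the share changed.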
Keep the agent updated; it's basic hygiene that prevents a surprising number of issues.

9) Quotas & capacity troubleshooting

Hitting quota problems? Revisit your Volume Free Space Policy (cloud tiering) and recall policy. Sometimes the answer is simply adding a disk or increasing its size as data patterns evolve.

Key Benefits for Infra Teams

Hybrid file services without forklift: Keep your existing Windows file servers while centralizing data in Azure Files, adding elasticity and resiliency with minimal disruption.
Right‑sized capacity on‑prem: Cloud tiering preserves local performance for hot data and trims the cold data footprint to stretch on‑prem storage further.
Operational predictability: Built‑in auto‑upgrade for the agent and a known change detection cycle, with the ability to force change detection for DR/failover testing.
Least‑privilege by design: The Azure File Sync Administrator role gives just the rights needed to deploy/manage sync without over‑provisioning.
Performance on your terms: Option to choose Provisioned v2 to meet IOPS/throughput targets and bring cost clarity.

Available Resources

What is Azure File Sync?: https://learn.microsoft.com/azure/storage/file-sync/file-sync-introduction
Azure Files: More performance, more control, more value for your file data: https://azure.microsoft.com/blog/azure-files-more-performance-more-control-more-value-for-your-file-data/
Azure File Sync Deployment Guide: https://learn.microsoft.com/azure/storage/file-sync/file-sync-deployment-guide
Troubleshooting documentation: https://learn.microsoft.com/troubleshoot/azure/azure-storage/files/file-sync/file-sync-troubleshoot
Azure File Sync "copilot" troubleshooting experience: https://learn.microsoft.com/azure/copilot/improve-storage-accounts

Next Steps (Run This in Your Lab)

Verify roles: On the target subscription/resource group, grant Azure File Sync Administrator (or Owner) to your deployment identity. Confirm in Access control (IAM).
Create the file share in the same region as your Storage Sync Service. Enable access keys on the storage account.
Install the latest agent on your Windows Server; enable auto‑upgrade.
Register the server to your Storage Sync Service.
Create a Sync Group, then the cloud endpoint. If you see a role assignment error, re‑check your signed‑in account/role and retry.
Add the server endpoint with the right path, decide on cloud tiering, set the Volume Free Space Policy, and choose the initial sync behavior (merge vs overwrite).
Open required egress on your network devices, enable monitoring/logging, and plan backup for the storage account.
Optionally, evaluate Provisioned v2 for throughput/IOPS and predictable pricing before moving to production.

If you've got a scenario that behaves differently in the field, I want to hear about it. Drop me a note with what you tried, what failed, and where in the flow it happened.

Cheers!
Pierre

Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2008 R2 to 2019
End of support for Windows Server 2008 R2 is slated by Microsoft for January 14th, 2020. That announcement increased interest in a previous post detailing the steps for Active Directory Certificate Service migration from server versions older than 2008 R2. Many subscribers of ITOpsTalk.com have reached out asking for an update of the steps to reflect Active Directory Certificate Service migration from 2008 R2 to 2016/2019, and of course our team is happy to oblige.

Enable Nested Virtualization on Windows Server 2025
Nested virtualization allows you to run Hyper-V inside a VM, opening up incredible flexibility for testing complex infrastructure setups, demos, or learning environments, all without extra hardware.

First, ensure your Hyper-V host is capable of nested virtualization and have ready the Windows Server 2025 VM that you want to enable as a Hyper-V host.

To get started, open a PowerShell window on your Hyper-V host and execute:

Set-VMProcessor -VMName "<Your-VM-Name>" -ExposeVirtualizationExtensions $true

Replace <Your-VM-Name> with the actual name of your VM; the VM must be powered off for the setting to apply. This command configures Hyper-V to allow nested virtualization on the target VM.

Boot up the Windows Server 2025 VM that you want to configure as a Hyper-V host. In the VM, open Server Manager and attempt to install the Hyper-V role via Add Roles and Features. Most of the time, this should work right away. However, in some cases you might hit an error stating: "Hyper-V cannot be installed because virtualization support is not enabled in the BIOS."

To resolve this error, open an elevated PowerShell session inside the VM on which you want to enable Hyper-V and run:

bcdedit /set hypervisorlaunchtype auto

This command ensures the Hyper-V hypervisor starts correctly on the next boot. Restart your VM to apply the change. After the reboot, head back to Add Roles and Features and try installing Hyper-V again. This time, it should proceed smoothly without the BIOS virtualization error. Once Hyper-V is installed, perform a final reboot if prompted. Open Hyper-V Manager inside your VM and you're now ready to run test VMs in your nested environment!
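Putting the host-side steps together, here's a hedged end-to-end sketch run from the outer Hyper-V host; the VM name "WS2025-Lab" is an example:

```powershell
# Hedged sketch: enable and verify nested virtualization for an example VM.
Stop-VM -Name "WS2025-Lab"    # the processor setting only applies while the VM is off
Set-VMProcessor -VMName "WS2025-Lab" -ExposeVirtualizationExtensions $true

# Confirm the extensions are exposed before booting the guest
Get-VMProcessor -VMName "WS2025-Lab" |
    Select-Object VMName, ExposeVirtualizationExtensions

# If nested guests will need network access, allow MAC address spoofing on the
# outer VM's NIC (or configure NAT inside the guest instead)
Set-VMNetworkAdapter -VMName "WS2025-Lab" -MacAddressSpoofing On

Start-VM -Name "WS2025-Lab"
```

With the extensions exposed, installing the Hyper-V role inside the guest should succeed; if the BIOS error still appears, apply the bcdedit fix described above.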