Getting Started with Windows Admin Center Virtualization Mode

Windows Admin Center (WAC) Virtualization Mode is a new, preview experience for managing large Hyper-V virtualization fabrics—compute, networking, and storage—from a single, web-based console. It's designed to scale from a handful of hosts up to thousands, centralizing configuration and day-to-day operations.

This post walks through:
• What Virtualization Mode is and its constraints
• How to install it on a Windows Server host
• How to add an existing Hyper-V host into a resource group

Prerequisites and Constraints

Before you begin, note the current preview limitations:
• The WAC Virtualization Mode server and the Hyper-V hosts it manages must be in the same Active Directory domain.
• You cannot install Virtualization Mode side by side with a traditional WAC deployment on the same server.
• Do not install Virtualization Mode directly on a Hyper-V host you plan to manage. You can, however, install it on a VM running on that host.
• Plan for at least 8 GB RAM on the WAC Virtualization Mode server.

For TLS, the walkthrough assumes you have an Enterprise CA and are deploying domain-trusted certificates to servers, so browsers automatically trust the HTTPS endpoint. You can use a self-signed certificate, but you'll get the usual trust warnings whenever you access WAC Virtualization Mode from a host on which the self-signed certificate isn't installed. Given the domain requirements of WAC Virtualization Mode and the hosts it manages, the Enterprise CA approach is the path of least resistance.

Step 1 – Install the C++ Redistributable

On the Windows Server 2025 host that will run WAC Virtualization Mode:

1. Open Windows Terminal or PowerShell.
2. Use winget to search for the VC++ redistributable:

```powershell
winget search "VC Redist"
```

3. Identify the package corresponding to "Microsoft Visual C++ 2015–2022 Redistributable" (or equivalent).
4. Install it with winget, for example:

```powershell
winget install "Microsoft.VC++2015-2022Redist-x64"
```

This fulfills the runtime dependency for the WAC Virtualization Mode installer.

Step 2 – Install Windows Admin Center Virtualization Mode

1. Download the installer: download the Windows Admin Center Virtualization Mode installer from the Windows Insider Preview location provided in the official documentation, and save it to a local folder on the WAC host.
2. Run the setup wizard: double-click the downloaded binary, approve the UAC prompt, and on the Welcome page proceed as with traditional WAC setup.
3. Accept the license and choose the setup type: accept the license agreement and choose Express setup (suitable for most lab and PoC deployments).
4. Select a TLS certificate: when prompted, select a certificate issued by your Enterprise CA that matches the server name. Using CA-issued certificates ensures all domain-joined clients will trust the site without manual certificate import.
5. Configure PostgreSQL for WAC: Virtualization Mode uses PostgreSQL as its configuration and state database. When prompted, provide a strong password for the database account WAC will use, and record it securely if required by your org standards.
6. Configure update and diagnostic settings: choose how WAC should be updated (manual/automatic) and set diagnostic data preferences according to your policy.
7. Complete the installation: click Install to deploy the WAC Virtualization Mode web service and the PostgreSQL database instance. When installation completes, click Finish.

Step 3 – Sign In to Virtualization Mode

1. Open a browser on a domain-joined machine and browse to the WAC URL (for example, https://wac-vmode01.contoso.internal).
2. Sign in with domain credentials that have appropriate rights to manage Hyper-V hosts (for example, DOMAIN\adminuser).
3. 
You'll see the new Virtualization Mode UI, which differs significantly from traditional WAC and is optimized for fabric-wide management.

Step 4 – Create a Resource Group

Resource groups help you logically organize the Hyper-V servers you'll manage (for example, by site, function, or cluster membership).

1. In the Virtualization Mode UI, select Resource groups.
2. Click Create resource group.
3. Provide a name, such as Zava-Nested-Vert.
4. Save the resource group.

You now have a logical container ready for one or more Hyper-V hosts.

Step 5 – Prepare the Hyper-V Host

Before adding an existing Hyper-V host:

1. Ensure the host is running Hyper-V, is reachable by FQDN (for example, zava-hvA.zavaops.internal), and is in the same AD domain as the WAC Virtualization Mode server.
2. Temporarily open File and Printer Sharing from the Hyper-V host's firewall to the WAC Virtualization Mode server. This is required for initial onboarding; after onboarding, you can re-lock the firewall rules according to your security baseline.

Step 6 – Add a Hyper-V Host to the Resource Group

1. In the WAC Virtualization Mode UI, go to your resource group.
2. Click the ellipsis (…) and choose Add resource.
3. On the Add resource page, select Compute (you're adding a Hyper-V server, not a storage fabric resource).
4. Enter the Hyper-V host's FQDN (for example, zava-hvA.zavaops.internal).
5. Confirm the host resolves correctly and proceed.

Configure Networking Template

1. On the Networking page, assign fabric roles to NICs using the network template model. Each NIC can be tagged for one or more roles: Compute, Management, and Storage.
2. In a simple, single-NIC lab scenario, you may assign Compute, Management, and Storage all to Ethernet0.
3. All three roles must be fully assigned across the available adapters before you can proceed.

Configure Storage

1. On the Storage page, specify the storage model. For an existing host using local disks, choose Use existing storage.
2. In the future, you can select SAN or file server storage when those options are available and configured in your environment.

Configure Compute Properties

1. On the Compute page, configure host-level defaults: enable or disable Enhanced Session Mode, set the maximum concurrent live migrations, and confirm or update the default VM storage path.
2. Review the configuration, click Next, then Submit.
3. The Hyper-V host is registered into the resource group and becomes manageable via Virtualization Mode.

Step 7 – Verify Host and VM Management

With the host onboarded:

1. Open the resource group and select the Hyper-V host.
2. You'll see a streamlined view similar to traditional WAC, with nodes for Event logs, Files, Networks, Storage, Windows Update, and Virtual Machines.
3. To validate functionality, create a test VM:
   1. Go to Virtual Machines → Add.
   2. Provide a VM name (for example, WS25-temp).
   3. Set vCPUs (for example, 2).
   4. Optionally enable nested virtualization.
   5. Select the appropriate virtual switch.
   6. Click Create, then attach an ISO or existing VHDX and complete OS setup.

▶️ Public Preview: https://aka.ms/WACDownloadvMode
▶️ Documentation: https://aka.ms/WACvModeDocs

Microsoft Entra Domain Services: Deploy, Join a VM, and Use Classic AD Tools
Microsoft Entra Domain Services (Entra DS) provides you with the functionality of managed domain controllers in Azure. This allows you to domain-join Windows Server VMs, use Group Policy, and manage DNS on a specially prepared vNet subnet without deploying and patching your own DC VMs.

This post walks through:
• Preparing your virtual network
• Deploying Entra DS
• Configuring DNS
• Joining a Windows Server VM to the managed domain
• Using AD DS and Windows Server DNS tools from that VM

Prerequisites

• An Azure subscription.
• A Microsoft Entra tenant with a verified custom DNS domain (for example, zava.support). Entra DS uses this custom domain as the managed domain name.
• Permission to create resource groups, VNets, and Entra DS.
• Permission to manage Entra groups in the tenant (add administrators/configure RBAC).

Step 1 – Create a resource group and virtual network

1. Create a new resource group in your chosen region to hold all Entra DS resources and VMs.
2. Create a virtual network (for example, zava-entra-dsvn) in that resource group with an address space such as 172.16.0.0/16 (or a range that fits your environment).
3. Add a subnet dedicated to the Entra DS domain controllers (for example, zava-entra-dc). This subnet will host the managed domain controller resources created by Entra DS; you won't actually deploy VMs there.

Important: Keep this DC subnet separate from your workload subnets. You can use NSGs, but avoid blocking Entra DS management traffic.

Step 2 – Add a workload subnet for VMs

1. In the same virtual network, create a second subnet (for example, zava-domain-vms) for domain-joined workloads such as IIS VMs. This subnet is where you'll deploy the Windows Server VM that joins the Entra DS domain.

Step 3 – Deploy Microsoft Entra Domain Services

In the Azure portal, create a new Microsoft Entra Domain Services managed domain by performing the following steps:

1. Select the resource group you created earlier.
2. Confirm the DNS domain name (for example, zava.support)—this comes from your Entra tenant's custom domain.
3. Choose the region (the same region as the virtual network).
4. Keep the default Enterprise SKU unless you have a specific need for another.
5. On the Networking page, select the virtual network you created and the DC subnet for the managed domain controllers.
6. On the Administration page, note that the AAD DC Administrators group (legacy name shown in the portal) is effectively the Domain Admins equivalent for the managed domain. Any user you add to this group in Entra becomes a domain admin in Entra DS.
7. Configure the synchronization scope between Entra and Entra DS:
   · All accounts (default) – synchronizes both cloud-only and synchronized users.
   · Cloud-only accounts – useful when you're already syncing on-prem identities and you only want specific cloud accounts in Entra DS.
8. Review the Security settings page. By default:
   · NTLMv1 is disabled.
   · You can enable/disable NTLM password sync, or effectively disable NTLM entirely.
   · RC4 encryption is disabled.
   · Kerberos armoring is enabled.
   · LDAP signing and LDAP channel binding are enabled.
9. Review your configuration and create the Entra DS managed domain.

Note: after deployment, you cannot change:
• The managed domain DNS name
• Subscription
• Resource group
• Virtual network and subnet used by Entra DS

Step 4 – Fix virtual network DNS with Entra DS health checks

1. Once deployment completes, open the Entra DS resource and go to View health.
2. Run the health checks. If the diagnostic reports that the virtual network DNS servers are not set to the Entra DS managed DC IPs, select Fix to automatically configure the VNet's DNS servers.
   · In Entra DS, note the DNS server IPs (for example, 172.16.0.4 and 172.16.0.5).
   · In the virtual network's DNS settings, confirm these IPs are configured as custom DNS servers.
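If you manage your Azure environment from PowerShell, the same DNS check and fix from Step 4 can be done with the Az module. This is a hedged sketch, not the official procedure: the resource group name zava-entra-rg is an assumption, while the VNet name and DNS IPs are the walkthrough's examples.

```powershell
# Hedged sketch: confirm (or set) the VNet's custom DNS servers to the
# Entra DS managed-domain IPs. 'zava-entra-rg' is an assumed resource
# group name; substitute your own.
$vnet = Get-AzVirtualNetwork -Name 'zava-entra-dsvn' -ResourceGroupName 'zava-entra-rg'
$vnet.DhcpOptions.DnsServers            # inspect the current custom DNS servers
$vnet.DhcpOptions.DnsServers = @('172.16.0.4', '172.16.0.5')
$vnet | Set-AzVirtualNetwork            # apply the change
```

Existing VMs on the VNet pick up the new DNS servers after a restart (or a DHCP lease renewal inside the guest).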
Tip: Any VM in this virtual network that needs to join the managed domain must use these Entra DS DNS addresses.

Step 5 – Add administrators to the AAD DC Administrators group

1. In the Entra admin center, go to Groups > All groups and locate AAD DC Administrators.
2. Open the group and add your primary admin account (for example, prime@zava.support), plus a dedicated domain admin–style account (for example, adds.prime@zava.support) to act as the primary administrator for the managed domain.

Important note: You'll need to change the password of any Entra account you want to use in the managed AD DS domain after deploying Entra DS. The password change is what triggers password synchronization between Entra and Entra DS, allowing you to use the Entra account. If you don't change the password, you'll be unable to use the account with Entra DS even though it will function normally in other parts of Azure. This trips a lot of people up.

Step 6 – Create a Windows Server IaaS VM on the workload subnet

1. In the Azure portal, create a new Windows Server VM (for example, an IIS server):
   1. Place it in the same resource group.
   2. Select the virtual network you created earlier.
   3. Attach it to the workload subnet (for example, zava-domain-vms).
   4. Configure a local administrator account (for example, username prime with a strong password).
2. On the Management blade, note the option "Login with Microsoft Entra ID". This enables direct Entra login to the VM but does not join the VM to the Entra DS domain. For this walkthrough, you'll join the VM to Entra DS using classic domain join, so you don't need to enable this option.
3. Complete the wizard and deploy the VM.

Step 7 – Connect to the VM and verify DNS

1. Once the VM is deployed, open the VM in the portal and select Connect > RDP. Request a JIT RDP port opening if required, then download the RDP file and open it with Remote Desktop Connection.
2. Sign in with the local administrator account you configured when deploying the VM, not your Entra account.
3. In the VM, open a command prompt and run:

```
ipconfig /all
```

Confirm that the DNS servers are the Entra DS managed IPs (for example, 172.16.0.4 and 172.16.0.5).

If DNS is wrong: Double-check the VNet's DNS settings, ensure the VM is attached to the correct virtual network and subnet, then restart the VM.

Step 8 – Join the VM to the Entra DS domain

1. On the VM, open Server Manager and select Local Server.
2. Next to Workgroup, select the workgroup name to open System Properties (Computer Name tab).
3. Select Change… and then, under Member of, select Domain and enter the Entra DS domain name (for example, zava.support).
4. When prompted for credentials, use an account that's a member of AAD DC Administrators, such as adds.prime@zava.support, and enter the password.
5. When you receive confirmation that the computer has joined the domain, restart the VM.

Step 9 – Sign in with an Entra DS domain account

1. After the VM restarts, reconnect via RDP using the VM's public IP with:
   · Username: your domain UPN (for example, adds.prime@zava.support).
   · Password: the account's password.
2. Confirm that you are signed in as a domain user in the Entra DS managed domain.

Step 10 – Use AD DS and DNS tools on the domain-joined VM

1. Install and open Active Directory Users and Computers (RSAT) on the VM. Browse the managed domain structure and notice containers such as AADDC Computers and AADDC Users, and groups like Domain Admins that map back to Entra groups.
2. Create an organizational unit (OU), for example IIS Servers, to contain IIS VMs.
3. Open Group Policy Management, create a Group Policy Object targeting the IIS Servers OU, then link it and configure settings as required (hardening, IIS config, etc.).
4. Open the DNS Manager console on the VM, which now connects to the Entra DS–managed DNS servers.
5. Create a new Host (A) record, for example:
   · Name: iis3
   · FQDN: iis3.zava.support
   · IP address: the appropriate internal address.
6. Open a command prompt and verify DNS resolution with:

```
nslookup iis3.zava.support
```

Confirm it returns the correct IP address.

Entra DS gives you familiar AD capabilities—domain join, Group Policy, and DNS—without the overhead of running and maintaining your own DC VMs in Azure. You can find out more at: https://learn.microsoft.com/en-us/entra/identity/domain-services/overview

Windows Server 2025 Hyper-V Workgroup Cluster with Certificate-Based Authentication
In this guide, we will walk through creating a 2-node or 4-node Hyper-V failover cluster where the nodes are not domain-joined, using mutual certificate-based authentication instead of NTLM or shared local accounts. We are going to leverage X.509 certificates for node-to-node authentication. Without certificates, you could do this with NTLM, but we're avoiding that: NTLM is still supported, but the general recommendation is to deprecate it where you can. We can't use Kerberos because our nodes won't be domain-joined. Windows Server clusters are much easier to build when everything is domain-joined, but that's not what we're doing here, because there are scenarios where people want each cluster node to be standalone (probably why you are reading this article).

Prerequisites and Environment Preparation

Before diving into configuration, ensure the following prerequisites and baseline setup:

Server OS and Roles: All cluster nodes must be running Windows Server 2025 (same edition and patch level). Install the latest updates and drivers on each node. Each node should have the Hyper-V role and Failover Clustering feature available (we will install these via PowerShell shortly).

Workgroup configuration: Nodes must be in a workgroup, and all nodes should use the same workgroup name. All nodes should share a common DNS suffix so that they can resolve each other's FQDNs. For example, if your chosen suffix is mylocal.net, ensure each server's FQDN is NodeName.mylocal.net.

Name Resolution: Provide a way for nodes to resolve each other's names (and the cluster name). If you have no internal DNS server, use the hosts file on each node to map hostnames to IPs. At minimum, add entries for each node's name (short and FQDN) and the planned cluster name (e.g. Cluster1 and Cluster1.mylocal.net) pointing to the cluster's management IP address.

Network configuration: Ensure a reliable, low-latency network links all nodes.
Ideally, use at least two networks or VLANs: one for management/cluster communication and one dedicated to Live Migration traffic. This improves performance and security (live migration traffic can be isolated). If using a single network, ensure it is a trusted, private network, since live migration data is not encrypted by default. Assign static IPs (or DHCP reservations) on the management network for each node, and decide on an unused static IP for the cluster itself. Verify that the necessary firewall rules for clustering are enabled on each node (Windows adds these when the Failover Clustering feature is installed, but if your network is classified as Public, you may need to enable them or set the network location to Private).

Time synchronization: Consistent time is important for certificate trust. Configure NTP on each server (e.g. pointing to a reliable internet time source or a local NTP server) so that system clocks stay in sync.

Shared storage: Prepare the shared storage that all nodes will use for Hyper-V. This can be an iSCSI target or an SMB 3.0 share accessible to all nodes. For iSCSI or SAN storage, connect each node to the iSCSI target (e.g. using the Microsoft iSCSI Initiator) and present the same LUN(s) to all nodes. Do not bring the disks online or format them on individual servers – leave them raw for the cluster to manage. For an SMB 3 file share, ensure the share is configured for continuous availability. Note: A file share witness for quorum is not supported in a workgroup cluster, so plan to use a disk witness or cloud witness instead.

Administrative access: You will need Administrator access to each server. While we will avoid using identical local user accounts for cluster authentication, you should still have a way to log into each node (e.g. the built-in local Administrator account on each machine).
If using Remote Desktop or PowerShell Remoting for setup, ensure you can authenticate to each server (we will configure certificate-based WinRM for secure remote PowerShell). The cluster creation process can be done by running commands locally on each node to avoid passing NTLM credentials.

Obtaining and Configuring Certificates for Cluster Authentication

The core of our setup is the use of mutual certificate-based authentication between cluster nodes. Each node will need an X.509 certificate that the others trust. We will outline how to use an internal Active Directory Certificate Services (AD CS) enterprise CA to issue these certificates, and mention alternatives for test environments. We are using AD CS even though the nodes aren't domain-joined: just because the nodes aren't members of the domain doesn't mean you can't use an Enterprise CA to issue certificates; you just have to ensure the nodes are manually configured to trust the CA's certificates.

Certificate Requirements and Template Configuration

For clustering (and related features like Hyper-V live migration) to authenticate using certificates, the certificates must meet specific requirements:

Key Usage: The certificate should support digital signature and key encipherment (these are typically enabled by default for SSL certificates).

Enhanced Key Usage (EKU): It must include both Client Authentication and Server Authentication EKUs. Having both allows the certificate to be presented by a node as a client (when initiating a connection to another node) and as a server (when accepting a connection). In the certificate's properties you should see Client Authentication (1.3.6.1.5.5.7.3.2) and Server Authentication (1.3.6.1.5.5.7.3.1) listed under "Enhanced Key Usage".

Subject Name and SAN: The certificate's subject or Subject Alternative Name should include the node's DNS name. It is recommended that the Subject Common Name (CN) be set to the server's fully qualified DNS name (e.g. Node1.mylocal.net).
Also include the short hostname (e.g. Node1) in the Subject Alternative Name (SAN) extension (DNS entries). If you have already chosen a cluster name (e.g. Cluster1), include the cluster's DNS name in the SAN as well. This ensures that any node's certificate can be used to authenticate connections addressed to the cluster's name or the node's name. (Including the cluster name in all node certificates is optional, but it can facilitate management access via the cluster name over HTTPS, since whichever node responds will present a certificate that matches the cluster name in its SAN.)

Trust: All cluster nodes must trust the issuer of the certificates. If using an internal enterprise CA, this means each node should have the CA's root certificate in its Trusted Root Certification Authorities store. If you are using a standalone or third-party CA, similarly ensure the root (and any intermediate CA) is imported into each node's Trusted Root store.

Next, on your enterprise CA, create a certificate template for the cluster node certificates (or use an appropriate existing template):

Template basis: A good starting point is the built-in "Computer" or "Web Server" template. Duplicate the template so you can modify settings without affecting the defaults.

General Settings: Give the new template a descriptive name (e.g. "Workgroup Cluster Node"). Set the validity period (e.g. 1 or 2 years – plan a manageable renewal schedule, since these certs will need renewal in the future).

Compatibility: Ensure it's set to at least Windows Server 2016 or higher for both Certification Authority and Certificate Recipient to support modern cryptography.

Subject Name: Since our servers are not domain-joined (and thus cannot auto-enroll with their AD computer name), configure the template to allow the subject name to be supplied in the request. On the template's Subject Name tab, choose "Supply in request" (this allows us to specify the SAN and CN when we request the cert on each node).
Alternatively, rely on the SAN field in the request – modern certificate requests will typically put the FQDN in the SAN.

Extensions: On the Extensions tab, edit Key Usage to ensure it includes Digital Signature and Key Encipherment (these should already be selected by default for Computer templates). Then edit Extended Key Usage and make sure Client Authentication and Server Authentication are present. If using a duplicated Web Server template, add the Client Authentication EKU; if using the Computer template, both EKUs should already be there. Also enable private key export if your policy requires it (though generally private keys should not be exported; here each node will have its own cert, so export is not necessary except for backup purposes).

Security: Allow the account that will be requesting the certificate to enroll. Since the nodes are not in AD, you might generate the CSR on each node and then submit it via an admin account. One approach is to use a domain-joined management PC or the CA server itself to submit the CSR, so ensure domain users (or a specific user) have Enroll permission on the template.

Publish the template: On the CA, publish the new template so it is available for issuing.

Obtaining Certificates from the Enterprise CA

Now, for each cluster node, request a certificate from the CA using the new template. To do this, on each node, create an INF file describing the certificate request. For example, Node1.inf might specify the Subject as CN=Node1.mylocal.net and include SANs for Node1.mylocal.net, Node1, Cluster1.mylocal.net, and Cluster1. Also specify in the INF that you want the Client and Server Auth EKUs (or, since the template includes them by default, it may not be necessary to list them explicitly). Then run:

```
certreq -new Node1.inf Node1.req
```

This generates a CSR file (Node1.req). Transfer this request to a machine where you can reach the CA (or use the CA web enrollment). Submit the request to your CA, specifying the custom template.
For example:

```
certreq -submit -attrib "CertificateTemplate:Workgroup Cluster Node" Node1.req Node1.cer
```

(Or use the Certification Authority MMC to approve the pending request.) This yields Node1.cer. Finally, import the issued certificate on Node1:

```
certreq -accept Node1.cer
```

This will automatically place the certificate in the Local Machine Personal store with the private key.

Using the Certificates MMC (if the CA web portal is available): On each node, open the Certificates (Local Computer) MMC and, under Personal > Certificates, initiate a new certificate request. Use the Active Directory Enrollment Policy if the node can reach the CA's web enrollment (even if not domain-joined, you can often authenticate with a domain user account for enrollment). Select the custom template and supply the DNS names. Complete the enrollment to obtain the certificate in the Personal store.

On a domain-joined helper system: Alternatively, use a domain-joined machine to request on behalf of the node (using the "Enroll on behalf of" feature with an Enrollment Agent certificate, or simply request and then export/import). This is more complex and usually not needed unless policy restricts direct enrollment.

After obtaining each certificate, verify on the node that it appears in Certificates (Local Computer) > Personal > Certificates. The Issued To value should be the node's FQDN, and on the Details tab you should see the required EKUs and SAN entries. Also import the CA's root certificate into Trusted Root Certification Authorities on each node (the certreq -accept step may do this automatically if the chain is provided; if not, manually import the CA root). A quick check using the Certificates MMC or PowerShell can confirm trust.
For example, to check via PowerShell:

```powershell
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*Node1*" } |
    Select-Object Subject, EnhancedKeyUsageList, NotAfter
```

Make sure the EnhancedKeyUsageList shows both Client and Server Authentication and that NotAfter (the expiry) is a reasonable date. Also ensure there are no errors about an untrusted issuer – the certificate status should show "This certificate is OK".

Option: Self-Signed Certificates for Testing

For a lab or proof of concept (where an enterprise CA is not available), you can use self-signed certificates. The key is to create a self-signed cert that includes the proper names and EKUs, and then trust that cert across all nodes. Use PowerShell's New-SelfSignedCertificate with appropriate parameters. For example, on Node1:

```powershell
$cert = New-SelfSignedCertificate -DnsName "Node1.mylocal.net", "Node1", "Cluster1.mylocal.net", "Cluster1" `
    -CertStoreLocation Cert:\LocalMachine\My `
    -KeyUsage DigitalSignature, KeyEncipherment `
    -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2")
```

This creates a certificate for Node1 with the specified DNS names and both Server Authentication and Client Authentication EKUs. Repeat on Node2 (adjusting names accordingly). Alternatively, you can generate a temporary root CA certificate and then issue child certificates to each node (PowerShell's -TestRoot switch simplifies this by generating a root and end-entity cert together).

If you created individual self-signed certs per node, export each node's certificate (without the private key) and import it into the Trusted People or Trusted Root store of the other nodes. (Trusted People works for peer trust of specific certs; Trusted Root works if you created a root CA and issued from it.) For example, if Node1 and Node2 each have self-signed certs, import Node1's cert as a Trusted Root on Node2 and vice versa. This is required because self-signed certs are not automatically trusted. Using CA-issued certs is strongly recommended for production.
Self-signed certs should only be used in test environments, and if used, monitor and manually renew them before expiration (since there's no CA to do it for you). A lot of problems have occurred in production systems because people used self-signed certs and forgot that they expire.

Setting Up WinRM over HTTPS for Remote Management

With certificates in place, we can configure Windows Remote Management (WinRM) to use them. WinRM is the service behind PowerShell Remoting and many remote management tools. By default, WinRM uses HTTP (port 5985) and authenticates via Kerberos or NTLM. In a workgroup scenario, NTLM over HTTP would be used – we want to avoid that. Instead, we will enable WinRM over HTTPS (port 5986) with our certificates, providing encryption and the ability to use certificate-based authentication for management sessions.

Perform these steps on each cluster node:

Verify the certificate for WinRM: WinRM requires a certificate in the Local Computer Personal store that has a Server Authentication EKU and whose Subject or SAN matches the hostname. We have already enrolled such a certificate for each node. Double-check that the certificate's Issued To (CN or one of the SAN entries) exactly matches the hostname that clients will use (e.g. the FQDN). If you plan to manage via the short name, ensure the short name is in the SAN; if via FQDN, that's covered by the CN or SAN. The certificate must not be expired or revoked, and it should be issued by a CA that the clients trust (not self-signed unless the client trusts it).

Enable the HTTPS listener: Open an elevated PowerShell on the node and run:

```
winrm quickconfig -transport:https
```

This command creates a WinRM listener on TCP 5986 bound to the certificate. If it says no certificate was found, you may need to specify the certificate manually.
You can do so with:

```powershell
# Find the certificate thumbprint (assuming only one with Server Auth)
$thumb = (Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.EnhancedKeyUsageList -match "Server Authentication" } |
    Select-Object -First 1 -ExpandProperty Thumbprint)
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbprint $thumb -Force
```

Verify the listeners with:

```
winrm enumerate winrm/config/listener
```

You should see an HTTPS listener with the hostname, listening on 5986, and the certificate's thumbprint. WinRM will automatically choose a certificate that meets the criteria (if multiple are present, it picks the one with a CN matching the machine name, so ideally use a unique cert to avoid ambiguity).

Disable unencrypted/HTTP access (optional but recommended): Since we want all remote management encrypted and want to eliminate NTLM, you can disable the HTTP listener. Run:

```powershell
Remove-WSManInstance -ResourceURI winrm/config/Listener -SelectorSet @{Address="*"; Transport="HTTP"}
```

This ensures WinRM is only listening on HTTPS. You may also configure the WinRM service to reject unencrypted traffic and disallow Basic authentication, to prevent any fallback to insecure methods:

```
winrm set winrm/config/service '@{AllowUnencrypted="false"}'
winrm set winrm/config/service/auth '@{Basic="false"}'
```

(By default, AllowUnencrypted is false anyway when HTTPS is used, and Basic is false unless explicitly enabled.)

TrustedHosts (if needed): In a workgroup, WinRM won't automatically trust hostnames for authentication. However, when using certificate authentication, the usual TrustedHosts requirement may not apply in the same way as for NTLM/Negotiate. If you plan to authenticate with username/password over HTTPS (e.g. using Basic or default CredSSP), you will need to add the other nodes (or management station) to the TrustedHosts list on each node.
This isn't needed for the cluster's internal communication (which uses certificates via clustering, not WinRM), but it might be needed for your remote PowerShell sessions depending on method. To allow all (not recommended for security), you could do:

```powershell
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*"
```

Or specify each host:

```powershell
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "Node1,Node2,Cluster1"
```

This setting allows the local WinRM client to talk to those remote names without Kerberos. If you will use certificate-based authentication for WinRM (where the client presents a cert instead of username/password), TrustedHosts is not required; certificate auth doesn't rely on host trust in the same way.

(Optional) Configure certificate authentication for admin access: One of the benefits of the HTTPS listener is that you can use certificate mapping to log in without a password. For advanced users: issue a client certificate for yourself (with Client Authentication EKU), then configure each server to map that cert to a user (for example, map to the local Administrator account). This involves creating a mapping entry in winrm/config/service/certmapping. For instance:

```powershell
# Example: map a client cert (selected by issuing CA thumbprint and subject) to a local account
winrm create winrm/config/service/certmapping?Issuer=<CAThumbprint>+Subject=<certSubject>+URI=* '@{UserName="Administrator"; Password="<adminPassword>"; Enabled="true"}'
```

Then from your management machine, you can use that certificate to authenticate. While powerful, this goes beyond the core cluster setup, so we won't detail it further. Without this, you can still connect to the nodes using:

```powershell
Enter-PSSession -ComputerName Node1 -UseSSL -Credential Node1\Administrator
```

(This will prompt for the password but send it safely over the encrypted channel.) At this point, we have each node prepared with a trusted certificate and WinRM listening securely.
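Before testing connectivity, the certificate checks described earlier can be scripted. A minimal sketch to run on each node, assuming the walkthrough's setup where each node holds a CA-issued cert in the machine store:

```powershell
# List non-expired Server Authentication certs whose SAN/subject covers this node's FQDN
$fqdn = [System.Net.Dns]::GetHostEntry($env:COMPUTERNAME).HostName
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object {
        $_.NotAfter -gt (Get-Date) -and
        ($_.EnhancedKeyUsageList | Where-Object FriendlyName -eq "Server Authentication") -and
        ($_.DnsNameList.Unicode -contains $fqdn)
    } |
    Select-Object Subject, NotAfter, Thumbprint
```

If nothing is returned, the HTTPS listener setup will fail to find a usable certificate; re-check enrollment before proceeding.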
Test the connectivity: from one node, try to start a PowerShell remote session to the other using HTTPS. For example, on Node1 run:

```powershell
Test-WsMan Node2 -UseSSL
Enter-PSSession -ComputerName Node2 -UseSSL -Credential Node2\Administrator
```

You should connect without credential errors or warnings (you may get a certificate trust prompt if the client machine doesn't trust the server cert; make sure the CA root is in the client's trust store as well). Once you can manage nodes remotely over HTTPS, you're ready to create the cluster.

Installing the Hyper-V and Failover Clustering Roles

All cluster nodes need the Hyper-V role (for running VMs) and the Failover Clustering feature. We will use PowerShell to install these together on each server. On each node, open an elevated PowerShell (locally or via your new WinRM setup) and run:

```powershell
Install-WindowsFeature -Name Failover-Clustering, Hyper-V -IncludeManagementTools -Restart
```

This installs the Hyper-V hypervisor, the clustering feature, and the management tools (including the Failover Cluster Manager and Hyper-V Manager GUIs and the PowerShell modules). The server will restart if Hyper-V was not previously enabled (we include -Restart for convenience). After the reboot, run the command on the next node (if doing it remotely, do one node at a time). Alternatively, use the Server Manager GUI, or run Install-WindowsFeature without -Restart and reboot manually. After all nodes are back up, verify the features:

```powershell
Get-WindowsFeature -Name Hyper-V, Failover-Clustering
```

It should show both as Installed. Also confirm the Failover Clustering PowerShell module is available (Get-Module -ListAvailable FailoverClusters) and the Cluster service is installed (though not yet configured).

Cluster service account: Windows Server 2016+ automatically creates a local account called CLIUSR used by the cluster service for internal communication. Ensure this account was created (Computer Management > Users). We won't interact with it directly, but be aware it exists.
Do not delete or disable CLIUSR; the cluster uses it alongside certificates for bootstrapping. (All cluster node communications will now use either Kerberos or certificate auth; NTLM is not needed in WS2019+ clusters.) Now that you've backflipped and shenaniganed with all the certificates, you can actually get around to building the cluster.

Creating the Failover Cluster (Using DNS as the Access Point)

Here we will create the cluster and add nodes to it using PowerShell. The cluster will use a DNS name for its administrative access point (since there is no Active Directory for a traditional cluster computer object). The basic steps are:

1. Validate the configuration (optional but recommended).
2. Create the cluster (initially with one node to avoid cross-node authentication issues).
3. Join additional node(s) to the cluster.
4. Configure cluster networking, quorum, and storage (CSV).

Validate the Configuration (Cluster Validation)

It's good practice to run the cluster validation tests to catch any misconfiguration or hardware issues before creating the cluster. Microsoft supports a cluster only if it passes validation or if any errors are acknowledged as non-critical. Run the following from one of the nodes (this will reach out to all nodes):

```powershell
Test-Cluster -Node Node1.mylocal.net, Node2.mylocal.net
```

Replace with your actual node names (include all 2 or 4 nodes). The cmdlet will run a series of tests (network, storage, system settings). Ensure that all tests either pass or only have warnings that you understand. For example, warnings about "no storage is shared among all nodes" are expected if you haven't yet configured iSCSI or if using SMB (you can skip the storage tests with -Ignore Storage if needed). If critical tests fail, resolve those issues (networking, disk visibility, etc.) before proceeding.

Create the Cluster (with the First Node)

On one node (say Node1), use the New-Cluster cmdlet to create the cluster with that node as the first member.
By doing it with a single node initially, we avoid remote authentication at cluster creation time (no need for Node1 to authenticate to Node2 yet):

```powershell
New-Cluster -Name "Cluster1" -Node Node1 -StaticAddress "10.0.0.100" -AdministrativeAccessPoint DNS
```

Here:

• -Name is the intended cluster name (this will be the name clients use to connect to the cluster, e.g. for management or as a CSV namespace prefix). We use "Cluster1" as an example.
• -Node Node1 specifies which server to include initially (Node1's name).
• -StaticAddress sets the cluster's IP address (choose one in the same subnet that is not in use; this IP will be brought online as the "Cluster Name" resource). In this example 10.0.0.100 is the cluster IP.
• -AdministrativeAccessPoint DNS indicates we're creating a DNS-only cluster (no AD computer object). This is the default in workgroup clusters, but we specify it explicitly for clarity.

The command will create the cluster service, register the cluster name in DNS (if DNS is configured and dynamic updates are allowed), and bring the core cluster resources online. It will also create a cluster-specific self-signed certificate for internal use if needed, but since we have our CA-issued certs in place, the cluster may use those for node authentication.

Note: If New-Cluster fails to register the cluster name in DNS (common in workgroup setups), you might need to create a manual DNS A record for "Cluster1" pointing to 10.0.0.100 on whatever DNS server the nodes use. Alternatively, add "Cluster1" to each node's hosts file (as we did in the prerequisites). This ensures that the cluster name is resolvable. The cluster will function without AD, but it still relies on DNS for name resolution of the cluster name and node names.

At this point, the cluster exists with one node (Node1). You can verify by running cluster cmdlets on Node1, for example: Get-Cluster (should list "Cluster1") and Get-ClusterNode (should list Node1 as Up).
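If you hit the DNS registration issue from the note above, the hosts-file fallback can be scripted. A quick sketch using the example name and IP from this walkthrough; run it in an elevated session on each node:

```powershell
# Make the cluster name resolvable locally if dynamic DNS registration failed
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.100`tCluster1"
```

Remember to remove the entry later if you move the cluster IP or fix DNS registration, since stale hosts entries are easy to forget.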
In Failover Cluster Manager, you could also connect to "Cluster1" (or to Node1) and see the cluster.

Add Additional Nodes to the Cluster

Now we will add the remaining node(s) to the cluster. On each additional node, run the following (replace "Node2" with the name of that node and adjust the cluster name accordingly):

```powershell
Add-ClusterNode -Cluster Cluster1 -Name Node2
```

Run this on Node2 itself (locally). This instructs Node2 to join the cluster named Cluster1. Because Node2 can authenticate the cluster (Node1) via the cluster's certificate and vice versa, the join should succeed without prompting for credentials. Under the hood, the cluster service on Node2 will use the certificate (and the CLIUSR account) to establish trust with Node1's cluster service. Repeat the Add-ClusterNode command on each additional node (Node3, Node4, etc., one at a time). After each join, verify by running Get-ClusterNode on any cluster member; the new node should show up with status "Up".

If for some reason you prefer a single command from Node1 to add the others, you could use:

```powershell
# Run on Node1:
Add-ClusterNode -Name Node2, Node3 -Cluster Cluster1
```

This would attempt to add Node2 and Node3 from Node1. It may prompt for credentials or require TrustedHosts if no common auth is present. Using the local Add-ClusterNode on each node avoids those issues by performing the action locally. Either way, at the end all nodes should be members of Cluster1.

Configure Quorum (Witness)

Quorum configuration is critical, especially with an even number of nodes. The cluster will default to Node Majority (no witness) or may try to assign a witness if it finds eligible storage. Use a witness to avoid a split-brain scenario. If you have a small shared disk (LUN) visible to both nodes, that can be a Disk Witness. Alternatively, use a Cloud Witness (Azure).
To configure a disk witness, first make sure the disk is seen as Available Storage in the cluster, then run:

```powershell
Get-ClusterAvailableDisk | Add-ClusterDisk
Set-ClusterQuorum -Cluster Cluster1 -NodeAndDiskMajority "<DiskResourceName>"
```

(Replace <DiskResourceName> with the name of the disk resource from Get-ClusterResource.) Using Failover Cluster Manager, you can instead run the Configure Cluster Quorum wizard and select "Add a disk witness". If no shared disk is available, a Cloud Witness is an easy option (it requires an Azure Storage account and key):

```powershell
Set-ClusterQuorum -Cluster Cluster1 -CloudWitness -AccountName "<StorageAccount>" -AccessKey "<Key>"
```

Do not use a File Share Witness; as noted earlier, file share witnesses are not supported in workgroup clusters because the cluster cannot authenticate to a remote share without AD.

A 4-node cluster can sustain two node failures if properly configured. It's recommended to also configure a witness for even-numbered clusters to avoid a tie (2–2) during a dual-node failure scenario. A disk or cloud witness is recommended (same process as above). With 4 nodes, you would typically use Node Majority + Witness. The cluster quorum wizard can automatically choose the best quorum config (typically it will pick Node Majority + Witness if you run the wizard and have a witness available). You can verify the quorum configuration with Get-ClusterQuorum. Make sure it lists the witness you configured (if any) and that the cluster core resources show the witness online.

Add Cluster Shared Volumes (CSV) or Configure VM Storage

Next, prepare storage for Hyper-V VMs. If using a shared disk (block storage like iSCSI/SAN), after adding the disks to the cluster (they should appear in Storage > Disks in Failover Cluster Manager), you can enable Cluster Shared Volumes (CSV). CSV allows all nodes to concurrently access the NTFS/ReFS volume, simplifying VM placement and live migration.
To add an available cluster disk as a CSV volume:

```powershell
# Convert a clustered disk (in Available Storage) to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

Repeat for each clustered disk you want as CSV; each one is mounted under C:\ClusterStorage\ on all nodes. Alternatively, right-click the disk in Failover Cluster Manager and choose Add to Cluster Shared Volumes. Once done, format the volume (if not already formatted) with NTFS or ReFS via any node (it will be accessible as C:\ClusterStorage\Volume1\ etc. on all nodes). Now this shared volume can store all VM files, and any node can run any VM using that storage.

If using an SMB 3 share (NAS or file server), you won't add this to cluster storage; instead, each Hyper-V host will connect to the SMB share directly. Ensure each node has access credentials for the share. In a workgroup, that typically means the NAS is also in a workgroup and you've created a local user on the NAS that each node uses (via stored credentials); this is outside the cluster's control. Each node should be able to run New-SmbMapping or simply access the UNC path. Test access from each node (e.g. dir \\NAS\HyperVShare). In Hyper-V settings, you might set the Default Virtual Hard Disk Path to the UNC or just specify the UNC when creating VMs.

Note: Hyper-V supports storing VMs on SMB 3.0 shares with Kerberos or certificate-based authentication, but in a workgroup you'll likely rely on a username/password for the share (which is a form of local account usage at the NAS). This doesn't affect cluster node-to-node auth, but it's a consideration for securing the NAS.

Verify Cluster Status

At this stage, run some quick checks to ensure the cluster is healthy:

Get-Cluster – should show the cluster name, IP, and core resources online.
Get-ClusterNode – all nodes should be Up.
Get-ClusterResource – should list resources (Cluster Name, IP Address, any witness, any disks) and their state (Online).
The Cluster Name resource will be of type "Distributed Network Name" since this is a DNS-only cluster. Use Failover Cluster Manager (you can launch it on one of the nodes or from RSAT on a client) to connect to "Cluster1". Ensure you can see all nodes and storage. When prompted to connect, use <clustername> or <clusterIP>; with our certificate setup, it may be best to connect by cluster name (make sure DNS/hosts resolves it to the cluster IP). If a certificate trust warning appears, it might be because the management station doesn't trust the cluster node's cert or you connected with a name not in the SAN. As a workaround, connect directly to a node in Failover Cluster Manager (e.g. Node1), which then enumerates the cluster. Now you have a functioning cluster ready for Hyper-V workloads, with secure authentication between nodes. Next, we configure Hyper-V specific settings like Live Migration.

Configuring Hyper-V for Live Migration in the Workgroup Cluster

One major benefit introduced in Windows Server 2025 is support for Live Migration in workgroup clusters (previously, live migration required Kerberos and thus a domain). In WS2025, cluster nodes use certificates to mutually authenticate for live migration traffic. This allows VMs to move between hosts with no downtime even in the absence of AD. We will enable and tune live migration for our cluster. By default, the Hyper-V role might have live migration disabled (for non-clustered hosts). In a cluster, it may be auto-enabled when the Failover Clustering and Hyper-V roles are both present, but to make sure it is enabled, run:

```powershell
Enable-VMMigration
```

This enables the host to send/receive live migrations. In PowerShell, no output means success. (In Hyper-V Manager, this corresponds to ticking "Enable incoming and outgoing live migrations" in the Live Migrations settings.) In a workgroup, the only choice in the UI would be CredSSP (since Kerberos requires a domain).
CredSSP means you must initiate the migration from a session where you are logged onto the source host so your credentials can be delegated. We cannot use Kerberos here, but the cluster's internal PKU2U certificate mechanism will handle node-to-node auth for us when orchestrated via Failover Cluster Manager. No explicit setting is needed for cluster-internal certificate usage; Windows will use it automatically for the actual live migration operation. If you were to use PowerShell, the default MigrationAuthenticationType is CredSSP for a workgroup. You can confirm (or set explicitly, though not strictly required):

```powershell
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP
```

(Do this on each node; it just ensures the Hyper-V service knows to use CredSSP, which aligns with our need to initiate migrations from an authenticated context.) If your cluster nodes were domain-joined, Windows Server 2025 enables Credential Guard by default, which blocks CredSSP. In our case (workgroup), Credential Guard is not enabled by default, so CredSSP will function. Just be aware that if you ever join these servers to a domain (or they were once joined to a domain before being demoted to a workgroup), you'd need to configure Kerberos constrained delegation or disable Credential Guard to use live migration.

For security and performance, do not use the management network for VM migration if you have other NICs. We will designate the dedicated network (e.g. "LMNet" or a specific subnet) for migrations. You can configure this via PowerShell or Failover Cluster Manager. Using PowerShell, run the following on each node:

```powershell
# Example: allow LM only on the 10.0.1.0/24 network (where 10.0.1.5 is this node's IP on that network)
Add-VMMigrationNetwork 10.0.1.5
Set-VMHost -UseAnyNetworkForMigration $false
```

The Add-VMMigrationNetwork cmdlet adds the network associated with the given IP to the allowed list for migrations. The second cmdlet ensures only those designated networks are used.
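Since these migration-network settings are per-host, you can push them to every node in one pass over the HTTPS remoting configured earlier. A sketch, where the node names, credential, and per-node LM addresses are example values:

```powershell
# Restrict live migration to the dedicated network on every node
$cred  = Get-Credential
$lmIPs = @{ "Node1" = "10.0.1.5"; "Node2" = "10.0.1.6" }
foreach ($node in $lmIPs.Keys) {
    Invoke-Command -ComputerName $node -UseSSL -Credential $cred -ScriptBlock {
        param($ip)
        Add-VMMigrationNetwork $ip               # allow this network for migrations
        Set-VMHost -UseAnyNetworkForMigration $false
    } -ArgumentList $lmIPs[$node]
}
```

Running it from one management session keeps all nodes consistent, which matters because mismatched migration networks are a common cause of failed live migrations.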
Alternatively, if you have the network name or interface name, you can use the Hyper-V Manager UI: under each host's Hyper-V Settings > Live Migrations > Advanced Features, select Use these IP addresses for Live Migration and add the IP of the LM network interface. In a cluster, these settings are per-host, so it's a good idea to configure them identically on all nodes. Verify the network selection by running:

```powershell
Get-VMHost | Select-Object -ExpandProperty MigrationNetworks
```

It should list the subnet or network you allowed, and UseAnyNetworkForMigration should be False.

Windows can send VM memory over TCP, compress it, or use SMB Direct (if RDMA is available) for live migration. By default in newer Windows versions, compression is used, as it offers a balance of speed without special hardware. If you have a very fast dedicated network (10 Gbps+ or RDMA), you might choose SMB to leverage SMB Multichannel/RDMA for the highest throughput. To set this:

```powershell
# Options: TCPIP, Compression, SMB
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
```

(Do this on each node; "Compression" is usually the default on 2022/2025 Hyper-V.) If you select SMB, ensure your cluster network is configured to allow SMB traffic and consider enabling SMB encryption if security is a concern (SMB encryption will encrypt the live migration data stream). Note that if you enable SMB encryption or cluster-level encryption, it could disable RDMA on that traffic, so only enable it if needed, or rely on network isolation as the primary protection.

Depending on your hardware, you may allow multiple VMs to migrate at once. The default is usually 2 simultaneous live migrations. You can increase this if you have capacity:

```powershell
Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 2
```

Adjust the numbers as appropriate (and note that the cluster-level property (Get-Cluster).MaximumParallelMigrations might override the host setting in a cluster).
This setting can also be found in the Hyper-V Settings UI under Live Migrations. With these configured, live migration is enabled.

Test a live migration: create a test VM (or pick an existing one) and attempt to move it from one node to another using Failover Cluster Manager or PowerShell. In Failover Cluster Manager, under Roles, right-click a virtual machine, choose Live Migrate > Select Node… and pick another node. The VM should migrate with zero downtime. If it fails, check for error messages regarding authentication. Ensure you initiated the move from a node where you're an admin (or via cluster manager connected to the cluster with appropriate credentials). The cluster will handle the mutual auth using the certificates (this is transparent; behind the scenes, the nodes use the self-created PKU2U cert or our installed certs to establish a secure connection for the VM memory transfer). Alternatively, use PowerShell:

```powershell
Move-ClusterVirtualMachineRole -Name "<VM resource name>" -Node <TargetNode>
```

This cmdlet triggers a cluster-coordinated live migration (the cluster's Move operation will use the appropriate auth). If the migration succeeds, congratulations – you have a fully functional Hyper-V cluster without AD!

Security Best Practices Recap and Additional Hardening

Additional best practices for securing a workgroup Hyper-V cluster include:

Certificate Security: The private keys of your node certificates are powerful – protect them. They are stored in the machine store (and likely marked non-exportable). Only admins can access them; ensure no unauthorized users are in the local Administrators group. Plan a process for certificate renewal before expiration. If using an enterprise CA, you might issue certificates with a template that allows auto-renewal via scripts, or at least track their expiry so you can re-issue and install new certs on each node in time.
The Failover Cluster service auto-generates its own certificates (for CLIUSR/PKU2U) and auto-renews them, but since we provided our own, we must manage those. Stagger renewals to avoid all nodes swapping at once (the cluster should still trust old vs. new if the CA is the same). It may be wise to overlap: install the new certs on all nodes and only then remove the old ones, so that at no point is a node presenting a cert the others don't accept (if you change CA or template).

Trusted Root and Revocation: All nodes trust the CA – maintain the security of that CA. Do not include unnecessary trust (e.g., avoid having nodes trust public CAs that they don't need). If possible, use an internal CA that is only used for these infrastructure certs. Keep CRLs (Certificate Revocation Lists) accessible if your cluster nodes need to check revocation of each other's certs (though cluster auth might not strictly require online revocation checking if the certificates are directly trusted). It's another reason to have a reasonably long-lived internal CA or offline root.

Disable NTLM: Since clustering no longer needs NTLM as of Windows Server 2019+, you can consider disabling NTLM fallback on these servers entirely for added security (via Group Policy "Network Security: Restrict NTLM: Deny on this server", etc.). However, be cautious: some processes (including cluster formation in older versions, or other services) might break. In our configuration, cluster communications should use Kerberos or certs. If these servers have no need for NTLM (no legacy apps), disabling it eliminates a whole class of attacks. Monitor event logs (Security log events for NTLM usage) if you attempt this. Discussion in the Microsoft Tech Community indicates that by WS2022 the cluster should function with NTLM disabled, though one user observed issues when the CLIUSR password rotated while NTLM was blocked. WS2025 should further reduce any NTLM dependency.
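To support the certificate renewal planning described above, you can periodically report cert expiry across the nodes. A sketch, where the node names and credential are examples; run it from any management host that trusts the CA:

```powershell
# Report Server Authentication cert expiry on each node
$cred = Get-Credential
Invoke-Command -ComputerName "Node1","Node2" -UseSSL -Credential $cred -ScriptBlock {
    Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.EnhancedKeyUsageList -match "Server Authentication" } |
        Select-Object Subject, NotAfter
}
```

The PSComputerName property that Invoke-Command stamps on each result tells you which node a given cert came from, so a scheduled run of this makes expiry tracking trivial.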
PKU2U policy: The cluster uses the PKU2U security provider for peer authentication with certificates. There is a local security policy "Network security: Allow PKU2U authentication requests to this computer to use online identities"; this must be enabled (which it is by default) for clustering to function properly. Some security guides recommend disabling PKU2U; do not disable it on cluster nodes (or, if your organization's baseline GPO disables it, create an exception for these servers). Disabling PKU2U will break the certificate-based node authentication and cause cluster communication failures.

Firewall: We opened WinRM over 5986. Ensure Windows Firewall has the Windows Remote Management (HTTPS-In) rule enabled. The Failover Clustering feature should have added rules for cluster heartbeats (UDP 3343, etc.) and SMB (445) if needed. Double-check that on each node the Failover Cluster group of firewall rules is enabled for the relevant profiles (if your network is Public, you might need to enable the rules for the Public profile manually, or set the network as Private). Also, for live migration, if using SMB transport, enable the SMB-In rules. If you enabled SMB encryption, it uses the same port 445 but encrypts payloads.

Secure Live Migration Network: Ideally, the network carrying live migration is isolated (not routed outside the cluster environment). If you want belt-and-suspenders security, you could implement IPsec encryption on live migration traffic, for example requiring IPsec (with certificates) between the cluster nodes on the LM subnet. However, this can be complex and might conflict with SMB Direct/RDMA. A simpler approach: since certificate mutual auth already prevents unauthorized node communication, focus on isolating that traffic; and if someone could still tap it, you can optionally turn on SMB encryption for LM (when using SMB transport), which will encrypt the VM memory stream.
At minimum, treat the LM network as sensitive, as it carries VM memory contents in clear text if not otherwise encrypted.

Secure WinRM/management access: We configured WinRM for HTTPS. Make sure to limit who can log in via WinRM. By default, members of the Administrators group have access. Do not add unnecessary users to Administrators. You can also use Local Group Policy to restrict the WinRM service to only allow certain users or certificate mappings. Since this is a workgroup, there's no central AD group; you might create a local group for remote management and configure WSMan to allow members of that group (and only put specific admin accounts in it). Also consider PowerShell Just Enough Administration (JEA) if you want to delegate specific tasks without full admin rights, though that's advanced.

Hyper-V host security: Apply standard Hyper-V best practices: enable Secure Boot for Gen2 VMs, keep the host OS minimal (consider Windows Server Core for a smaller attack surface, if feasible), and ensure only trusted administrators can create or manage VMs. Since this cluster is not in a domain, you won't have AD group-based access control; at minimum, give each node a unique, strong local administrator password (an AD-based solution such as LAPS isn't available in a workgroup).

Monitor cluster events: Monitor the System event log for any cluster-related errors (clustering will log events if authentication fails or if there are connectivity issues). Also monitor the FailoverClustering event log channel. Any errors about "unable to authenticate" or "No logon servers", etc., would indicate certificate or connectivity problems.

Test failover and failback: After configuration, test that VMs fail over properly. Shut down one node and ensure its VMs move to another node automatically. When the node comes back, you can live migrate them back. This will give confidence that the cluster's certificate-based auth holds up under real failover conditions.
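The failover rehearsal above can also be done non-destructively by draining a node instead of shutting it down. A sketch, where the node name is an example:

```powershell
# Drain roles from a node to rehearse failover, then bring it back
Suspend-ClusterNode -Name Node2 -Drain -Wait     # live-migrates VMs off Node2
Get-ClusterNode | Select-Object Name, State      # Node2 should show Paused
Resume-ClusterNode -Name Node2 -Failback Immediate
```

Draining exercises the same certificate-based live migration path as a real failover, without abruptly cutting off a node.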
Consider Management Tools: Tools like Windows Admin Center (WAC) can manage Hyper-V clusters. WAC can be configured to use the certificate for connecting to the nodes (it will prompt to trust the certificate if self-signed). Using WAC or Failover Cluster Manager with our setup might require launching the console from a machine that trusts the cluster's cert and using the cluster DNS name. Always ensure management traffic is also encrypted (WAC uses HTTPS and our WinRM is HTTPS, so it is).

Using OSConfig to manage Windows Server 2025 security baselines
OSConfig is a security configuration and compliance management tool introduced as a PowerShell module for use with Windows Server 2025. It enables you to enforce security baselines, automate compliance, and prevent configuration drift on Windows Server 2025 computers. OSConfig has the following requirements:

• Windows Server 2025 (OSConfig is not supported on earlier versions)
• PowerShell version 5.1 or higher
• Administrator privileges

OSConfig is available as a module from the PowerShell Gallery. You install it using the following command:

```powershell
Install-Module -Name Microsoft.OSConfig -Scope AllUsers -Repository PSGallery -Force
```

If prompted to install or update the NuGet provider, type Y and press Enter. You can verify that the module is installed with:

```powershell
Get-Module -ListAvailable -Name Microsoft.OSConfig
```

You can ensure that you have an up-to-date version of the module and the baselines by running:

```powershell
Update-Module -Name Microsoft.OSConfig
```

To check which OSConfig cmdlets are available, run:

```powershell
Get-Command -Module Microsoft.OSConfig
```

Applying Security Baselines

OSConfig includes predefined security baselines tailored for different server roles: Domain Controller, Member Server, and Workgroup Member. These baselines enforce over 300 security settings, such as TLS 1.2+, SMB 3.0+, credential protections, and more.
| Server Role | Command |
| --- | --- |
| Domain Controller | `Set-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/DomainController -Default` |
| Member Server | `Set-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/MemberServer -Default` |
| Workgroup Member | `Set-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/WorkgroupMember -Default` |
| Secured Core | `Set-OSConfigDesiredConfiguration -Scenario SecuredCore -Default` |
| Defender Antivirus | `Set-OSConfigDesiredConfiguration -Scenario Defender/Antivirus -Default` |

To view compliance from a PowerShell session, run the following command, specifying the appropriate baseline:

```powershell
Get-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/MemberServer |
    ft Name,
       @{ Name = "Status"; Expression = {$_.Compliance.Status} },
       @{ Name = "Reason"; Expression = {$_.Compliance.Reason} } -AutoSize -Wrap
```

While this PowerShell output gets the job done, you might find it easier to parse the report using Windows Admin Center. You can access the security baseline compliance report by connecting to the server you've configured using OSConfig and selecting the Security Baseline tab of the Security blade.

Another feature of OSConfig is drift control. It helps ensure that the system starts in, and remains in, a known good security state. When you turn it on, OSConfig automatically corrects any system changes that deviate from the desired state. OSConfig makes the correction through a refresh task. This task runs every 4 hours by default, which you can verify with the Get-OSConfigDriftControl cmdlet. You can change how often drift control runs using the Set-OSConfigDriftControl cmdlet. For example, to set it to 45 minutes run:

```powershell
Set-OSConfigDriftControl -RefreshPeriod 45
```

Rather than just using the default included baselines, you can also customize baselines to suit your organizational needs.
That's more detail than I want to cover here, but if you want to know more, check out the information available in the GitHub repo associated with OSConfig. Find out more about OSConfig at the following links:

https://learn.microsoft.com/en-us/windows-server/security/osconfig/osconfig-overview
https://learn.microsoft.com/en-us/windows-server/security/osconfig/osconfig-how-to-configure-security-baselines

AzUpdate: Sysinternal Updates, MS Certs renewal, App service on Kubernetes on Azure Arc and more
A plethora of announcements were released at Microsoft Build 2021, as expected. Here is what the team will be reporting on this week: certain certifications will require yearly renewal, Sysinternals tools updates announced, App Service Managed Certificates now generally available, running App Service on Kubernetes or anywhere with Azure Arc, and of course the Microsoft Learn module of the week.

Azure File Sync with ARC... Better together.
Hello Folks!

Managing file servers across on-premises datacenters and cloud environments can be challenging for IT professionals. Azure File Sync (AFS) has been a game-changer by centralizing file shares in Azure while keeping your on-premises Windows servers in play. With AFS, a lightweight agent on a Windows file server keeps its files synced to an Azure file share, effectively turning the server into a cache for the cloud copy. This enables classic file server performance and compatibility, cloud tiering of cold data to save local storage costs, and capabilities like multi-site file access, backups, and disaster recovery using Azure's infrastructure. Now, with the introduction of Azure Arc integration for Azure File Sync, it gets even better. Azure Arc, which allows you to project on-prem and multi-cloud servers into Azure for unified management, now offers an Azure File Sync agent extension that dramatically simplifies deployment and management of AFS on your hybrid servers. In this post, I'll explain how this new integration works and how you can leverage it to streamline hybrid file server management, enable cloud tiering, and improve performance and cost efficiency. You can see the E2E 10-Minute Drill - Azure File Sync with ARC, better together episode on YouTube below.

Azure File Sync + Azure Arc: Better Together

Azure File Sync has already enabled a hybrid cloud file system for many organizations. You install the AFS agent on a Windows Server (2016 or later) and register it with an Azure Storage Sync Service. From that point, the server's designated folders continuously sync to an Azure file share. AFS's hallmark feature is cloud tiering: older, infrequently used files can be transparently offloaded to Azure storage, while your active files stay on the local server cache. Users and applications continue to see all files in their usual paths; if someone opens a file that's tiered, Azure File Sync pulls it down on demand.
This means IT pros can drastically reduce expensive on-premises storage usage without limiting users' access to files. You also get multi-site synchronization (multiple servers in different locations can sync to the same Azure share), which is great for branch offices sharing data, and cloud backup/DR by virtue of having the data in Azure. In short, Azure File Sync transforms your traditional file server into a cloud-connected cache that combines the performance of local storage with the scalability and durability of Azure.

Azure Arc comes into play to solve the management side of hybrid IT. Arc lets you project non-Azure machines (whether on-prem or even in other clouds) into Azure and manage them alongside Azure VMs. An Arc-enabled server appears in the Azure portal and can have Extensions installed, which are components or agents that Azure can remotely deploy to the machine. Prior to now, installing or updating the Azure File Sync agent on a bunch of file servers meant handling each machine individually (via Remote Desktop, scripting, or System Center). This is where the Azure File Sync Agent Extension for Windows changes the game.

Using the new Arc extension, deploying Azure File Sync is as easy as a few clicks. In the Azure Portal, if your Windows server is Arc-connected (i.e. the Azure Arc agent is installed and the server is registered in Azure), you can navigate to that server resource and simply Add the "Azure File Sync Agent for Windows" extension. The extension will automatically download and install the latest Azure File Sync agent (MSI) on the server. In other words, Azure Arc acts like a central deployment tool: you no longer need to manually log on or run separate install scripts on each server to set up or update AFS. If you have 10, 50, or 100 Arc-connected file servers, you can push Azure File Sync to all of them in a standardized way from Azure – a huge time saver for large environments.
The extension also supports configuration options (like proxy settings or automatic update preferences) that you can set during deployment, ensuring the agent is installed with the right settings for your environment.

Note: The Azure File Sync Arc extension is currently Windows-only. Azure Arc supports Linux servers too, but the AFS agent (and thus this extension) works only on Windows Server 2016 or newer. So, you'll need a Windows file server to take advantage of this feature (which is usually the case, since AFS relies on NTFS/Windows currently).

Once the extension installs the agent, the remaining steps to fully enable sync are the same as a traditional Azure File Sync deployment: you register the server with your Storage Sync Service (if not done automatically) and then create a sync group linking a local folder (server endpoint) to an Azure file share (cloud endpoint). This can be done through the Azure portal, PowerShell, or CLI. The key point is that Azure Arc now handles the heavy lifting of agent deployment, and in the future, we may see even tighter integration where more of the configuration can be done centrally. For now, IT pros get a much simpler installation process – and once configured, all the hybrid benefits of Azure File Sync are in effect for your Arc-managed servers.

Key Benefits for IT Pros: Azure File Sync + Azure Arc

Centralized Management
Azure Arc provides a single control plane in Azure to manage file services across multiple servers and locations. You can deploy updates or new agents at scale and monitor status from the cloud—reducing overhead and ensuring consistency.

Simplified Deployment
No manual installs. Azure Arc automates Azure File Sync setup by fetching and installing the agent remotely. Ideal for distributed environments, and easily integrated with automation tools like Azure CLI or PowerShell.
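For scripted rollouts, the extension can be pushed with the Az.ConnectedMachine PowerShell module. Here's a minimal sketch; the resource group, machine name, location, and especially the Publisher/ExtensionType strings are assumptions for illustration — verify the exact values against the extension entry shown in the Azure portal for your environment:

```powershell
# Deploy the Azure File Sync agent extension to an Arc-enabled Windows server.
# NOTE: the resource names and the Publisher/ExtensionType values below are
# illustrative placeholders - confirm the exact strings in the Azure portal
# or the official documentation before running.
New-AzConnectedMachineExtension `
    -ResourceGroupName "rg-fileservers" `
    -MachineName "FS01" `
    -Location "eastus" `
    -Name "AzureFileSync" `
    -Publisher "Microsoft.StorageSync" `
    -ExtensionType "AzureFileSync"
```

Looping the same command over your Arc-connected servers gives you the standardized, at-scale rollout described above.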
Cost Optimization with Cloud Tiering
Offload rarely accessed files to Azure storage to free local disk space and extend hardware life. Cache only hot data (10–20%) locally while leveraging Azure's storage tiers for lower TCO.

Improved Performance
Cloud tiering keeps frequently used files local for LAN-speed access, reducing WAN latency. Active data stays on-site; inactive data moves to the cloud—delivering a smoother experience for distributed teams.

Built-In Backup & DR
Azure Files offers redundancy and point-in-time recovery via Azure Backup. If a server fails, you can quickly restore from Azure. Multi-site sync ensures continued access, supporting business continuity and cloud migration strategies.

Getting Started with Azure File Sync via Arc

Prepare Azure Arc and Servers
Connect Windows file servers (Windows Server 2016+) to Azure Arc by installing the Connected Machine agent and onboarding them. Refer to Azure Arc documentation for setup.

Deploy Azure File Sync Agent Extension
Install the Azure File Sync agent extension on Arc-enabled servers using the Azure portal, PowerShell, or CLI. Verify the Azure Storage Sync Agent is installed on the server. See Microsoft Learn for detailed steps.

Complete Azure File Sync Setup
In the Azure portal, create or open a Storage Sync Service. Register the server and create a Sync Group to link a local folder (Server Endpoint) with an Azure File Share (Cloud Endpoint). Configure cloud tiering and free space settings as needed.

Test and Monitor
Allow time for initial sync. Test file access (including tiered files) and monitor sync status in the Azure portal. Use Azure Monitor for health alerts.

Explore Advanced Features
Enable options like cloud change enumeration, NTFS ACL sync, and Azure Backup for file shares to enhance functionality.
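The sync group setup can also be done from PowerShell with the Az.StorageSync module. A hedged sketch, assuming the server is already registered; the resource group, service, storage account, share name, server name, and D:\Data path are all placeholders:

```powershell
# Create a sync group, then link a cloud endpoint (Azure file share) and a
# server endpoint (local folder) with cloud tiering enabled.
# All names below are illustrative placeholders.
$rg  = "rg-fileservers"
$sss = "mystoragesyncservice"

New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $sss -Name "SyncGroup01"

# Cloud endpoint: points at an existing Azure file share
New-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName "SyncGroup01" -Name "CloudEndpoint01" `
    -StorageAccountResourceId "/subscriptions/<sub-id>/resourceGroups/$rg/providers/Microsoft.Storage/storageAccounts/mystorageacct" `
    -AzureFileShareName "fileshare01"

# Server endpoint: the registered server's local path, with cloud tiering on
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $sss |
    Where-Object { $_.FriendlyName -eq "FS01" }

New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName "SyncGroup01" -Name "FS01-Data" `
    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\Data" `
    -CloudTiering -VolumeFreeSpacePercent 20
```

The -VolumeFreeSpacePercent value here reflects the 10–20% hot-data guidance mentioned earlier; tune it to your workload.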
Resources and Next Steps

For more info and step-by-step guidance, check out these resources:

Microsoft Learn – Azure File Sync Agent Extension on Azure Arc: Official documentation on installing and managing the AFS agent via Azure Arc.
Azure File Sync Documentation: Comprehensive docs for Azure File Sync, including deployment guides, best practices, and troubleshooting.
Azure Arc Documentation: Learn how to connect servers to Azure Arc and manage extensions. This is useful if you're new to Arc or need to meet prerequisites for using the AFS extension.

You, as an IT Pro, can provide your organization with the benefits of cloud storage – scalability, reliability, pay-as-you-go economics – while retaining the performance and control of on-premises file servers. All of this can be achieved with minimal overhead, thanks to the new Arc-delivered agent deployment and the powerful features of Azure File Sync. Check it out if you have not done so before. I highly recommend exploring this integration to modernize your file services.

Cheers!
Pierre Roman

Requesting and Installing an SSL Certificate for Internet Information Server (IIS)
Generate a Certificate Signing Request (CSR)

Generate the request using the Certificates snap-in in Microsoft Management Console (MMC).

Step 1: Open the Certificates Snap-In
1. Press Windows + R, type mmc, and press Enter.
2. Go to File > Add/Remove Snap-in.
3. Select Certificates and click Add.
4. Choose Computer account, then click Next.
5. Select Local computer and click Finish.
6. Click OK to close the Add/Remove window.

Step 2: Start the CSR Wizard
1. In the left pane, expand Certificates (Local Computer).
2. Right-click Personal and select: All Tasks → Advanced Operations → Create Custom Request.

Step 3: Configure the Request
1. On the Certificate Enrollment page, click Next.
2. Select Proceed without enrollment policy and click Next.
3. On the "Certificate Information" page, expand Details and click Properties.
4. On the General tab, enter a friendly name, e.g., WS25-IIS Certificate.
5. On the Subject tab:
   - Under Subject name, choose Common Name. Enter the fully qualified domain name (FQDN), e.g., ws25-iis.windowserver.info, and click Add.
   - Under Alternative name, choose DNS. Enter the same FQDN and click Add.
6. On the Extensions tab:
   - Under Key Usage, ensure Digital Signature and Key Encipherment are selected.
   - Under Extended Key Usage, add Server Authentication.
7. On the Private Key tab:
   - Under Cryptographic Provider, select RSA, Microsoft Software Key Storage Provider.
   - Set Key size to 2048 bits.
   - Check Make private key exportable and Allow private key to be archived.
8. Click Apply, then OK, and then Next.

Step 4: Save the Request
1. Choose a location to save the request file (e.g., C:\Temp).
2. Ensure the format is set to Base 64.
3. Provide a filename such as SSLRequest.req.
4. Click Finish.

You can open the file in Notepad to verify the Base64-encoded request text.

Submit the CSR to a Certification Authority

You can use an internal Windows CA or a public CA. The example below assumes a web enrollment interface.

Step 1: Open the CA Web Enrollment Page
Navigate to your CA's enrollment site.
If the server does not trust the CA, you may receive a warning. You can proceed past it, or install the CA certificate on the server so the site is trusted.

Step 2: Submit an Advanced Certificate Request
1. Select Request a certificate.
2. Choose advanced certificate request.
3. Open the CSR in Notepad, copy the Base64 text, and paste it into the request form.
4. Click Submit.

Step 3: Approve the Request (if required)
If your CA requires approval, sign in to the CA server and approve the pending request.

Step 4: Download the Issued Certificate
1. Return to the CA web enrollment page.
2. View the status of pending requests.
3. Locate your request and select it.
4. Choose the Base 64 encoded certificate format.
5. Download the certificate.
6. Save it to a known location and rename it meaningfully (e.g., WS25-IIS-Cert.cer).

Install the SSL Certificate
1. Double-click the .cer file to open it.
2. Click Install Certificate.
3. Choose Local Machine as the store location.
4. When prompted for the store, select Place all certificates in the following store and choose Personal.
5. Click Next, then Finish.
6. Confirm the success message by clicking OK.

The certificate is now imported and available for use by IIS.

Bind the Certificate in IIS

Step 1: Open IIS Manager
1. Open Server Manager or search for IIS Manager.
2. In the left pane, expand the server and select your website (e.g., Default Web Site).

Step 2: Add an HTTPS Binding
1. In the Actions pane, click Bindings.
2. In the Site Bindings window, click Add.
3. Select:
   - Type: https
   - Hostname: the FQDN used in the certificate (e.g., ws25-iis.windowserver.info)
   - SSL Certificate: the certificate you installed (e.g., WS25-IIS Certificate)
4. Click OK, then Close.

Test the HTTPS Connection
1. Open Microsoft Edge (or your preferred browser).
2. Browse to the site using https:// and the FQDN, e.g., https://ws25-iis.windowserver.info.
3. Confirm you see the IIS default page (or your site's content).
4. Click the padlock in the address bar to verify the certificate is valid, and check the certificate details if desired.
If the page loads securely without warnings, the certificate is installed and bound correctly.
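You can also check the binding from PowerShell. A small sketch that opens a TLS connection and prints the certificate the site serves — the FQDN is the example name used in this walkthrough; run it from any domain-joined machine that trusts your CA:

```powershell
# Open a TLS connection to the site and display the certificate it serves.
$fqdn = "ws25-iis.windowserver.info"   # example name from this walkthrough
$tcp  = New-Object System.Net.Sockets.TcpClient($fqdn, 443)
$ssl  = New-Object System.Net.Security.SslStream($tcp.GetStream())
$ssl.AuthenticateAsClient($fqdn)       # throws if the chain is not trusted
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
"Subject : $($cert.Subject)"
"Issuer  : $($cert.Issuer)"
"Expires : $($cert.NotAfter)"
$ssl.Dispose(); $tcp.Close()
```

If AuthenticateAsClient completes without an exception, the chain is trusted end to end from that client.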
Strengthening Azure File Sync security with Managed Identities
Hello Folks,

As IT pros, we're always looking for ways to reduce complexity and improve security in our infrastructure. One area that's often overlooked is how our services authenticate with each other, especially when it comes to Azure File Sync. In this post, I'll walk you through how Managed Identities can simplify and secure your Azure File Sync deployments, based on my recent conversation with Grace Kim, Program Manager on the Azure Files and File Sync team.

Why Managed Identities Matter

Traditionally, Azure File Sync servers authenticate to the Storage Sync service using server certificates or shared access keys. While functional, these methods introduce operational overhead and potential security risks. Certificates expire, keys get misplaced, and rotating credentials can be a pain. Managed Identities solve this by allowing your server to authenticate securely without storing or managing credentials. Once enabled, the server uses its identity to access Azure resources, and permissions are managed through Azure Role-Based Access Control (RBAC).

Using Azure File Sync with Managed Identities provides significant security enhancements and simpler credential management for enterprises. Instead of relying on storage account keys or SAS tokens, Azure File Sync authenticates using a system-assigned Managed Identity from Microsoft Entra ID (Azure AD). This keyless approach greatly improves security by removing long-lived secrets and reducing the attack surface. Access can be controlled via fine-grained Azure role-based access control (RBAC) rather than a broadly privileged key, enforcing least-privileged permissions on file shares. I believe that Azure AD RBAC is far more secure than managing storage account keys or SAS credentials. The result is a secure-by-default setup that minimizes the risk of credential leaks while streamlining authentication management.
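As a concrete sketch, granting a server's system-assigned identity an RBAC data role on the storage account might look like this with Az PowerShell. The role name, resource names, and scope are assumptions for illustration — use the role your deployment guidance specifies:

```powershell
# Assign an RBAC data role on the storage account to a VM's system-assigned
# managed identity. The role name, names, and IDs below are placeholders.
$principalId = (Get-AzVM -ResourceGroupName "rg-files" -Name "FS01").Identity.PrincipalId

New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Storage File Data Privileged Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/rg-files/providers/Microsoft.Storage/storageAccounts/mystorageacct"
```

Scoping the assignment to a single storage account keeps the permission surface as small as possible.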
Managed Identities also improve integration with other Azure services and support enterprise-scale deployments. Because authentication is unified under Azure AD, Azure File Sync's components (the Storage Sync Service and each registered server) seamlessly obtain tokens to access Azure Files and the sync service without any embedded secrets. This design fits into common Azure security frameworks and encourages consistent identity and access policies across services. In practice, the File Sync managed identity can be granted appropriate Azure roles to interact with related services (for example, allowing Azure Backup or Azure Monitor to access file share data) without sharing separate credentials.

At scale, organizations benefit from easier administration. New servers can be onboarded by simply enabling a managed identity (on an Azure VM or an Azure Arc–connected server) and assigning the proper role, avoiding complex key management for each endpoint. Azure's logging and monitoring tools also recognize these identities, so actions taken by Azure File Sync are transparently auditable in Azure AD activity logs and storage access logs. Given these advantages, new Azure File Sync deployments now enable Managed Identity by default, underscoring a shift toward identity-based security as the standard practice for enterprise file synchronization. This approach ensures that large, distributed file sync environments remain secure, manageable, and well-integrated with the rest of the Azure ecosystem.

How It Works

When you enable Managed Identity on your Azure VM or Arc-enabled server, Azure automatically provisions an identity for that server. This identity is then used by the Storage Sync service to authenticate and communicate securely. Here's what happens under the hood:

- The server receives a system-assigned Managed Identity.
- Azure File Sync uses this identity to access the storage account.
- No certificates or access keys are required.
- Permissions are controlled via RBAC, allowing fine-grained access control.

Enabling Managed Identity: Two Scenarios

Azure VM
If your server is an Azure VM:
1. Go to the VM settings in the Azure portal.
2. Enable System Assigned Managed Identity.
3. Install Azure File Sync.
4. Register the server with the Storage Sync service.
5. Enable Managed Identity in the Storage Sync blade.

Once enabled, Azure handles the identity provisioning and permissions setup in the background.

Non-Azure VM (Arc-enabled)
If your server is on-prem or in another cloud:
1. First, make the server Arc-enabled.
2. Enable System Assigned Managed Identity via Azure Arc.
3. Follow the same steps as above to install and register Azure File Sync.

This approach brings parity to hybrid environments, allowing you to use Managed Identities even outside Azure.

Next Steps

If you're managing Azure File Sync in your environment, I highly recommend transitioning to Managed Identities. It's a cleaner, more secure approach that aligns with modern identity practices.

✅ Resources
📚 https://learn.microsoft.com/azure/storage/files/storage-sync-files-planning
🔐 https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview
⚙️ https://learn.microsoft.com/azure/azure-arc/servers/overview
🎯 https://learn.microsoft.com/azure/role-based-access-control/overview

🛠️ Action Items
1. Audit your current Azure File Sync deployments.
2. Identify servers using certificates or access keys.
3. Enable Managed Identity on eligible servers.
4. Use RBAC to assign appropriate permissions.

Let me know how your transition to Managed Identities goes. If you run into any snags or have questions, drop a comment.

Cheers!
Pierre

Installing a Standalone Root Certificate Authority & Web Enrollment on Windows Server 2025
In this post, learn how to deploy a standalone root Certificate Authority (CA) on a Windows Server 2025 machine that is not joined to Active Directory, and how to configure the web enrollment interface so clients can request certificates using a browser.

A standalone root CA is useful when:
- You only need certificates trusted by a limited set of machines.
- You don't want to obtain certificates from a commercial provider.
- You're preparing an offline root CA scenario (covered separately).

Install Active Directory Certificate Services (Standalone Root CA)
1. Open Server Manager.
2. Select Manage, then Add Roles and Features.
3. Choose Role-based or feature-based installation.
4. Select the local server.
5. Check Active Directory Certificate Services.
6. Click Add Features when prompted.
7. Click Next through the wizard until the Role Services page.
8. Select Certification Authority only.
9. Click Install and wait for completion.

Configure the Certification Authority
1. In Server Manager, click the notification flag.
2. Select Configure Active Directory Certificate Services.
3. Enter credentials.
4. On Role Services, ensure Certification Authority is selected.
5. For Setup Type, select Standalone CA.
6. Choose Root CA on the CA Type page.
7. Select Create a new private key.
8. Increase the key length to 4096 and accept the other defaults.
9. Accept the default CA name (or customize if desired).
10. Keep the default certificate validity period (5 years).
11. Accept the default database locations.
12. Confirm the configuration and allow it to complete.
13. Open the Certification Authority console from Tools to verify the CA was created.

Create an SSL Certificate for Web Enrollment

The CA certificate itself doesn't include subject alternative names (SANs), so you need a separate SSL certificate for the website; otherwise web enrollment will throw errors.

1. Open PowerShell and switch to the root directory.
2. Create and enter a temp folder.
3.
Use Notepad to create servercert.inf with details such as:

[Version]
Signature="$Windows NT$"

[NewRequest]
Subject="CN=ws25-sa-ca"
KeyLength=2048
KeySpec=1
KeyUsage=0xA0
MachineKeySet=TRUE
ProviderName="Microsoft RSA SChannel Cryptographic Provider"
RequestType=PKCS10
FriendlyName="IIS Server Cert"

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication

[Extensions]
2.5.29.17 = "{text}"
_continue_ = "dns=ws25-sa-ca"
; Additional DNS names can be appended by joining entries with &, e.g.:
; _continue_ = "dns=ws25-sa-ca.contoso.com&"

4. Save the file.
5. Create the request from the INF file:
certreq -new C:\temp\servercert.inf C:\temp\servercert.req
6. Submit the request:
certreq -submit -attrib "CertificateTemplate:WebServer" C:\temp\servercert.req C:\temp\servercert.cer
Select the standalone CA when prompted. The request will show as Pending.
7. Open the Certification Authority console.
8. Under Pending Requests, right-click the request and select All Tasks → Issue.
9. Retrieve the issued certificate using the request ID:
certreq -retrieve 2 C:\temp\servercert_issued.cer
10. Install the issued certificate with certreq -accept (or by double-clicking it):
certreq -accept C:\temp\servercert_issued.cer

Install the Web Enrollment Feature
1. Open Add Roles and Features again in Server Manager.
2. Click Next until the Server Roles page.
3. Expand Active Directory Certificate Services.
4. Select Certification Authority Web Enrollment.
5. Click Next and proceed. This also installs IIS automatically.
6. When finished, click Close.
7. Run Configure Active Directory Certificate Services again.
8. Select Certification Authority Web Enrollment and click Configure.

Bind the SSL Certificate in IIS
1. Open IIS Manager.
2. Select Default Web Site.
3. In the Actions pane, choose Bindings.
4. Click Add.
5. Set Type to https.
6. Enter the server's hostname.
7.
Select the SSL certificate you issued earlier (e.g., IIS Server Cert, the friendly name from the INF file).
8. Click OK and close IIS Manager.

Access the Web Enrollment Page
1. Open a browser.
2. Navigate to: https://<your-server-name>/certsrv
   Example: https://WS25-SA-CA/certsrv
3. The Certificate Enrollment web interface should now load securely.

Hyper-V Virtual TPMs, Certificates, VM Export and Migration
Virtual Trusted Platform Modules (vTPMs) in Hyper-V allow you to run guest operating systems, such as Windows 11 or Windows Server 2025, with security features enabled. One of the challenges of vTPMs is that they rely on certificates on the local Hyper-V server. Great if you're only running the VM with the vTPM on that server, but a possible cause of issues if you want to move that VM to another server. In this article I'll show you how to manage the certificates that are associated with vTPMs so that you'll be able to export or move VMs that use them, such as Windows 11 VMs, to any prepared Hyper-V host you manage.

When a vTPM is enabled on a Generation 2 virtual machine, Hyper-V automatically generates a pair of self-signed certificates on the host where the VM resides. These certificates are specifically named:
- "Shielded VM Encryption Certificate (UntrustedGuardian)(ComputerName)"
- "Shielded VM Signing Certificate (UntrustedGuardian)(ComputerName)"

These certificates are stored in a dedicated local certificate store on the Hyper-V host named "Shielded VM Local Certificates". By default, these certificates are provisioned with a validity period of 10 years.

For a vTPM-enabled virtual machine to successfully live migrate and subsequently start on a new Hyper-V host, the "Shielded VM Local Certificates" (both the Encryption and Signing certificates) from the source host must be present and trusted on all potential destination Hyper-V hosts.

Exporting vTPM related certificates

You can transfer certificates from one Hyper-V host to another using the following procedure:
1. On the source Hyper-V host, open mmc.exe.
2. From the "File" menu, select "Add/Remove Snap-in..."
3. In the "Add or Remove Snap-ins" window, select "Certificates" and click "Add."
4. Choose "Computer account" and then "Local Computer".
5. Navigate through the console tree to "Certificates (Local Computer) > Personal > Shielded VM Local Certificates".
6. Select both the "Shielded VM Encryption Certificate" and the "Shielded VM Signing Certificate."
7. Right-click the selected certificates, choose "All Tasks," and then click "Export".
8. In the Certificate Export Wizard, on the "Export Private Key" page, select "Yes, export the private key". The certificates are unusable for their intended purpose without their associated private keys.
9. Select "Personal Information Exchange - PKCS #12 (.PFX)" as the export file format.
10. Select "Include all certificates in the certification path if possible".
11. Provide a strong password to protect the PFX file. This password will be required during the import process.

To perform this process using the command line, display details of the certificates in the "Shielded VM Local Certificates" store, including their serial numbers:

certutil -store "Shielded VM Local Certificates"

Use the serial numbers to export each certificate, ensuring the private key is included. Replace <Serial_Number_Encryption_Cert> and <Serial_Number_Signing_Cert> with the actual serial numbers, and "YourSecurePassword" with a strong password:

certutil -exportPFX -p "YourSecurePassword" "Shielded VM Local Certificates" <Serial_Number_Encryption_Cert> C:\Temp\VMEncryption.pfx
certutil -exportPFX -p "YourSecurePassword" "Shielded VM Local Certificates" <Serial_Number_Signing_Cert> C:\Temp\VMSigning.pfx

Importing vTPM related certificates

To import these certificates on a Hyper-V host that you want to migrate a vTPM enabled VM to, perform the following steps:
1. Transfer the exported PFX files to all Hyper-V hosts that will serve as potential live migration targets.
2. On each target host, open mmc.exe and add the "Certificates" snap-in for the "Computer account" (Local Computer).
3. Navigate to "Certificates (Local Computer) > Personal."
4. Right-click the "Personal" folder, choose "All Tasks," and then click "Import".
5. Proceed through the Certificate Import Wizard.
6. Ensure the certificates are placed in the "Shielded VM Local Certificates" store.
7. After completing the wizard, verify that both the Encryption and Signing certificates now appear in the "Shielded VM Local Certificates" store on the new host.

You can accomplish the same thing using PowerShell with the following command:

Import-PfxCertificate -FilePath "C:\Backup\CertificateName.pfx" -CertStoreLocation "Cert:\LocalMachine\Shielded VM Local Certificates" -Password (ConvertTo-SecureString -String "YourPassword" -Force -AsPlainText)

Updating vTPM related certificates

Self-signed vTPM certificates automatically expire after 10 years. Resetting the key protector for a vTPM-enabled VM in Hyper-V allows you to change or renew the underlying certificates (especially if the private key changes). Here are the requirements and considerations around this process:
- The VM must be in an off state to change security settings or reset the key protector.
- The host must have the appropriate certificates (including private keys) in the "Shielded VM Local Certificates" store. If the private key is missing, the key protector cannot be set or validated.
- Always back up the VM and existing certificates before resetting the key protector, as this process can make previously encrypted data inaccessible if not performed correctly.
- The VM must be at a supported configuration version (typically version 7.0 or higher) to support vTPM and key protector features.

To save the current key protector, retrieve it on the source Hyper-V host and save it to a file:

Get-VMKeyProtector -VMName 'VM001' | Out-File '.\VM001.kp'

To reset the key protector with a new local key protector:

Set-VMKeyProtector -VMName "<VMNAME>" -NewLocalKeyProtector

This command instructs Hyper-V to generate a new key protector using the current local certificates.
After resetting, enable vTPM if needed:

Enable-VMTPM -VMName "<VMNAME>"

It is important to note that if an incorrect Key Protector is applied to the VM, it may fail to start. In such cases, the Set-VMKeyProtector -RestoreLastKnownGoodKeyProtector cmdlet can be used to revert to the last known working Key Protector.

More information:
Set-VMKeyProtector: https://learn.microsoft.com/en-us/powershell/module/hyper-v/set-vmkeyprotector
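The MMC export steps above can also be scripted with the built-in PKI module. A minimal sketch that exports every certificate (with its private key) from the shielded VM store to PFX files — the output folder and password are placeholders:

```powershell
# Export all certificates (with private keys) from the shielded VM store.
# C:\Temp and the password are placeholders - adjust for your environment.
$password = ConvertTo-SecureString -String "YourSecurePassword" -Force -AsPlainText

Get-ChildItem "Cert:\LocalMachine\Shielded VM Local Certificates" | ForEach-Object {
    # Build a file name from the certificate's thumbprint
    $file = Join-Path "C:\Temp" ("$($_.Thumbprint).pfx")
    Export-PfxCertificate -Cert $_ -FilePath $file -Password $password
}
```

Copy the resulting PFX files to each destination host and import them with the Import-PfxCertificate command shown earlier.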