<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Failover Clustering articles</title>
    <link>https://techcommunity.microsoft.com/t5/failover-clustering/bg-p/FailoverClustering</link>
    <description>Failover Clustering articles</description>
    <pubDate>Thu, 30 Apr 2026 22:15:05 GMT</pubDate>
    <dc:creator>FailoverClustering</dc:creator>
    <dc:date>2026-04-30T22:15:05Z</dc:date>
    <item>
      <title>Rack-Local Reads in Storage Spaces Direct Campus Cluster / Rack Aware Cluster</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/rack-local-reads-in-storage-spaces-direct-campus-cluster-rack/ba-p/4505381</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Author: Parsan Saffaie&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Locality‑Aware Read Optimization for Modern Multi‑Rack and Campus Deployments&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;With Windows Server 2025, Storage Spaces Direct (S2D) expanded its resiliency capabilities with &lt;STRONG&gt;support for Campus Clusters&lt;/STRONG&gt;, allowing Failover Cluster deployments to span racks, chassis, and even sites using explicit fault‑tolerance domains. If you haven’t seen it yet, check out Rob’s announcement here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/blog/failoverclustering/announcing-support-for-s2d-campus-cluster-on-windows-server-2025/4477075" target="_blank"&gt;&lt;EM&gt;Announcing Support for S2D Campus Cluster on Windows Server 2025&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt;.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Building on that foundation, Windows Server 2025 introduces &lt;STRONG&gt;Rack&lt;/STRONG&gt;‑&lt;STRONG&gt;Local Reads&lt;/STRONG&gt;, a new optimization that allows S2D to use the cluster topology during read selection, ensuring mirrored reads are served from the closest healthy copy whenever possible.&lt;/P&gt;
&lt;H1&gt;Before Rack‑Local Reads&lt;/H1&gt;
&lt;P&gt;Before this feature, when a mirrored space was deployed in an S2D cluster, data copies were placed across fault‑tolerant domains (nodes, racks, or sites) to ensure resiliency.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;During read operations&lt;/STRONG&gt;, all healthy copies were treated as equal. When a read I/O arrived, S2D could satisfy it from &lt;EM&gt;any&lt;/EM&gt; available copy containing the requested data offset — regardless of where that copy was located in the cluster.&lt;/P&gt;
&lt;P&gt;This behavior was &lt;EM&gt;correct&lt;/EM&gt; and &lt;EM&gt;safe&lt;/EM&gt;, but it was &lt;STRONG&gt;topology‑agnostic&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Example: topology-agnostic read selection&lt;/H2&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: conceptual view of read behavior in a multi‑rack cluster, where a workload may read from a remote rack even though a local copy is available.]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Read selection flow:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;A read request arrives on &lt;STRONG&gt;Node 1&lt;/STRONG&gt; in &lt;STRONG&gt;Rack 1&lt;/STRONG&gt;, where the workload is running.&lt;/LI&gt;
&lt;LI&gt;Storage Spaces Direct identifies all healthy copies of the requested data offset.&lt;/LI&gt;
&lt;LI&gt;The local copy on &lt;STRONG&gt;Node 1 (Copy A on Drive 1)&lt;/STRONG&gt; is unavailable.&lt;/LI&gt;
&lt;LI&gt;Remaining healthy copies exist on &lt;STRONG&gt;Node 2, Node 3, and Node 4&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Because read selection is topology‑agnostic, rack locality is not considered.&lt;/LI&gt;
&lt;LI&gt;A healthy copy on &lt;STRONG&gt;Node 3 (Rack 2)&lt;/STRONG&gt; is selected.&lt;/LI&gt;
&lt;LI&gt;The read is serviced across racks, traversing the inter‑rack fabric.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Why the Old Design Was Suboptimal&lt;/H2&gt;
&lt;P&gt;In larger S2D deployments, this topology‑agnostic read selection can result in unnecessary cross‑rack or cross‑site traffic during steady‑state operations. While resiliency placement is correct, read performance becomes dependent on fabric conditions rather than locality, increasing latency variability and making the network part of the critical read path.&lt;/P&gt;
&lt;H1&gt;After Rack‑Local Reads&lt;/H1&gt;
&lt;P&gt;With the new Rack‑Local Reads, write behavior is unchanged. What changes is read selection. Instead of treating all healthy copies as equal, S2D now prefers the closest healthy copy based on topology. Reads are served from the nearest available copy in the fault domain—node, chassis, rack, or site—with remote locations used only when no closer copy exists.&lt;/P&gt;
&lt;P&gt;As a result, read traffic remains local during steady‑state operations, even in multi‑rack and campus deployments.&lt;/P&gt;
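&lt;P&gt;Conceptually, the new selection rule is "prefer the healthy copy that shares the most fault‑domain ancestry with the reading node." The following PowerShell sketch is purely illustrative (hypothetical objects and names, not the actual S2D implementation):&lt;/P&gt;
&lt;P&gt;# Illustrative sketch only - not the actual S2D read-selection code.&lt;/P&gt;
&lt;P&gt;$reader = 'Redmond/Rack1/Node1' -split '/'&lt;/P&gt;
&lt;P&gt;$copies = @(&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [pscustomobject]@{ Name = 'CopyB'; FaultDomain = 'Redmond/Rack1/Node2' }&lt;/P&gt;
&lt;P&gt;&amp;nbsp; [pscustomobject]@{ Name = 'CopyC'; FaultDomain = 'Redmond/Rack2/Node3' }&lt;/P&gt;
&lt;P&gt;)&lt;/P&gt;
&lt;P&gt;$copies | Sort-Object {&lt;/P&gt;
&lt;P&gt;&amp;nbsp; $c = $_.FaultDomain -split '/'; $n = 0&lt;/P&gt;
&lt;P&gt;&amp;nbsp; while ($n -lt $reader.Count -and $n -lt $c.Count -and $reader[$n] -eq $c[$n]) { $n++ }&lt;/P&gt;
&lt;P&gt;&amp;nbsp; -$n&amp;nbsp; # copies sharing more fault-domain levels sort first&lt;/P&gt;
&lt;P&gt;} | Select-Object -First 1&amp;nbsp; # returns CopyB (same rack as the reader)&lt;/P&gt;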
&lt;H2&gt;Example: topology-aware read selection&lt;/H2&gt;
&lt;P&gt;&lt;EM&gt;[Diagram: the same cluster topology as the previous example, but with Rack‑Local Reads enabled; read selection now accounts for rack locality when choosing among healthy copies.]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Read selection flow: &lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;A read request arrives on &lt;STRONG&gt;Node 1&lt;/STRONG&gt; in &lt;STRONG&gt;Rack 1&lt;/STRONG&gt;, where the workload is running.&lt;/LI&gt;
&lt;LI&gt;Storage Spaces Direct identifies all healthy copies of the requested data offset.&lt;/LI&gt;
&lt;LI&gt;The local copy on &lt;STRONG&gt;Node 1 (Copy A on Drive 1)&lt;/STRONG&gt; is unavailable.&lt;/LI&gt;
&lt;LI&gt;Remaining healthy copies exist on &lt;STRONG&gt;Node 2, Node 3, and Node 4&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Rack‑Local Reads evaluates the proximity of each healthy copy based on the fault domain.&lt;/LI&gt;
&lt;LI&gt;A healthy copy on &lt;STRONG&gt;Node 2 (Rack 1)&lt;/STRONG&gt; is selected as the closest available copy.&lt;/LI&gt;
&lt;LI&gt;The read is serviced within the local rack, avoiding cross‑rack traffic.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;Why the New Design Is Better&lt;/H2&gt;
&lt;P&gt;By incorporating topology awareness into the read path, Rack‑Local Reads keep steady‑state read traffic local whenever a healthy copy is available. This preserves existing resiliency guarantees while reducing cross‑rack traffic, lowering latency, and making read performance less dependent on fabric conditions.&lt;/P&gt;
&lt;H1&gt;Summary&lt;/H1&gt;
&lt;P&gt;Rack‑Local Reads separate resiliency from performance: rack‑aware mirroring determines where data is placed for protection, while Rack‑Local Reads determine which copy is used for performance.&lt;/P&gt;
&lt;P&gt;Nothing changes for writes or resiliency. Data placement remains exactly the same. What changes is read selection—S2D now prefers the closest healthy copy in the fault domain, keeping steady‑state reads local when possible.&lt;/P&gt;
&lt;P&gt;The result is lower and more predictable read latency, reduced east‑west storage traffic, and better scalability in multi‑rack and campus deployments—without sacrificing fault tolerance.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Microsoft runs on trust.&lt;/STRONG&gt; We’d like to thank our customers and MVPs for that trust, and for the feedback that directly shaped Rack‑Local Reads. Your real‑world deployments continue to push Storage Spaces Direct to scale to larger and more complex environments.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note: Posted by Rob for Parsan Saffaie.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 02:41:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/rack-local-reads-in-storage-spaces-direct-campus-cluster-rack/ba-p/4505381</guid>
      <dc:creator>Rob-Hindman</dc:creator>
      <dc:date>2026-03-25T02:41:48Z</dc:date>
    </item>
    <item>
      <title>Using Failover Clustering Cloud Witness with Managed Identity in Windows Server 2025</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/using-failover-clustering-cloud-witness-with-managed-identity-in/ba-p/4504438</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Failover Clustering&lt;/STRONG&gt; has a strong quorum model that is always used to prevent partition in space (AKA Split Brain, Network partition, Cluster partition). We require a cluster quorum resource (cluster witness) to be used on each failover cluster. Using a cluster quorum resource not only adds protection but also means that small two-node clusters can provide high availability for Hyper-V VMs, SQL Server Availability Sets, SQL Server Failover Cluster Instance, Scale-out File Server (SoFS), etc. workloads.&lt;/P&gt;
&lt;P&gt;The &lt;STRONG&gt;Cloud Witness quorum resource&lt;/STRONG&gt; was first introduced in the Failover Clustering feature in Windows Server 2016. It was a low-cost variation of the &lt;STRONG&gt;File Share Witness quorum resource&lt;/STRONG&gt; that enabled effective low-cost two-node failover clusters in scenarios where connectivity to Azure is reliable. Both the &lt;STRONG&gt;Cloud Witness quorum resource&lt;/STRONG&gt; and the &lt;STRONG&gt;File Share Witness quorum resource&lt;/STRONG&gt; store only a Paxos tag and its date-time stamp, in a file named with the GUID of the cluster; no other information is needed. While the &lt;STRONG&gt;Disk Witness quorum resource&lt;/STRONG&gt; contains a full copy of the cluster database, the &lt;STRONG&gt;File Share Witness quorum resource&lt;/STRONG&gt; and the &lt;STRONG&gt;Cloud Witness quorum resource&lt;/STRONG&gt; contain only the Paxos tag, which is used as a tiebreaker when there is a &lt;STRONG&gt;partition in space&lt;/STRONG&gt;, so that the cluster can continue to function.&lt;/P&gt;
&lt;P&gt;The Cloud Witness quorum resource is created in an Azure storage account and was originally secured using the storage account access key (StorageAccountAccessKey):&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Set-ClusterQuorum -CloudWitness -AccountName &amp;lt;StorageAccountName&amp;gt; -AccessKey &amp;lt;StorageAccountAccessKey&amp;gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Previously, the access key (StorageAccountAccessKey) for the Azure Storage Account was stored in the cluster database so that the cluster could access the storage account. For details, see &lt;A href="https://learn.microsoft.com/en-us/windows-server/failover-clustering/deploy-quorum-witness" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/windows-server/failover-clustering/deploy-quorum-witness&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;As a best practice, we now recommend that the Azure storage account be accessed using a Managed Identity&lt;/STRONG&gt; instead of an access key. Only the name of the Managed Identity is stored in the cluster database, making this practice more secure.&lt;/P&gt;
&lt;H1&gt;Steps to create a Cloud Witness quorum resource using managed identity while creating a new cluster:&lt;/H1&gt;
&lt;OL&gt;
&lt;LI&gt;Before creating the cluster, create a storage account resource. In this case, we will create a storage account called &lt;STRONG&gt;cloudwitnessdemo&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Using the Azure Portal, create VMs in Azure IaaS running Windows Server 2025, and add the Failover Clustering feature. For physical on-premises servers running Windows Server 2025, add the Failover Clustering feature.&lt;/LI&gt;
&lt;LI&gt;Install the latest updates for Windows Server 2025 from Windows Update on each server (AKA cluster node).&lt;/LI&gt;
&lt;LI&gt;Connect each server (AKA cluster node) to&amp;nbsp;&lt;STRONG&gt;Azure Arc&lt;/STRONG&gt; – this creates a Managed Identity for each server.&lt;/LI&gt;
&lt;/OL&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Using the Azure Portal Access Control pages, assign the &lt;STRONG&gt;Storage Blob Data Contributor&lt;/STRONG&gt; role to the nodes' managed identities (a scripted alternative is sketched below):&lt;/LI&gt;
&lt;/OL&gt;
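&lt;P&gt;If you prefer to script the role assignment, here is a minimal sketch using Az PowerShell (assuming the Az and Az.ConnectedMachine modules; the resource group names are examples):&lt;/P&gt;
&lt;P&gt;# Example names only - adjust to your environment.&lt;/P&gt;
&lt;P&gt;$sa = Get-AzStorageAccount -ResourceGroupName 'rg-witness' -Name 'cloudwitnessdemo'&lt;/P&gt;
&lt;P&gt;foreach ($node in 'TOAD03H09-VM24','TOAD03H09-VM25','TOAD03H09-VM26','TOAD03H09-VM27') {&lt;/P&gt;
&lt;P&gt;&amp;nbsp; $principalId = (Get-AzConnectedMachine -ResourceGroupName 'rg-arc' -Name $node).Identity.PrincipalId&lt;/P&gt;
&lt;P&gt;&amp;nbsp; New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName 'Storage Blob Data Contributor' -Scope $sa.Id&lt;/P&gt;
&lt;P&gt;}&lt;/P&gt;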
&lt;OL start="6"&gt;
&lt;LI&gt;Create the cluster using the New-Cluster cmdlet, for example:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;New-Cluster -Name ExampleCluster -Node TOAD03H09-VM24,TOAD03H09-VM25,TOAD03H09-VM26,TOAD03H09-VM27 -NOSTORAGE&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL start="7"&gt;
&lt;LI&gt;Create the cloud witness using the cluster nodes' managed identities:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Set-ClusterQuorum -CloudWitness -AccountName cloudwitnessdemo -UseManagedIdentity&amp;nbsp;-Cluster ExampleCluster&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Cluster &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;QuorumResource&lt;/P&gt;
&lt;P&gt;------- &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;--------------&lt;/P&gt;
&lt;P&gt;ExampleCluster &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Cloud Witness&lt;/P&gt;
&lt;H1&gt;Steps to create a Cloud Witness quorum resource using managed identity on an existing cluster:&lt;/H1&gt;
&lt;OL&gt;
&lt;LI&gt;Using the Azure Portal, create a storage account resource. In this case, we will create a storage account called &lt;STRONG&gt;cloudwitnessdemo&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Install the latest updates for Windows Server 2025 from Windows Update on each server (AKA cluster node).&lt;/LI&gt;
&lt;LI&gt;Connect each server (AKA cluster node) to &lt;STRONG&gt;Azure Arc&lt;/STRONG&gt; – this creates a Managed Identity for each server.&lt;/LI&gt;
&lt;LI&gt;Assign the &lt;STRONG&gt;Storage Blob Data Contributor&lt;/STRONG&gt; role to the nodes' managed identities.&lt;/LI&gt;
&lt;/OL&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;Create the cloud witness using the cluster nodes' managed identities. This deletes any existing cloud witness and creates a new one configured for managed identity:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Set-ClusterQuorum -CloudWitness -AccountName cloudwitnessdemo -UseManagedIdentity&amp;nbsp;-Cluster ExampleCluster&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Cluster &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;QuorumResource&lt;/P&gt;
&lt;P&gt;------- &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;--------------&lt;/P&gt;
&lt;P&gt;ExampleCluster &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Cloud Witness&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;(Optional) Check the witness assignment and the use of Azure Managed Identity:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get-ClusterResource -Cluster ExampleCluster -Name "Cloud Witness" | Get-ClusterParameter&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Object &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Value &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Type&lt;/P&gt;
&lt;P&gt;------ &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;---- &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;----- &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;----&lt;/P&gt;
&lt;P&gt;Cloud Witness AccountName &amp;nbsp; &amp;nbsp; &amp;nbsp; cloudwitnessdemo &amp;nbsp;String&lt;/P&gt;
&lt;P&gt;Cloud Witness EndpointInfo &amp;nbsp; &amp;nbsp; &amp;nbsp;core.windows.net &amp;nbsp; String&lt;/P&gt;
&lt;P&gt;Cloud Witness ContainerName &amp;nbsp; &amp;nbsp; msft-cloud-witness String&lt;/P&gt;
&lt;P&gt;Cloud Witness IsManagedIdentity 1 &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;UInt32&lt;/P&gt;</description>
      <pubDate>Sun, 22 Mar 2026 00:41:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/using-failover-clustering-cloud-witness-with-managed-identity-in/ba-p/4504438</guid>
      <dc:creator>Rob-Hindman</dc:creator>
      <dc:date>2026-03-22T00:41:39Z</dc:date>
    </item>
    <item>
      <title>Announcing Support for S2D Campus Cluster on Windows Server 2025</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/announcing-support-for-s2d-campus-cluster-on-windows-server-2025/ba-p/4477075</link>
      <description>&lt;P&gt;Applies to: Windows Server 2025 Failover Clustering with the 2025-12 Security Update (KB5072033) (26100.7462) applied on each Failover Cluster node.&lt;/P&gt;
&lt;P&gt;First, I need to &lt;STRONG&gt;THANK &lt;/STRONG&gt;the Microsoft MVPs – the external consultants we work closely with – who proposed this configuration, advocated for it, and helped to validate it! MVPs, we listen to you! Thank You Very Much! Coffee is on me!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;While this article applies to Windows Server 2025, please note that our friends and colleagues on the&amp;nbsp;&lt;STRONG&gt;Azure Local&lt;/STRONG&gt; product team call this configuration “Rack Aware Clustering” – see their articles here: &lt;A href="https://learn.microsoft.com/en-us/azure/azure-local/concepts/rack-aware-cluster-overview?view=azloc-2511" target="_blank" rel="noopener"&gt;Overview of Azure Local rack aware clustering (Preview) - Azure Local | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that the&amp;nbsp;&lt;STRONG&gt;S2D Campus Cluster&lt;/STRONG&gt; configuration supports two RACKs and is &lt;STRONG&gt;different &lt;/STRONG&gt;from the &lt;STRONG&gt;S2D Stretch Cluster&lt;/STRONG&gt; configuration that supports two geographically distant SITES. The S2D Stretch Cluster relies on Storage Replica to replicate volumes between sites, while the S2D Campus Cluster does not use Storage Replica and instead uses S2D replication between the cluster nodes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What is a Campus?&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Factories&lt;/LI&gt;
&lt;LI&gt;Business Parks / Office Parks&lt;/LI&gt;
&lt;LI&gt;Hospitals&lt;/LI&gt;
&lt;LI&gt;School Campuses, College Campuses&lt;/LI&gt;
&lt;LI&gt;Vessels / Cruise Ships&lt;/LI&gt;
&lt;LI&gt;Stadiums&lt;/LI&gt;
&lt;LI&gt;Any location with two strands of dark fibre cable between two rooms or buildings that can be defined as RACK fault domains in the failover cluster.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;We think that there are many opportunities - especially in Europe - that require two separate data rooms to meet NIS2 requirements.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Included in the 2025-12 Security Update (KB5072033) (26100.7462) for Windows Server 2025 is the&amp;nbsp;&lt;STRONG&gt;Rack Level Nested Mirror&lt;/STRONG&gt; (RLNM) for S2D, which improves resiliency for the S2D Campus Cluster by keeping the same number of copies in each rack – one copy per rack for a two-copy volume, and two copies per rack for a four-copy volume. With a 2+2 S2D Campus Cluster – meaning two failover clustering nodes in each rack – a new level of resiliency can be achieved with a four-copy volume: the RLNM will place a copy of the data on each node, increasing resiliency to “Rack + Node” – meaning that you can lose a rack AND a node and still have one copy of the data. We think that the 2+2 S2D Campus Cluster is a good tradeoff between cost and performance for many applications.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that seven servers are used in this design: four servers form the S2D Campus Cluster (VMs, applications, storage), there is an AD and DNS server in each rack, and there is a separate File Share Witness server.&lt;/P&gt;
&lt;P&gt;In designing S2D Campus Cluster solutions, it is very important to find the correct balance between cost and hardware redundancy. Achieving low Recovery Time Objective (RTO) and Recovery Point Objective (RPO) can require investing in redundant hardware. The business cost of VM and application downtime needs to be clearly understood. Businesses like hotels and colleges will have different RTOs and RPOs than businesses like airports and oil refineries.&lt;/P&gt;
&lt;P&gt;It’s also important to "practice failure" when validating S2D Campus Cluster solutions: test node failure, rack failure, rack-and-node failure, network switch failure, and disk failure – is the failover behavior acceptable? Were data corruption and data loss avoided?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What are the support requirements for the S2D Campus Cluster on Windows Server 2025?&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Install the Windows Server 2025 12B CU (KB5072033) on every node in the failover cluster.&lt;/LI&gt;
&lt;LI&gt;“Flat” S2D storage – all capacity drives, all flash (SSD or NVMe) – avoid using a caching tier and avoid using HDDs.&lt;/LI&gt;
&lt;LI&gt;Define exactly two RACK cluster fault domains – and place the cluster nodes in these two racks.&lt;/LI&gt;
&lt;LI&gt;Follow hardware OEM guidelines.&lt;/LI&gt;
&lt;LI&gt;Cluster quorum resource (File Share Witness, Disk Witness, Cloud Witness, or USB Witness) should be placed in a third room, separate from the rooms containing the racks.&lt;/LI&gt;
&lt;LI&gt;Recommended: Each rack should have a separate network path to the cluster quorum resource.&lt;/LI&gt;
&lt;LI&gt;Recommended: Redundant TOR switches, CORE switches, and dedicated networks for S2D storage traffic to minimize Single Points Of Failure (SPOF) and maximize workload uptime and durability.&lt;/LI&gt;
&lt;LI&gt;Recommended: 1ms latency (or less) between racks.&lt;/LI&gt;
&lt;LI&gt;Recommended: RDMA NICs and switches, which can achieve 30% CPU savings!&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Frequently Asked Questions (FAQs):&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;How can I measure network latency?&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Answer: We recommend downloading the PsPing utility:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sysinternals/downloads/psping" target="_blank" rel="noopener"&gt;PsPing - Sysinternals | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;With the Rack Level Nested Mirror, can I automatically convert an existing campus cluster with a three copy volume to a four copy volume?&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Answer: No, VMs will need to be stopped and data will need to be copied to an external location, RACK fault domains must be defined, and a new storage pool must be created to use the Rack Level Nested Mirror (RLNM). After the new storage pool is created, and volumes created from that storage pool, then data and VMs can be copied onto the new volumes.&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Will you support 1+1, 2+2, 3+3, 4+4, and 5+5 S2D Campus Cluster configurations?&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Answer: Yes. Note that Rack+Node resiliency is a special case with a 1+1 S2D Campus Cluster using a two-copy volume and a 2+2 S2D Campus Cluster using a four-copy volume.&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Can I create as many two-copy and four-copy volumes as I need, or are there limits?&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Answer: Yes, you can create as many two-copy and four-copy volumes as you need, given the capacity of the storage pool. There is a trade-off between resiliency and capacity: a two-copy volume delivers 50% capacity, and a four-copy volume delivers 25% capacity. We recommend that valuable VMs, applications, and data should be stored on four-copy volumes.&lt;/P&gt;
&lt;OL start="5"&gt;
&lt;LI&gt;For the S2D Campus Cluster on Windows Server 2025, are redundant core switches required? Are two TOR switches per RACK required?&lt;/LI&gt;
&lt;/OL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Answer: Core switches are recommended but optional. A single TOR switch per RACK is acceptable, but we note that it’s a single point of failure. Network infrastructure investments (switches, NICs, cabling) corresponding to the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements of the business.&lt;/P&gt;
&lt;P&gt;If you would like to see additional scenarios supported, please let us know so that we can prioritize development and validation and address your business needs!&lt;/P&gt;
&lt;P&gt;Please send your questions, comments, and feature requests about the S2D Campus Cluster to: &lt;A href="mailto:wsfc_s2dcampuscluster@microsoft.com" target="_blank" rel="noopener"&gt;wsfc_s2dcampuscluster@microsoft.com&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;PowerShell Script for Deploying S2D Campus Cluster:&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Create a test cluster but do not create storage:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;New-Cluster -Name TestCluster -Node Node1, Node2, Node3, Node4 -NoStorage&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Define the fault domains for the cluster – two nodes are in “Room1” and two nodes are in “Room2”:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Set-ClusterFaultDomain -XML '&amp;lt;Topology&amp;gt;&amp;lt;Site Name="Redmond"&amp;gt;&amp;lt;Rack Name="Room1"&amp;gt;&amp;lt;Node Name="Node1"/&amp;gt;&amp;lt;Node Name="Node2"/&amp;gt;&amp;lt;/Rack&amp;gt;&amp;lt;Rack Name="Room2"&amp;gt;&amp;lt;Node Name="Node3"/&amp;gt;&amp;lt;Node Name="Node4"/&amp;gt;&amp;lt;/Rack&amp;gt;&amp;lt;/Site&amp;gt;&amp;lt;/Topology&amp;gt;'&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Alternatively you can define the fault domains using New-ClusterFaultDomain and Set-ClsuterFaultDomain cmdlets:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#New-ClusterFaultDomain -Name Room1 -FaultDomainType Rack&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#New-ClusterFaultDomain -Name Room2 -FaultDomainType Rack&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#Set-ClusterFaultDomain -Name Room1 -FaultDomain Redmond&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#Set-ClusterFaultDomain -Name Room2 -FaultDomain Redmond&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#Set-ClusterFaultDomain -Name Node1 -FaultDomain Room1&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#Set-ClusterFaultDomain -Name Node2 -FaultDomain Room1&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#Set-ClusterFaultDomain -Name Node3 -FaultDomain Room2&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;#Set-ClusterFaultDomain -Name Node4 -FaultDomain Room2&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Note that you can check your fault domains using the Get-ClusterFaultDomain cmdlet.&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Add Storage Spaces Direct (S2D) Storage to the cluster – note that Enable-ClusterS2D cmdlet can also be used:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Enable-ClusterStorageSpacesDirect&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Update the Storage Pool&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Get-StoragePool S2D* | Update-StoragePool&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Check that the StoragePool version is 29:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;(Get-CimInstance -Namespace root/microsoft/windows/storage -ClassName MSFT_StoragePool -Filter 'IsPrimordial = false').CimInstanceProperties['Version'].Value&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Check that the Storage Pool’s FaultDomainAwareness property is set to StorageRack:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Get-StoragePool -FriendlyName &amp;lt;S2DStoragePool&amp;gt; | Format-List&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Note that resiliency is specified when the volume is created – our idea with the S2D Campus cluster is that IT Admins can create as many two-copy and four-copy volumes as needed. Valuable workload (VMs) and data should go on the 4-copy volumes.&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Create a four-copy volume on the storage pool, fixed provisioned:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;New-Volume -FriendlyName "FourCopyVolumeFixed" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3 -ProvisioningType Fixed -NumberOfDataCopies 4 -NumberOfColumns 3&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Optional - Create a four-copy volume on the storage pool, thinly provisioned:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;New-Volume -FriendlyName "FourCopyVolumeThin" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3 -ProvisioningType Thin -NumberOfDataCopies 4 -NumberOfColumns 3&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Optional - Create a two-copy volume on the storage pool, fixed provisioned:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;New-Volume -FriendlyName "TwoCopyVolumeFixed" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -ProvisioningType Fixed&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-6"&gt;&lt;STRONG&gt;#Optional - Create a two-copy volume on the storage pool, thinly provisioned:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;New-Volume -FriendlyName "TwoCopyVolumeThin" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 500GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -ProvisioningType Thin&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;How can you increase the S2D Campus Cluster from 1+1 to 2+2 to 3+3 to 4+4 to 5+5?&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;Add one node at a time to a rack and then to the cluster - first to the Rack fault domain using New-ClusterFaultDomain, then to the cluster using Add-ClusterNode, for example:&lt;/P&gt;
&lt;P&gt;$nodeName = 'Node5'&lt;/P&gt;
&lt;P&gt;$rackFaultDomainName = 'Room1'&lt;/P&gt;
&lt;P&gt;New-ClusterFaultDomain -Type Node -Name $nodeName -FaultDomain $rackFaultDomainName&lt;/P&gt;
&lt;P&gt;Add-ClusterNode -Name $nodeName&lt;/P&gt;
&lt;P&gt;$nodeName = 'Node6'&lt;/P&gt;
&lt;P&gt;$rackFaultDomainName = 'Room2'&lt;/P&gt;
&lt;P&gt;New-ClusterFaultDomain -Type Node -Name $nodeName -FaultDomain $rackFaultDomainName&lt;/P&gt;
&lt;P&gt;Add-ClusterNode -Name $nodeName&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Here are some additional articles on the S2D Campus Cluster for WS 2025:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Luka Manojlovic&lt;/STRONG&gt; (MVP) Blogs: &lt;A href="https://luka.manojlovic.net/2025/12/14/windows-server-2025-storage-spaces-direct-s2d-campus-cluster-part-1-preparation-and-deployment/" target="_blank" rel="noopener"&gt;Windows Server 2025 – Storage Spaces Direct (S2D) Campus Cluster – part 1 – Preparation and deployment | Luka Manojlovic&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://luka.manojlovic.net/2026/01/01/windows-server-2025-storage-spaces-direct-s2d-campus-cluster-how-does-it-handle-problems-disaster-scenarios-part-2-sequence-of-failures-n2-in-rack-2-disk-in-n1-in-rack-2/" target="_blank" rel="noopener"&gt;Windows Server 2025 – Storage Spaces Direct (S2D) Campus Cluster – How does it handle problems / disaster scenarios – part 2 – sequence of failures – N2 in Rack 2, disk in N1 in Rack 2, N1 in Rack 2 (whole Rack 2 offline), N2 in Rack 1 – cluster quorum failure … | Luka Manojlovic&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://luka.manojlovic.net/2026/01/01/windows-server-2025-storage-spaces-direct-s2d-campus-cluster-how-does-it-handle-problems-disaster-scenarios-part-3-bring-online-the-only-remaining-node-fi/" target="_blank" rel="noopener"&gt;Windows Server 2025 – Storage Spaces Direct (S2D) Campus Cluster – How does it handle problems / disaster scenarios – part 3 – Bring online the only remaining node – FixQuorum, Force Cluster Start … | Luka Manojlovic&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Darryl van der Peijl&lt;/STRONG&gt; (MVP) Blog: &lt;A href="https://splitbrain.com/windows-server-campus-clusters/" target="_blank" rel="noopener"&gt;Windows Server Campus Clusters - Splitbrain&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Weithenn Wang&lt;/STRONG&gt; (MVP) Blog: &lt;A href="https://www.weithenn.org/2025/12/s2d-campus-cluster-on-windows-server-2025.html" target="_blank" rel="noopener"&gt;S2D Campus Cluster on Windows Server 2025 ~ 不自量力 の Weithenn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 29 Jan 2026 17:51:53 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/announcing-support-for-s2d-campus-cluster-on-windows-server-2025/ba-p/4477075</guid>
      <dc:creator>Rob-Hindman</dc:creator>
      <dc:date>2026-01-29T17:51:53Z</dc:date>
    </item>
    <item>
      <title>Announcing Support for S2D and SAN Coexistence</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/announcing-support-for-s2d-and-san-coexistence/ba-p/4475351</link>
      <description>&lt;P&gt;Applies to: Windows Server 2022 and Windows Server 2025 Failover Clustering&lt;/P&gt;
&lt;P&gt;Failover Clustering has supported SAN storage since Windows NT 4.0 Server, Enterprise Edition was released in 1997!&lt;/P&gt;
&lt;P&gt;In addition, Failover Clustering has supported Storage Spaces Direct (S2D) since Windows Server 2016!&lt;/P&gt;
&lt;P&gt;Many customers have been asking if they can use (or reuse) their existing iSCSI and Fibre Channel SANs together with S2D – and now the answer is &lt;STRONG&gt;Yes&lt;/STRONG&gt;!&lt;/P&gt;
&lt;P&gt;Now Windows Server 2022 and Windows Server 2025 Failover Clusters can use both S2D Cluster Shared Volumes (CSVs) and separate SAN CSVs (Fibre Channel or iSCSI connected LUNs) together in the same single rack cluster.&lt;/P&gt;
&lt;P&gt;This is more evidence that we listen to your feedback! 😊 Customers with &lt;STRONG&gt;existing SANs&lt;/STRONG&gt; – many of them looking for a new virtualization platform – have told us that they want to use their SANs with Hyper-V and S2D.&lt;/P&gt;
&lt;P&gt;Note that Hyper-V &lt;STRONG&gt;VM Storage Live Migration&lt;/STRONG&gt; (“Move Virtual Machine Storage”) works great for moving VMs between S2D and SAN storage without any VM disruption!&lt;/P&gt;
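&lt;P&gt;For example, a minimal sketch (the VM name and destination path are hypothetical):&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Move-VMStorage -VMName 'VM01' -DestinationStoragePath 'C:\ClusterStorage\S2DVDisk01\VM01'&lt;/STRONG&gt;&lt;/P&gt;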
&lt;P&gt;Why are we adding support for S2D and SAN coexistence in the same single rack failover cluster?&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;If you want to use your SAN(s) to &lt;STRONG&gt;migrate data and VMs&lt;/STRONG&gt; to and from S2D storage. For example, over time, workload requirements can change, and you may use your SAN(s) to move data and VMs between clusters, or you might choose to move data and VMs from SAN storage to S2D for faster data access for the workloads depending on usage or demand.&lt;/LI&gt;
&lt;LI&gt;If you want to use your SANs for &lt;STRONG&gt;backup and restore of data and VMs&lt;/STRONG&gt;, which can add protection against ransomware.&lt;/LI&gt;
&lt;LI&gt;If you want to use S2D storage and/or SAN storage for &lt;STRONG&gt;data ingestion for AI and ML training and inference&lt;/STRONG&gt;. For example, AI model training may be faster using S2D storage in some cases, but SAN storage could be used to store multiple versions of the model as it’s being developed and validated. More choices for data management!&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;What are the support requirements for using SAN storage (SAN LUNs) (Fibre Channel, iSCSI, and iSCSI Target) together with S2D storage CSVs in the same single rack cluster?&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;SAN disks are &lt;STRONG&gt;not included in a Storage Spaces Direct storage pool&lt;/STRONG&gt;. Ensure that only Direct Attached Storage (DAS) drives are used in S2D:
&lt;OL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;PowerShell: &lt;STRONG&gt;Get-PhysicalDisk -CanPool $True&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;SAN volumes should be formatted as NTFS before being added to CSV.&lt;/LI&gt;
&lt;LI&gt;S2D volumes should be formatted as ReFS before being added to CSV.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;How can I create a single rack failover cluster with both S2D and SAN storage?&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Create a failover cluster that initially doesn’t use storage:
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;PowerShell:&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;New-Cluster -Name &amp;lt;clustername&amp;gt; -Node &amp;lt;node1name&amp;gt;,&amp;lt;node2name&amp;gt;,&amp;lt;node3name&amp;gt; -NoStorage&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Enable Storage Spaces Direct – this will add eligible non-boot DAS drives to the S2D Storage Pool:
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;PowerShell:&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Enable-ClusterS2D&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Create volumes from the S2D Storage Pool, format using ReFS, and add them to Cluster Shared Volumes:
&lt;OL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;PowerShell:&amp;nbsp;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;New-Volume -StoragePoolFriendlyName S2D* -FriendlyName S2DVDisk01 -FileSystem CSVFS_REFS -Size 200GB&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;Configure MPIO (if required) for SAN LUNs, add them to the failover cluster’s Available Storage (“Add Disk”), and then add them to CSV (“Add to Cluster Shared Volumes”) – see the example after this list.&lt;/LI&gt;
&lt;/OL&gt;
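&lt;P&gt;As a minimal PowerShell sketch of step 4 (the disk name is an example; the generated name varies):&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Get-ClusterAvailableDisk | Add-ClusterDisk&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Add-ClusterSharedVolume -Name 'Cluster Disk 1'&lt;/STRONG&gt;&lt;/P&gt;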
&lt;P&gt;How can I validate both S2D and SAN storage in the same single rack failover cluster? It’s important to check that all the nodes in the cluster are connected to the SAN LUNs and that S2D is configured correctly. Here’s how you can check:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Schedule a maintenance window when VMs and other roles running on the SAN LUNs can be stopped – OR - move the VMs and other roles off the CSV disks based on SAN LUNs and onto the CSV disks based on S2D.&lt;/LI&gt;
&lt;LI&gt;Take CSV disks based on the SAN LUNs &lt;STRONG&gt;Offline&lt;/STRONG&gt; (i.e., “Take Offline”).&lt;/LI&gt;
&lt;LI&gt;CSV disks using S2D can remain online.&lt;/LI&gt;
&lt;LI&gt;Run cluster validate for both “Storage” and “Storage Spaces Direct” validation suites:
&lt;OL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;PowerShell:&amp;nbsp;&lt;STRONG&gt;Test-Cluster -Include "Storage", "Storage Spaces Direct"&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;/LI&gt;
&lt;LI&gt;Both the SCSI-3 PR tests for SAN storage and the S2D tests will be performed, and a validation report will be generated.&lt;/LI&gt;
&lt;LI&gt;Bring the SAN LUNs online (i.e., “Bring Online”), and restart VMs or other roles if they were stopped.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Note – remember to avoid renaming or evicting cluster nodes using S2D storage.&lt;/P&gt;
&lt;P&gt;We recognize that some management features are different between S2D and SAN LUNs, but we think that most users will benefit from being able to combine S2D and SAN storage.&lt;/P&gt;
&lt;P&gt;For more details on Failover Cluster storage architectures, see this article: &lt;A href="https://learn.microsoft.com/en-us/windows-server/failover-clustering/storage-architectures" target="_blank" rel="noopener"&gt;Failover Clustering Storage Architectures in Windows Server | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;To see SAN Storage systems that have qualified for the Windows Server 2022 or Windows Server 2025 logo, browse the Windows Server Catalog Hardware: &lt;A href="https://www.windowsservercatalog.com/hardware?category=8244f780-07fe-4ea7-a3c4-7a3e5888baad%2Cmixedc8a9bd3c-c75a-4bfa-92d0-f4f4cbbd564e&amp;amp;os=9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d%2C85431dac-889a-4c15-aa01-733786b71606" target="_blank" rel="noopener"&gt;Hardware | Windows Server Catalog&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;If you would like to see additional scenarios supported, please let us know so that we can prioritize development and validation and address your business needs!&lt;/P&gt;
&lt;P&gt;Please send your questions, comments, and feature requests about S2D and SAN Coexistence to: &lt;A href="mailto:WSFC_S2DandSAN@microsoft.com" target="_blank" rel="noopener"&gt;WSFC_S2DandSAN@microsoft.com&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;-Rob.&lt;/P&gt;
&lt;P&gt;Pleased to see that &lt;STRONG&gt;Hitachi Vantara&lt;/STRONG&gt; is supporting S2D and SAN Coexistence: &lt;A href="https://community.hitachivantara.com/blogs/shih-chieh-cheng/2026/03/05/wsfc-2025-hyper-v-blended-storage-coexistence-s2d" target="_blank"&gt;WSFC 2025 (Hyper-V) — Blended Storage Coexistence (S2D + Hitachi VSP One Block SAN)&lt;/A&gt; (also on LinkedIn here: &lt;A href="https://www.linkedin.com/pulse/wsfc-2025-hyper-v-blended-storage-coexistence-s2d-hitachi-jeff-cheng-4ba2c/" target="_blank"&gt;WSFC 2025 (Hyper-V) — Blended Storage Coexistence (S2D + Hitachi VSP One Block SAN) | LinkedIn&lt;/A&gt;)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;-RH.&lt;/P&gt;
      <pubDate>Fri, 06 Mar 2026 00:09:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/announcing-support-for-s2d-and-san-coexistence/ba-p/4475351</guid>
      <dc:creator>Rob-Hindman</dc:creator>
      <dc:date>2026-03-06T00:09:39Z</dc:date>
    </item>
    <item>
      <title>New Cluster-Wide Control For Virtual Machine Live Migrations In Windows Server and Azure Stack HCI</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/new-cluster-wide-control-for-virtual-machine-live-migrations-in/ba-p/3709680</link>
      <description>&lt;P&gt;&lt;EM&gt;Applies to:&amp;nbsp; Windows Server 2022, Azure Stack HCI, version 21H2 and later versions of both&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P aria-level="1"&gt;&lt;EM&gt;Overview:&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;There is a new enhancement in the ability to manage the number of parallel live migrations within a cluster, making it easier to change and ensuring consistency.&amp;nbsp; Previously, changing it required setting it on each node of the cluster, and remembering to set it when a new server is added to the cluster.&amp;nbsp; This meant it was easy to have inconsistencies across the nodes.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Hyper-v has a setting to limit the number of live migrations that a server can participate in. If an administrator wanted to change this value to be optimized for their systems, they would have to go to each node of a failover cluster and change the per-server Hyper-V property.&amp;nbsp; They would also have to remember to set this property for any new node added to the cluster.&amp;nbsp; This meant that it was difficult to ensure consistency over time.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With the installation of the September 2022 Windows Update package or later, The new cluster property &lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;MaximumParallelMigrations&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; was added that allows an administrator to set the value once and have each node of the cluster inherit the setting. When new servers are added to the cluster, the cluster value will be inherited.&amp;nbsp; This ensures consistency and makes it easy to adjust the system.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P aria-level="1"&gt;&lt;STRONG&gt;Using the new property:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The default when the update is applied is MaximumParallelMigrations=1.&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To get the current value the cluster property:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;(Get-Cluster).MaximumParallelMigrations&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The value can be changed by using the command:&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;(Get-Cluster).MaximumParallelMigrations=&amp;lt;value&amp;gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To view the nodes property:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;EM&gt;Get-VMHost -ComputerName &amp;lt;name1&amp;gt;,&amp;lt;name2&amp;gt;,&amp;lt;name3&amp;gt;&amp;nbsp; | FT Name, MaximumVirtualMachineMigrations&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P aria-level="1"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P aria-level="1"&gt;&lt;STRONG&gt;More Information&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559738&amp;quot;:240,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The first time a monthly update with this change is applied, the local MaximumParallelMigrations setting will be converted to a cluster level setting and set to 1.&amp;nbsp; Based on testing this is the recommended default with the safest and most reliable value as far as reliability with live migrations across the various types of systems deployed today.&amp;nbsp; If this parameter is changed by an administrator after it is added to the system via a monthly update, the new value will persist.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We recommend you test and validate the reliability of more than one parallel live migration with your hardware and in your environment before setting to a higher number.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:259}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P aria-level="1"&gt;&lt;STRONG&gt;References&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/t5/failover-clustering/optimizing-hyper-v-live-migrations-on-an-hyperconverged/ba-p/396609" target="_blank"&gt;Optimizing Hyper-V Live Migrations on an Hyperconverged Infrastructure - Microsoft Tech Community&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/powershell/module/hyper-v/set-vmhost?view=windowsserver2022-ps" target="_blank"&gt;Set-VMHost (Hyper-V) | Microsoft Learn&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://catalog.s.download.windowsupdate.com/c/msdownload/update/software/secu/2022/09/windows10.0-kb5017311-x64_a6259a38b5a1f9728e447b803193b984fac670bf.msu" target="_blank"&gt;2022-09 Cumulative Update for Azure Stack HCI, version 20H2 and Windows Server 2019 Datacenter: Azure Edition for x64-based Systems (KB5017311)&lt;/A&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jan 2023 17:57:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/new-cluster-wide-control-for-virtual-machine-live-migrations-in/ba-p/3709680</guid>
      <dc:creator>Steven Ekren</dc:creator>
      <dc:date>2023-01-05T17:57:36Z</dc:date>
    </item>
    <item>
      <title>New features of Windows Server 2022 Failover Clustering</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/new-features-of-windows-server-2022-failover-clustering/ba-p/2677427</link>
      <description>&lt;P data-unlink="true"&gt;&lt;SPAN&gt;Greetings again Windows Server and Failover Cluster fans!!&amp;nbsp; &lt;A href="https://twitter.com/JohnMarlin_MSFT" target="_self"&gt;John Marlin&lt;/A&gt; here and I own the Failover Clustering feature within the Microsoft product team.&amp;nbsp; In this blog, I will be giving an overview of the new features in Windows Server 2022 Failover Clustering.&amp;nbsp; Some of these will be talked about at the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;upcoming&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://www.microsoft.com/en-us/cloud-platform/windows-server-summit" target="_self"&gt;Windows Server Summit&lt;/A&gt;&lt;SPAN&gt;.&amp;nbsp; One note that I will say is that this particular blog post will not cover the new features for Azure Stack HCI version 21H2.&amp;nbsp; That is another blog for another time.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;SPAN&gt;So let's get this started.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Clustering Affinity and AntiAffinity&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;SPAN&gt;Affinity is a rule you would set up that establishes a relationship between two or more roles (i,e, virtual machines, resource groups, and so on) to keep them together. AntiAffinity is the same but is used to try to keep the specified roles apart from each other.&amp;nbsp;&lt;/SPAN&gt; In Azure Stack HCI version 20H2, we added this and now brought it over to Windows Server as well.&amp;nbsp; In previous versions of Windows Server, we only had AntiAffinity&amp;nbsp;capabilities. This was with the use of &lt;A href="https://docs.microsoft.com/en-us/previous-versions/windows/desktop/mscs/groups-antiaffinityclassnames" target="_self"&gt;AntiAffinityClassNames&lt;/A&gt; and &lt;A href="https://docs.microsoft.com/en-us/previous-versions/windows/desktop/mscs/clusters-clusterenforcedantiaffinity" target="_self"&gt;ClusterEnforcedAntiAffinity&lt;/A&gt;.&amp;nbsp; We took a look at what we were doing and made it better.&amp;nbsp; Now, not only do we have AntiAffinity, but also Affinity.&amp;nbsp; You can configure this new Affinity and AntiAffinity with PowerShell commands and have four options.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI data-unlink="true"&gt;Same Fault Domain&lt;/LI&gt;
&lt;LI data-unlink="true"&gt;Same Node&lt;/LI&gt;
&lt;LI data-unlink="true"&gt;Different Fault Domain&lt;/LI&gt;
&lt;LI data-unlink="true"&gt;Different Node&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;The below doc discusses the feature in more detail including how to configure it.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px" data-unlink="true"&gt;&lt;EM&gt;Cluster Affinity&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px" data-unlink="true"&gt;&lt;EM&gt;&lt;A href="https://docs.microsoft.com/en-us/azure-stack/hci/manage/vm-affinity" target="_blank" rel="noopener"&gt;https://docs.microsoft.com/en-us/azure-stack/hci/manage/vm-affinity&lt;/A&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;For those that still use AntiAffinityClassNames, we will still honor it.&amp;nbsp; Which means, upgrading to Windows Server 2022&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;AutoSites&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;AutoSites is another feature brought over from Azure Stack HCI.&amp;nbsp; AutoSites is basically what is says.&amp;nbsp; When you configure Failover Clustering, it will first look into Active Directory to see if Sites are configured.&amp;nbsp; For example:&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;If they are and the nodes are included in a site, we will automatically create site fault domains and put the nodes in the fault domain they are a member of.&amp;nbsp; For example, if you had two nodes in a Redmond site and two nodes in a Seattle site, it would look like this once the cluster is created.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;As you can see, we will create the site fault domain name the same as what it is in Active Directory.&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;If sites are not configured within Active Directory, we will then look at the networks to see if there are differences as well as networks common to each other.&amp;nbsp; For example, say you had the nodes with this network configuration:&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Node1 = 1.0.0.11 with subnet 255.0.0.0&lt;/P&gt;
&lt;P data-unlink="true"&gt;Node2 = 1.0.0.12 with subnet 255.0.0.0&lt;/P&gt;
&lt;P data-unlink="true"&gt;Node3 = 172.0.0.11 with subnet 255.255.0.0&lt;/P&gt;
&lt;P data-unlink="true"&gt;Node1 = 172.0.0.12 with subnet 255.255.0.0&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;We will see this as multiple nodes in one subnet and multiple nodes in another.&amp;nbsp; Therefore, these nodes are in separate sites and it will configure sites for you automatically.&amp;nbsp; With this configuration, it will create the site fault domains with the names of the networks.&amp;nbsp; For example:&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;This will make things easier when you want to create a stretched Failover Cluster.&amp;nbsp; Please note that Storage Spaces Direct cannot be stretched in Windows Server 2022 as it can be in Azure Stack HCI.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Granular Repair&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Since we just mentioned Storage Spaces Direct, one of the talked about features of it is repair.&amp;nbsp; As a refresher, as data is written to drives, it is spread throughout all drives on all the nodes.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;CENTER&gt;&lt;img /&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;When a node goes down for maintenance, crashes, or whatever the case may be, once it comes back up, there is a "repair" job run where data is moved around and onto the drives, if necessary, of the node that came back.&amp;nbsp; A repair is basically a resync of the data between all the nodes.&amp;nbsp; Depending on the amount of time the node was down, the longer it could take for the repair to complete.&amp;nbsp; A repair in previous versions would take the extent (block of data) that is normally 1 gigabyte or 256 megabyte in size and resync it in its entirety.&amp;nbsp; It did not matter how much of the extent was changed (for example 1 kilobyte), the entire extent is copied.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;In Windows Server 2022, we have changed this thinking and now work off of "sub-extents".&amp;nbsp; A sub-extent is only a portion of the entire extent.&amp;nbsp; This is normally set at the interleave setting which is 256 kilobytes.&amp;nbsp; Now, when 1 kilobyte of a 1 gigabyte extent is changed, we will only move around the 256 kilobyte sub-extent.&amp;nbsp; This will make repair times much faster and quicker to complete.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;One other thing we considered was, when a repair/resync occurs, it can affect production due to the CPU resources it must use.&amp;nbsp; To combat that, we also added the capability to throttle the resources up or down, depending on when it may be done.&amp;nbsp; For example, if you need a repair/resync to run during production hours, you need to keep performance of your production needs to remain up.&amp;nbsp; Therefore, you may want to set it on low so it more runs in the background.&amp;nbsp; However, if you were to do it overnight on a weekend, you can afford to crank it up to a higher setting so it completes faster.&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The storage repair speed settings are:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="table-scroll-wrapper has-inner-focus" tabindex="0" role="group" aria-label="Horizontally scrollable data"&gt;
&lt;TABLE class="table" style="border-style: hidden;"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH width="133px"&gt;&lt;STRONG&gt;Setting&lt;/STRONG&gt;&lt;/TH&gt;
&lt;TH width="80px"&gt;&lt;STRONG&gt;Queue depth&lt;/STRONG&gt;&lt;/TH&gt;
&lt;TH width="270px"&gt;&lt;STRONG&gt;Resource allocation&lt;/STRONG&gt;&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="133px"&gt;
&lt;P&gt;Very low&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;1&lt;/TD&gt;
&lt;TD width="270px"&gt;Most resources to active workloads&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="133px"&gt;Low&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;2&lt;/TD&gt;
&lt;TD width="270px"&gt;More resources to active workloads&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="133px"&gt;Medium (default)&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;4&lt;/TD&gt;
&lt;TD width="270px"&gt;Balances workloads and repairs&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="133px"&gt;High&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;8&lt;/TD&gt;
&lt;TD width="270px"&gt;More resources to resync and repairs&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="133px"&gt;Very high&lt;/TD&gt;
&lt;TD width="80px" class="lia-align-center"&gt;16&lt;/TD&gt;
&lt;TD width="270px"&gt;Most resources to resync and repairs&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information regarding resync speeds, please refer to the below article:&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;EM&gt;Adjustable storage repair speed in Azure Stack HCI and Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;EM&gt;&lt;A href="https://docs.microsoft.com/en-us/azure-stack/hci/manage/storage-repair-speed" target="_blank" rel="noopener"&gt;https://docs.microsoft.com/en-us/azure-stack/hci/manage/storage-repair-speed&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&lt;FONT size="5"&gt;&lt;STRONG&gt;Cluster Shared Volumes and Bitlocker&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;Cluster Shared Volumes (CSV) enable multiple nodes in a Windows Server Failover Cluster or Azure Stack HCI to simultaneously have read-write access to the same LUN (disk) that is provisioned as an NTFS volume.&amp;nbsp;&amp;nbsp;BitLocker Drive Encryption is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-unlink="true"&gt;BitLocker on volumes within a cluster are managed based on how the cluster service "views" the volume to be protected.&amp;nbsp; BitLocker will unlock protected volumes without user intervention by attempting protectors in the following order:&lt;/P&gt;
&lt;P data-unlink="true"&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Clear Key&lt;/LI&gt;
&lt;LI&gt;Driver-based auto-unlock key&lt;/LI&gt;
&lt;LI&gt;ADAccountOrGroup protector
&lt;UL&gt;
&lt;LI&gt;Service context protector&lt;/LI&gt;
&lt;LI&gt;User protector&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Registry-based auto-unlock key&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Failover Cluster requires the Active Directory-based protector option (#3 above) for a cluster disk resource or CSV resources.&amp;nbsp; The encryption protector is a SID-based protector where the account being used is the Cluster Name Object (CNO) that is created in Active Directory.&amp;nbsp; Because it is Active Directory-based, a domain controller must be available in order to obtain the key protector and mount the drive.&amp;nbsp; If a domain controller is not available or is slow in responding, the clustered drive will not mount.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
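&lt;P&gt;As a minimal sketch of that Active Directory-based protector using the standard BitLocker cmdlets (the volume path and cluster account name are hypothetical; the new Windows Server 2022 cluster cmdlets mentioned in the next paragraph are not named in this post, so they are not shown):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;# Enable BitLocker on the CSV with a recovery password, then add the CNO SID-based protector&lt;BR /&gt;Enable-BitLocker -MountPoint "C:\ClusterStorage\Volume1" -RecoveryPasswordProtector&lt;BR /&gt;Add-BitLockerKeyProtector -MountPoint "C:\ClusterStorage\Volume1" -AdAccountOrGroupProtector -AdAccountOrGroup "CONTOSO\MyCluster$"&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;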
&lt;P&gt;With this thinking, we needed to have a "backup" plan.&amp;nbsp; With Windows Server 2022, when a drive is enabled for Bitlocker encryption while it is a part of Failover Cluster, we will now create an additional key protector just for cluster itself.&amp;nbsp; By doing this, it will still go out to a domain controller first to get the key.&amp;nbsp; If the domain controller is not available, it will then use the locally kept additional key to mount the drive.&amp;nbsp; The default will always be to go to the domain controller first.&amp;nbsp; We have also built in the ability to manually mount a cluster drive using new PowerShell cmdlets and passing the locally kept recovery key.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another benefit is that this opens up the ability to BitLocker-protect drives that are part of a workgroup or cross-domain cluster, where a Cluster Name Object does not exist.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;SMB Encryption&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Windows Server 2022 SMB Direct now supports encryption.&amp;nbsp; Previously, enabling SMB encryption disabled direct data placement, making RDMA performance as slow as TCP.&amp;nbsp; Now data is encrypted before placement, leading to relatively minor performance degradation while adding AES-128 and AES-256 protected packet privacy.&amp;nbsp; You can enable encryption using &lt;A href="https://docs.microsoft.com/windows-server/manage/windows-admin-center/overview" target="_self"&gt;Windows Admin Center&lt;/A&gt;, &lt;A href="https://docs.microsoft.com/powershell/module/smbshare/set-smbserverconfiguration" target="_self"&gt;Set-SmbServerConfiguration&lt;/A&gt;, or a Universal Naming Convention (UNC) Hardening group policy.&amp;nbsp; Furthermore, Windows Server Failover Clusters now support granular control of encrypting intra-node storage communications for Cluster Shared Volumes (CSV) and the storage bus layer (SBL). This means that when using Storage Spaces Direct and SMB Direct, you can decide to encrypt the east-west communications within the cluster itself for higher security.&lt;/P&gt;
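&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, a minimal sketch of turning on global SMB encryption with the standard cmdlet linked above (run on each node; this is not a cluster-specific command):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;# Require encryption for all SMB sessions served by this node&lt;BR /&gt;Set-SmbServerConfiguration -EncryptData $true -Force&lt;/FONT&gt;&lt;/P&gt;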
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;New Cluster Resource Types&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Cluster&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/previous-versions/windows/desktop/mscs/resources" target="_blank" rel="noopener" data-linktype="relative-path"&gt;resources&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;are categorized by type. Failover Clustering defines several types of resources and provides&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://docs.microsoft.com/en-us/previous-versions/windows/desktop/mscs/resource-dlls" target="_blank" rel="noopener" data-linktype="relative-path"&gt;resource DLLs&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;to manage these types.&amp;nbsp; In Windows Server 2022, we have added three new resource types.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HCS Virtual Machine&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Building a great management API for Docker was important for Windows Server Containers. There's a ton of really cool low-level technical work that went into enabling containers on Windows, and we needed to make sure they were easy to use. This seems very simple, but figuring out the right approach was surprisingly tricky. Our first thought was to extend our existing management technologies (e.g. WMI, PowerShell) to containers. After investigating, we concluded that they weren’t optimal for Docker, and started looking at other options.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;After a bit of thinking, we decided to go with a new approach. We created a new management service called the Host Compute Service (HCS), which acts as a layer of abstraction above the low level functionality. The HCS was a stable API Docker could build upon, and it was also easier to use. Making a Windows Server Container with the HCS is just a single API call. Making a Hyper-V Container instead just means adding a flag when calling into the API.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Looking at the architecture in Linux:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;CENTER&gt;&lt;img /&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Looking at the architecture in Windows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;CENTER&gt;&lt;img /&gt;&lt;/CENTER&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;HCS Virtual Machine lets you create a virtual machine using the HCS APIs rather than the Virtual Machine Management Service (VMMS). &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;NFS Multi Server Namespace&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you are not familiar with an NFS Multi Server Namespace, think of a tree with several branches.&amp;nbsp; An NFS Multi Server Namespace allows for the single namespace to extend out to multiple servers by the use of a referral.&amp;nbsp; With this referral, you can integrate data from multiple NFS Servers into a single namespace.&amp;nbsp; NFS Clients would connect to this namespace and be referred to a selected NFS Server from one of its branches.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Storage Bus Cache&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In all things transparent, this one is a little bit of a reach, but Failover Clustering is needed (sort of).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The storage bus cache for Storage Spaces on standalone servers can significantly improve read and write performance, while maintaining storage efficiency and keeping the operational costs low. Similar to its implementation for Storage Spaces Direct, this feature binds together faster media (for example, SSD) with slower media (for example, HDD) to create tiers. By default, only a portion of the faster media tier is reserved for the cache.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What makes this the bit of a reach is that in order to use storage bus cache, the Failover Clustering feature must be installed but the machine cannot be a member of a Cluster.&amp;nbsp; I.E. Add the feature and move on.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;More information on Storage Bus Cache can be found here:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Tutorial: Enable storage bus cache with Storage Spaces on standalone servers&lt;/P&gt;
&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-storage-bus-cache" target="_blank" rel="noopener"&gt;https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-storage-bus-cache&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;Twitter:&amp;nbsp;@Johnmarlin_MSFT&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Sep 2021 20:02:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/new-features-of-windows-server-2022-failover-clustering/ba-p/2677427</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2021-09-01T20:02:49Z</dc:date>
    </item>
    <item>
      <title>Failover Clustering in Azure</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/failover-clustering-in-azure/ba-p/2554341</link>
      <description>&lt;P&gt;Azure is a cloud computing platform with an ever-expanding set of services to help you build solutions to meet your business goals. Azure services range from simple web services for hosting your business presence in the cloud to running fully virtualized computers for you to run your custom software solutions.&amp;nbsp; With over 60 regions globally, 200+ products, and over 17,000 services and applications, Azure has everything you need in a cloud.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One of the products that can serve as the compute infrastructure for your service or application is Failover Clustering.&amp;nbsp; Failover Clustering can be a traditional cluster or it can be running Storage Spaces Direct.&amp;nbsp; No matter the choice, there are a few configuration changes that must be made post cluster creation to ensure connectivity.&amp;nbsp; Starting in Windows Server 2019, and moving forward, we have added detection into the cluster creation process that will automatically do some of this configuration for you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's first talk about the Cluster Network Name.&amp;nbsp;&amp;nbsp;The Cluster Network Name is used to provide an alternate computer name for an entity that exists on a network. When it is created, it will also create a Cluster IP Address resource that provides an identity to the group, allowing the group to be accessed by network clients.&amp;nbsp; When in Azure, an additional Azure Load Balancer must be created with a separate IP Address so that it can be reached.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Windows Server 2019, and moving forward, we have added detection during the cluster creation process to look to see if it is being created in Azure.&amp;nbsp; A new parameter has been added to Clustering to help you determine what we have detected.&amp;nbsp; To view it and see the output, the command to run would be:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;Get-Cluster | fl DetectedCloudPlatform&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;DetectedCloudPlatform&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; : Azure&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a side note, if it detects it is on-premises or any other cloud provider, the response will be &lt;FONT color="#0000FF"&gt;None.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If Azure is detected, there are several configurations it will add, and the first is with the Cluster Name.&amp;nbsp; Instead of the traditional Cluster Name and Cluster IP Address, it will now create the Cluster Name as a distributed network name (DNN) automatically.&amp;nbsp; If you have worked with Scale Out File Servers (SOFS), it is the same type of distributed name.&amp;nbsp;&amp;nbsp;A Distributed Network Name is a name in the Cluster that does not use a clustered IP Address.&amp;nbsp; It is a name that is published in DNS using the IP Addresses of all the nodes in the Cluster.&amp;nbsp; Since it uses the IP Addresses of the nodes, a load balancer is not needed.&amp;nbsp; It would look like this from Failover Cluster Manager.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a side note, the automatic creation of the name as a DNN only happens when the machines are in Azure.&amp;nbsp; However, we have added the ability to create it as a DNN on-premises if so desired.&amp;nbsp; When creating the Cluster using Failover Cluster Manager or Windows Admin Center on-premises, it will create it with the traditional name and IP Address.&amp;nbsp; However, using PowerShell, you have a new switch&amp;nbsp;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;–ManagementPointNetworkType&lt;/STRONG&gt;&lt;/FONT&gt; that can be used with &lt;FONT color="#0000FF"&gt;&lt;A href="https://docs.microsoft.com/powershell/module/failoverclusters/new-cluster?view=windowsserver2019-ps" target="_self"&gt;&lt;STRONG&gt;New-Cluster&lt;/STRONG&gt;&lt;/A&gt;&lt;/FONT&gt; that will create it as a DNN.&amp;nbsp;&amp;nbsp;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;–ManagementPointNetworkType&lt;/STRONG&gt;&lt;/FONT&gt; has several values that define the type of name it will be.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;New-Cluster -ManagementPointNetworkType:&lt;EM&gt;x&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#0000FF"&gt;Singleton : &lt;EM&gt;Traditional Cluster Name and Cluster IP Address&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;Distributed : &lt;EM&gt;Create as&amp;nbsp;DNN and use node IP Addresses&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;Automatic :&amp;nbsp;&lt;EM&gt;Detect if on-premises or Azure (default)&lt;/EM&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Moving on, one of the next things we will change is the network communication thresholds.&amp;nbsp; Communication between nodes is crucial in keeping them up and talking to ensure high availability.&amp;nbsp; As a refresher, you have several settings that control the length of wait times and the number of missed heartbeats before we determine a node to be down and remove it from cluster membership.&amp;nbsp; These are those settings:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE class="lia-align-center" style="height: 258px; border-style: solid; width: 80%;" border="1" width="80%"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="84px" class="lia-align-left"&gt;&lt;STRONG&gt;Parameter&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="84px"&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Windows 2019 / Azure Stack HCI&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Default&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="84px"&gt;&lt;STRONG&gt;Maximum&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-left"&gt;SameSubnetDelay&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;1 second&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;2 seconds&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-left"&gt;SameSubnetThreshold&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;10 heartbeats&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;120 heartbeats&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-left"&gt;CrossSubnetDelay&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;1 second&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;4 seconds&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-left"&gt;CrossSubnetThreshold&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;20 heartbeats&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;&amp;nbsp;120 heartbeats&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-left"&gt;CrossSiteDelay&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;1 second&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;4 seconds&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-left"&gt;CrossSiteThreshold&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;20 heartbeats&lt;/TD&gt;
&lt;TD width="33.333333333333336%" height="29px" class="lia-align-center"&gt;&amp;nbsp;120 heartbeats&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;It is important to understand that both the delay and threshold have a cumulative effect on the total health detection.&amp;nbsp; For example, setting &lt;STRONG&gt;SameSubnetDelay&lt;/STRONG&gt; to send a heartbeat every 1 second and setting the&lt;STRONG&gt; SameSubnetThreshold&lt;/STRONG&gt; to 10 missed heartbeats before taking recovery action means that the cluster has a total network tolerance of 10 seconds before recovery action is taken.&amp;nbsp; The higher the numbers, the longer it will take to detect that a node is not responding.&amp;nbsp; In general, continuing to send frequent heartbeats but using greater thresholds is the preferred method.&amp;nbsp; The primary scenario for increasing the Delay is if there are ingress / egress charges for data sent&amp;nbsp;between nodes.&amp;nbsp; When we detect that the cluster is in Azure, we will automatically increase the thresholds to their maximum values.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please refer to the &lt;A href="https://techcommunity.microsoft.com/t5/failover-clustering/tuning-failover-cluster-network-thresholds/ba-p/371834" target="_self"&gt;Tuning Failover Cluster Network Thresholds&lt;/A&gt; blog to change these values.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
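&lt;P&gt;As a quick illustration of what that blog walks through, these are cluster common properties you can set directly (shown here raising the thresholds to the maximums from the table above; pick values that fit your network):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;# Keep 1-second heartbeats but tolerate more missed heartbeats before recovery&lt;BR /&gt;(Get-Cluster).SameSubnetThreshold = 120&lt;BR /&gt;(Get-Cluster).CrossSubnetThreshold = 120&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;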
&lt;P&gt;The last thing I want to talk about is Azure host maintenance.&amp;nbsp; Maintenance on a compute host is something you cannot get around, as patches, driver/firmware updates, etc. need to be done periodically.&amp;nbsp; The same goes for hosts in Azure or any other cloud provider.&amp;nbsp; So what to do with the virtual machines running on those hosts is something that needs to be considered by the Azure administrators.&amp;nbsp; There are basically only a couple of things they can do: leave the VMs where they are, or move them off.&amp;nbsp; The decision to move or stay can simply come down to how long the maintenance is going to take and whether it needs a reboot.&amp;nbsp; No matter how quickly it may be applied, if a reboot is needed, the VMs are going to be moved off.&amp;nbsp; However, if the maintenance being done doesn't need a reboot and is quick, the virtual machine is simply frozen.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a client, you may very well never know anything happened, and that is the goal.&amp;nbsp; But there could be times when you notice it: you cannot connect, you are hung, a cluster node drops out of membership, etc.&amp;nbsp; From a client perspective, there is no way of knowing what happened.&amp;nbsp; You must trust that the administrators have no issues and make the right decisions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;But what if you, as an administrator, received a heads up of impending host maintenance and could make the decision yourself?&amp;nbsp; Well, that leads to the other new feature we added.&amp;nbsp; With Windows Server 2019, we added integration and awareness of Azure host maintenance and improved the experience by monitoring for Azure Scheduled Events.&amp;nbsp; For this to fully work, all clustered VMs must be in the same Azure Availability Zone.&amp;nbsp; When a host has maintenance scheduled, we will now detect it and throw an event into the virtual machine's FailoverClustering/Operational channel.&amp;nbsp; We have also included actions that you can configure based on the event.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;First, let's talk about the events you could see.&amp;nbsp; This is an example of one of those events.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Log: FailoverClustering/Operational&lt;BR /&gt;Level: Warning&lt;BR /&gt;Event ID: 1139&lt;BR /&gt;symbol="NODE_MAINTENANCE_DETECTED”&lt;BR /&gt;Description: The cluster service has detected an Azure host maintenance event has been scheduled. This maintenance event may cause the node hosting the virtual machine to become unavailable during this time.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Node: VMNode1&lt;BR /&gt;Approximate Time: 2021/07/16-17:30:00.000&lt;BR /&gt;Details: ' EventId = 4FE57A76-7754-48FD-9B45-48387A36CD19 &lt;BR /&gt;EventStatus = Scheduled Event&lt;BR /&gt;Type = Freeze Resource&lt;BR /&gt;Type = VirtualMachine&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As you can see, this event was triggered because a host maintenance event has been scheduled.&amp;nbsp; It provides several other things of interest.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;1. The time the event is to occur&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2. What the event will be someone from the Azure Team could look up if a support ticket were raised&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;3. What it will do with the virtual machine&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are actually 3 events you could see.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Event ID 1136:&amp;nbsp; Host maintenance is imminent and about to occur&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Event ID 1139:&amp;nbsp; Host maintenance has been detected&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Event ID 1140:&amp;nbsp; Host maintenance has been rescheduled&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that you have the events, the next thing is to decide if you want to define an action.&amp;nbsp; We have created two new cluster properties: &lt;STRONG&gt;DetectManagedEvents&lt;/STRONG&gt; and&amp;nbsp;&lt;STRONG&gt;DetectManagedEventsThreshold&lt;/STRONG&gt;.&amp;nbsp; &lt;STRONG&gt;DetectManagedEvents&lt;/STRONG&gt; is the action you wish to have occur when a scheduled event is detected.&amp;nbsp; &lt;STRONG&gt;DetectManagedEventsThreshold&lt;/STRONG&gt; is the amount of time before that action is taken.&amp;nbsp; The options for each of these are as follows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;DetectManagedEvents&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;0 = Do not Log Azure Scheduled Events &lt;FONT color="#FF0000"&gt;&lt;EM&gt;&amp;lt;-- default for on-premises&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;1 = Log Azure Scheduled Events &lt;FONT color="#FF0000"&gt;&lt;EM&gt;&amp;lt;-- default in Azure&lt;/EM&gt;&lt;/FONT&gt;&lt;BR /&gt;2 = Avoid Placement (don’t move roles to this node)&lt;BR /&gt;3 = Pause and drain when Scheduled Event is detected&lt;BR /&gt;4 = Pause, drain, and failback when Scheduled Event is detected&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;DetectManagedEventsThreshold&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;60 seconds &lt;EM&gt;&lt;FONT color="#FF0000"&gt;&amp;lt;-- default&lt;/FONT&gt;&lt;/EM&gt;&lt;BR /&gt;Amount of time before taking action&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
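&lt;P&gt;Setting these follows the same pattern as other cluster common properties.&amp;nbsp; A minimal sketch, using values from the lists above:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;# Pause and drain the node when a Scheduled Event is detected, after waiting 120 seconds&lt;BR /&gt;(Get-Cluster).DetectManagedEvents = 3&lt;BR /&gt;(Get-Cluster).DetectManagedEventsThreshold = 120&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;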
&lt;P&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; &lt;EM&gt;These settings only apply when the virtual machine is in Azure.&amp;nbsp; They do not take effect on any other platform (i.e., a third-party cloud provider, Hyper-V, Azure Stack Hub/HCI/Edge, etc.).&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In closing, we recognized that there are some configurations needed when a Failover Cluster is in Azure.&amp;nbsp; By adding these new features, we have taken some of the burden away from you as an administrator by automatically making these changes for you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;Twitter:&amp;nbsp;@Johnmarlin_MSFT&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jul 2021 18:38:13 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/failover-clustering-in-azure/ba-p/2554341</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2021-07-16T18:38:13Z</dc:date>
    </item>
    <item>
      <title>Security Settings for Failover Clustering</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/security-settings-for-failover-clustering/ba-p/2544690</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Security is at the forefront of many administrators' minds, and with Failover Clustering, we made some security improvements in Windows Server 2019 and Azure Stack HCI.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Since the beginning of time, Failover Clustering has always had a dependency on NTLM authentication.&amp;nbsp; As the versions came and went, a little more of this dependency was removed.&amp;nbsp; Now, with Windows Server 2019 Failover Clustering, we have finally removed all of these dependencies.&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;Instead, Kerberos and certificate-based authentication are used exclusively. There are no changes required by the user, or deployment tools, to take advantage of this security enhancement. It also allows failover clusters to be deployed in environments where NTLM has been disabled.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This goes for everything from the bootstrapping of the cluster to the starting of the resources and drives.&amp;nbsp; For the bootstrapping process, an Active Directory domain controller is no longer needed either.&amp;nbsp; As explained in this &lt;A href="https://techcommunity.microsoft.com/t5/failover-clustering/so-what-exactly-is-the-cliusr-account/ba-p/388832" target="_self"&gt;blog&lt;/A&gt;, we have a local user account (CLIUSR) that is used for various things now.&amp;nbsp; In conjunction with this account, as well as the use of certificates:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Cluster Service starts and forms the cluster&lt;/LI&gt;
&lt;LI&gt;Other nodes will join the cluster&lt;/LI&gt;
&lt;LI&gt;Drives (including Cluster Shared Volumes) will come online&lt;/LI&gt;
&lt;LI&gt;Groups and resources start coming online.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This is especially beneficial if you have a virtualized domain controller running on the cluster, preventing the "chicken or the egg" scenario.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another security concern that administrators have is what is out on the wire.&amp;nbsp; There are a couple of security settings to consider with regards to communications between the nodes and storage.&amp;nbsp; From a storage perspective, there is Cluster Shared Volume (CSV) traffic for any redirected data and Storage Bus Layer (SBL) traffic, if using Storage Spaces Direct.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's first talk about cluster communications.&amp;nbsp; Cluster communications could contain any number of things, and what an admin would like is to prevent anything on the network from picking it up.&amp;nbsp; As a default, all communication between the nodes is sent signed, making use of certificates.&amp;nbsp; This may be fine when all the cluster nodes reside in the same rack.&amp;nbsp; However, when nodes are separated into different racks or locations, an admin may wish to have a little more security and make use of encryption.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This setting is controlled by the Cluster property &lt;STRONG&gt;SecurityLevel&lt;/STRONG&gt; and has three different levels.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;0 = Clear Text&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;1 = Signed &lt;EM&gt;(default)&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2 = Encrypted&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If the desire is to change this to encrypted communications, the command to run would be:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;(Get-Cluster).SecurityLevel = 2&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The other bit of communication between the nodes is with the storage.&amp;nbsp; Cluster Shared Volumes (CSV) have traffic on the wire, and if using Storage Spaces Direct, you also have the Storage Bus Layer (SBL) traffic.&amp;nbsp; For these bits of traffic, the default is to send everything in clear text.&amp;nbsp; Admins may decide they wish to secure this type of data traffic to lock it down and prevent sniffer traces from picking anything up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This setting is controlled by the Cluster property &lt;STRONG&gt;SecurityLevelToStorage&lt;/STRONG&gt; and has three different levels.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;0 = Clear Text &lt;EM&gt;(default)&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;1 = Both CSV and SBL traffic are signed&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;2 = Both CSV and SBL traffic are encrypted&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If the desire is to change this to encrypted communications, the command to run would be:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;(Get-Cluster).SecurityLevelToStorage = 2&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;One caveat to the &lt;STRONG&gt;SecurityLevel&lt;/STRONG&gt; and &lt;STRONG&gt;SecurityLevelToStorage&lt;/STRONG&gt; that must be taken into consideration.&amp;nbsp; These forms of communication are using SMB.&amp;nbsp; When using a form of encryption on the network with SMB, RDMA is not used.&amp;nbsp; Therefore, if you are using this on RDMA network cards, RDMA is not used and can cause a performance impact.&amp;nbsp; Microsoft is aware of this impact and working on correcting this for a later version.&amp;nbsp; For more information on this, please refer to the following document.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;Reduced networking performance after you enable SMB Encryption or SMB Signing in Windows Server 2016&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;&lt;A href="https://docs.microsoft.com/en-us/troubleshoot/windows-server/networking/reduced-performance-after-smb-encryption-signing" target="_blank" rel="noopener"&gt;Reduced performance after SMB Encryption or SMB Signing is enabled - Windows Server | Microsoft Docs&lt;/A&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;Thanks&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;John Marlin&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;Senior Program Manager&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;Twitter:&amp;nbsp;@Johnmarlin_MSFT&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 06 Jun 2022 23:19:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/security-settings-for-failover-clustering/ba-p/2544690</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2022-06-06T23:19:48Z</dc:date>
    </item>
    <item>
      <title>Failover Clustering Networking Basics and Fundamentals</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/failover-clustering-networking-basics-and-fundamentals/ba-p/1706005</link>
      <description>&lt;P&gt;My name is &lt;A href="https://twitter.com/JohnMarlin_MSFT" target="_blank" rel="noopener"&gt;John Marlin&lt;/A&gt; and I am with the High Availability and Storage Team.&amp;nbsp; With newer versions of Windows Server and Azure Stack HCI on the horizon, it’s time to head to the archives and dust off some old information as they are in need of updating.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, I want to talk about Failover Clustering and Networking. Networking is a fundamental key with Failover Clustering that sometimes is overlooked but can be the difference in success or failure. In this blog, I will be hitting on all facets from the basics, tweaks, multi-site/stretch, and Storage Spaces Direct.&amp;nbsp; By no means should this be taken as a “this is a networking requirement” blog.&amp;nbsp; Treat this as more of general guidance with some recommendations and things to consider.&amp;nbsp; Specific requirements for any of our operating systems (new or old) will be a part of the documentation (&lt;A href="https://docs.microsoft.com" target="_blank" rel="noopener"&gt;https://docs.microsoft.com&lt;/A&gt;) of the particular OS.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Failover Clustering, all networking aspects are provided by our Network Fault Tolerant (NetFT) adapter. Our NetFT adapter is a virtual adapter that is created when the Cluster is created. There is no configuration necessary as it is self-configuring. When it is created, it will create its MAC Address based off a hash of the MAC Address of the first physical network card. It does have conflict detection and resolution built in. For the IP Address scheme, it will create for itself an APIPA IPv4 (169.254.*) and IPv6 (fe80::*) address for communication.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Connection-specific DNS Suffix&amp;nbsp; . :&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Description . . . . . . . . . . . : Microsoft Failover Cluster Virtual Adapter&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Physical Address. . . . . . . . . : 02-B8-FA-7F-A5-F3&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;DHCP Enabled. . . . . . . . . . . : No&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Autoconfiguration Enabled . . . . : Yes&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Link-local IPv6 Address . . . . . : fe80::80ac:e638:2e8d:9c09%4(Preferred)&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;IPv4 Address. . . . . . . . . . . : 169.254.1.143(Preferred)&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Subnet Mask . . . . . . . . . . . : 255.255.0.0&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;Default Gateway . . . . . . . . . :&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;DHCPv6 IAID . . . . . . . . . . . : 67287290&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-26-6B-52-A5-00-15-5D-31-8E-86&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="2"&gt;NetBIOS over Tcpip. . . . . . . . : Enabled&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The NetFT adapter provides the communications between all nodes in the cluster from the Cluster Service. To do this, it discovers multiple communication paths between nodes and if the routes are on the same subnet or cross subnet. The way it does this is through “heartbeats” through all network adapters for Cluster use to all other nodes. Heartbeats basically serve multiple purposes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Is this a viable route between the nodes?&lt;/LI&gt;
&lt;LI&gt;Is this route currently up?&lt;/LI&gt;
&lt;LI&gt;Is the node being connected to up?&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There is more to heartbeats, but will defer to my other blog &lt;A href="https://techcommunity.microsoft.com/t5/failover-clustering/no-such-thing-as-a-heartbeat-network/ba-p/388121" target="_blank" rel="noopener"&gt;No Such Thing as a Heartbeat Network&lt;/A&gt; for more details on it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For Cluster communication and heartbeats, there are several considerations that must be taken into account.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Traffic uses port 3343. Ensure any firewall rules have this port open for both TCP and UDP (a sample rule sketch follows this list)&lt;/LI&gt;
&lt;LI&gt;Most Cluster traffic is lightweight.&lt;/LI&gt;
&lt;LI&gt;Communication is sensitive to latency and packet loss. Latency delays could mean performance issues, including removal of nodes from membership.&lt;/LI&gt;
&lt;LI&gt;Bandwidth is not as important as quality of service.&lt;/LI&gt;
&lt;/OL&gt;
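&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For item 1, installing the Failover Clustering feature normally creates the needed firewall rules for you, but if you ever have to add them by hand, a sketch would look like this (the rule names are made up):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#0000FF"&gt;# Allow inbound cluster communication/heartbeat traffic on port 3343&lt;BR /&gt;New-NetFirewallRule -DisplayName "Cluster Port 3343 (UDP-In)" -Direction Inbound -Protocol UDP -LocalPort 3343 -Action Allow&lt;BR /&gt;New-NetFirewallRule -DisplayName "Cluster Port 3343 (TCP-In)" -Direction Inbound -Protocol TCP -LocalPort 3343 -Action Allow&lt;/FONT&gt;&lt;/P&gt;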
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cluster communication between nodes is crucial so that all nodes stay in sync. Cluster communication is constantly going on as things progress. The NetFT adapter will dynamically switch intra-cluster traffic to another available Cluster network if one goes down or isn't responding.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The communications from the Cluster Service to other nodes through the NetFT adapter looks like this.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Cluster Service plumbs network routes over NIC1, NIC2 on NetFT&lt;/LI&gt;
&lt;LI&gt;Cluster Service establishes TCP connection over NetFT adapter using the private NetFT IP address (source port 3343)&lt;/LI&gt;
&lt;LI&gt;NetFT wraps the TCP connection inside of a UDP packet (source port 3343)&lt;/LI&gt;
&lt;LI&gt;NetFT sends this UDP packet over one of the cluster-enabled physical NIC adapters to the destination node targeted for destination node’s NetFT adapter&lt;/LI&gt;
&lt;LI&gt;Destination node’s NetFT adapter receives the UDP packet and then sends the TCP connection to the destination node’s Cluster Service&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Heartbeats are always traversing all Cluster enabled adapters and networks. However, Cluster communication will only go through one network at a time. The network it will use is determined by the role of the network and the priority (metric).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There are three roles a Cluster has for networks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Disabled for Cluster Communications&lt;/STRONG&gt; – Role 0 - This is a network that Cluster will not use for anything.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Enabled for Cluster Communication only&lt;/STRONG&gt; – Role 1 – Internal Cluster Communication and Cluster Shared Volume traffic (more later) are using this type network as a priority.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Enabled for client and cluster communication&lt;/STRONG&gt; – Role 3 – This network is used for all client access and Cluster communications. Items like talking to a domain controller, DNS, DHCP (if enabled) when Network Names and IP Addresses come online. Cluster communication and Cluster Shared Volume traffic could use this network if all Role 1 networks are down.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
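&lt;P&gt;The role is settable per network. A minimal sketch using the role values above (the network name is whatever yours is called):&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# Dedicate "Cluster Network 1" to cluster communication only (Role 1)&lt;BR /&gt;(Get-ClusterNetwork "Cluster Network 1").Role = 1&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;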
&lt;P&gt;Based on the roles, the NetFT adapter will create metrics for priority. The metric Failover Cluster uses is not the same as the network card metrics that TCP/IP assigns. Networks are given a “cost” (Metric) to define priority. A lower metric value means a higher priority while a higher metric value means a lower priority.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;These metrics are automatically configured based on Cluster network role setting.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Cluster Network Role of 1 = 40,000 starting value&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Cluster Network Role of 3 = 80,000 starting value&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Things such as link speed, RDMA, and RSS capabilities will reduce the metric value. For example, let's say I have two networks in my Cluster, one set to Cluster communications only and one for both Cluster/Client. I can run the following to see the metrics.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3" color="#3366FF"&gt;PS &amp;gt; Get-ClusterNetwork | ft Name, Metric&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3" color="#3366FF"&gt;Name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;Metric&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3" color="#3366FF"&gt;----&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;------&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3" color="#3366FF"&gt;Cluster Network 1&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;70240&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT size="3" color="#3366FF"&gt;Cluster Network 2&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;30240&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The NetFT adapter is also capable of taking advantage of SMB Multichannel and load balancing across the networks. For NetFT to take advantage of it, the metrics need to be &amp;lt; 16 metric values apart. In the example above, SMB Multichannel would not be used. But if there were additional cards in the machines and it looked like this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Get-ClusterNetwork | ft Name, Metric&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;Name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;Metric&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;----&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;------&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;Cluster Network 1&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;70240&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;Cluster Network 2&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;30240&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;Cluster Network 3&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 30241&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;Cluster Network 4&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 30245&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;FONT color="#3366FF"&gt;Cluster Network 5&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 30265&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In a configuration such as this, SMB Multichannel would be used over Cluster Networks 2, 3 and 4. From a Cluster communication and heartbeat standpoint, multichannel really isn’t a big deal. However, when a Cluster is using Cluster Shared Volumes or is a Storage Spaces Direct Cluster, storage traffic is going to need higher bandwidth. SMB Multichannel would fit nicely here so an additional network card or higher speed network cards are certainly a consideration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the beginning of the blog, I mentioned latency and packet loss. If heartbeats cannot get through in a timely fashion, node removals can happen. Heartbeats can be tuned in the case of higher latency networks. The following are default settings for tuning the Cluster networks.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE class=" lia-indent-margin-left-30px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Parameter&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Windows 2012 R2&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Windows 2016&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;&lt;STRONG&gt;Windows 2019&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;SameSubnetDelay&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;SameSubnetThreshold&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;5 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;10 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;20 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;CrossSubnetDelay&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;CrossSubnetThreshold&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;5 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;10 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;20 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;CrossSiteDelay&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;1 second&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="156"&gt;
&lt;P&gt;CrossSiteThreshold&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;20 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="156"&gt;
&lt;P&gt;20 heartbeats&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information on these settings, please refer to the &lt;A href="https://techcommunity.microsoft.com/t5/failover-clustering/tuning-failover-cluster-network-thresholds/ba-p/371834" target="_blank" rel="noopener"&gt;Tuning Failover Cluster Network Thresholds&lt;/A&gt; blog.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Planning networks for Failover Clustering is dependent on how it will be used. Let’s take a look at some of the common network traffics a Cluster would have.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this were a Hyper-V Cluster running virtual machines and Cluster Shared Volumes, Live Migration is going to occur.&amp;nbsp; Clients are also connecting to the virtual machines.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Cluster communications and heartbeating will always be on the wire.&amp;nbsp; If you are using Cluster Shared Volumes (CSV), there will also be some redirection traffic.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this were a Cluster that used iSCSI for its storage, you would have that traffic as an additional network.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this were stretched (nodes in multiple sites), you may need an additional network to account for replication traffic (such as Storage Replica).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this is a Storage Spaces Direct Cluster, additional Storage Bus Layer (SBL) traffic needs to be considered.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As you can see, there are various network traffic requirements depending on the type of Cluster and the roles running. Obviously, a dedicated network or network card for each is not always possible.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We do have a blog that will help with the Live Migration traffic to get some of the traffic isolated or limited in the bandwidth it uses. The blog &lt;A href="https://techcommunity.microsoft.com/t5/failover-clustering/optimizing-hyper-v-live-migrations-on-an-hyperconverged/ba-p/396609" target="_blank" rel="noopener"&gt;Optimizing Hyper-V Live Migrations on an Hyperconverged Infrastructure&lt;/A&gt; goes over some tips for setting it up.&lt;/P&gt;
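&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As one hedged example of the kind of tip covered there: if Live Migration is configured to use SMB as its transport, the in-box SMB bandwidth limit can cap it. The 750MB value below is purely illustrative, and the FS-SMBBW feature must be installed first.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# One-time: install the SMB bandwidth limit feature&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Install-WindowsFeature FS-SMBBW&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# Cap SMB-based Live Migration traffic (example value)&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB&lt;/FONT&gt;&lt;/P&gt;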
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The last thing I wanted to talk about is stretch/multisite Failover Clusters. I have already mentioned the Cluster-specific networking considerations, but now I want to talk about how the virtual machines react in this type of environment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let’s say we have two datacenters and a four-node Failover Cluster with 2 nodes in each datacenter. As with most datacenters, they are in their own subnet and would be similar to this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;The first thing you want to consider is whether you want security between the cluster nodes on the wire. By default, all Cluster communication is signed. That may be fine for some, but others wish to have an extra level of security. We can set the Cluster to encrypt all traffic between the nodes; it is simply a PowerShell command to change. Once you change it, the Cluster as a whole needs to be restarted.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; (Get-Cluster).SecurityLevel = 2&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;0 = Clear Text&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;1 = Signed (default)&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;2 = Encrypt (slight performance decrease)&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a virtual machine (VM1) that has an IP Address on the 1.0.0.0/8 network, and clients are connecting to it. If the virtual machine moves over to Site2, which is a different network (172.0.0.0/16), there will not be any connectivity as it stands.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To get around this, there are basically a couple of options.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To prevent the virtual machine from moving as part of a Cluster-initiated move (i.e. drain, node shutdown, etc.), consider using &lt;A href="https://docs.microsoft.com/en-us/windows-server/failover-clustering/fault-domains" target="_blank" rel="noopener"&gt;sites&lt;/A&gt;. When you create sites, the Cluster has site awareness. This means that any Cluster-initiated move will always keep resources in the same site. Setting a preferred site will also keep them in the same site. If the virtual machine were to ever move to the second site, it would be due to a user-initiated move (i.e. Move-ClusterGroup, etc.) or a site failure.&lt;/P&gt;
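&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a rough sketch of configuring site awareness with the fault domain cmdlets introduced in Windows Server 2016 (the site and node names here are examples):&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# Create site fault domains and place nodes in them (example names)&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; New-ClusterFaultDomain -Name "Site1" -FaultDomainType Site&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; New-ClusterFaultDomain -Name "Site2" -FaultDomainType Site&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Set-ClusterFaultDomain -Name NODE1 -Parent "Site1"&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Set-ClusterFaultDomain -Name NODE3 -Parent "Site2"&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# Optionally keep Cluster-initiated placement in one site&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; (Get-Cluster).PreferredSite = "Site1"&lt;/FONT&gt;&lt;/P&gt;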
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;But you still have the IP Address of the virtual machine issue to deal with. During a migration of the virtual machine, one of the very last things is to register the name and IP Address with DNS. If you are using a static IP Address for the virtual machine, a script would need to be manually run to change the IP Address to the local site it is on. If you are using DHCP, with DHCP servers in each site, the virtual machine will obtain a new address for the local site and register it. You then have to deal with DNS replication and TTL records a client may have. Instead of waiting for the timeout periods, a forced replication and TTL clearing on the client side would allow them to connect again.&lt;/P&gt;
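&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For illustration, that forced refresh would look something like this (run inside the virtual machine and on the client, respectively):&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# Inside the VM: re-register its (new) address in DNS&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; ipconfig /registerdns&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# On the client: clear the cached record so it resolves the new one&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; ipconfig /flushdns&lt;/FONT&gt;&lt;/P&gt;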
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you do not wish to go that route, a virtual LAN (VLAN) could be set up across the routers/switches to be a single IP Address scheme. Doing this removes the need to change the IP Address of the virtual machine, as it will always remain the same. However, stretching a VLAN (not a recommendation by Microsoft) is not always easy to do, and the Networking Group within your company may not want to do this for various reasons.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Another consideration is implementing a network device that presents a third IP Address for clients to connect to. The device tracks the actual IP Address of the virtual machine and routes clients appropriately. For example:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;SPAN style="font-family: inherit;"&gt;In the above example, we have a network device that presents the virtual machine to clients as 30.0.30.1. It will register this address with all DNS servers and will keep the same IP Address no matter which site the virtual machine is on. Your Networking Group would need to be involved with this and would need to control it. Whether they are willing to do this, and whether it can even be done within your network, is something to also consider.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We talked about virtual machines, but what about other resources, say, a file server?&amp;nbsp; Unlike virtual machine roles, roles such as a file server have a Network Name and IP Address resource in the Cluster. In Windows 2008 Failover Cluster, we added the concept of “or” dependencies. Meaning, we can depend on this "or" that.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN style="font-family: inherit;"&gt;In the case of the scenario above, your Network Name could be dependent on 1.0.0.50 “or” 172.0.0.50. As long as one of the IP Address resources is online, the name is online, and that is what is published in DNS. To go a step further for the stretch scenario, we have two parameters that can be used.&lt;/SPAN&gt;&lt;/P&gt;
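&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Before getting to those parameters, here is a hedged sketch of setting an “or” dependency from PowerShell (the IP Address resource names below are assumed from typical defaults):&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;# Make the Network Name depend on either IP Address resource&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Set-ClusterResourceDependency -Resource FSNetworkName -Dependency "[IP Address 1.0.0.50] or [IP Address 172.0.0.50]"&lt;/FONT&gt;&lt;/P&gt;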
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;RegisterAllProvidersIP&lt;/STRONG&gt;: (default = 0 for FALSE)&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Determines if all IP Addresses for a Network Name will be registered by DNS&lt;/LI&gt;
&lt;LI&gt;TRUE (1): IP Addresses can be online or offline and will still be registered&lt;/LI&gt;
&lt;LI&gt;Ensure application is set to try all IP Addresses, so clients can connect quicker&lt;/LI&gt;
&lt;LI&gt;Not supported by all applications, check with application vendor&lt;/LI&gt;
&lt;LI&gt;Supported by SQL Server starting with SQL Server 2012&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;HostRecordTTL&lt;/STRONG&gt;: (default = 1200 seconds)&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Controls time the DNS record lives on client for a cluster network name&lt;/LI&gt;
&lt;LI&gt;Shorter TTL: DNS records for clients updated sooner&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;Disclaimer: This does not speed up DNS replication&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By manipulating these parameters, you will have quicker connection times for clients. For example, say I want to register all of the IP Addresses with DNS, but I want the TTL to be 5 minutes (300 seconds). I would run these commands:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Get-ClusterResource FSNetworkName | Set-ClusterParameter RegisterAllProvidersIP 1&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Get-ClusterResource FSNetworkName | Set-ClusterParameter HostRecordTTL 300&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;After setting the parameters, the resources need to be recycled (taken offline and brought back online) for the change to take effect.&lt;/P&gt;
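&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For example, again using the example resource name from above (note that resources depending on the name, such as the file server itself, will briefly go offline with it):&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Stop-ClusterResource FSNetworkName&lt;/FONT&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;FONT color="#3366FF"&gt;PS &amp;gt; Start-ClusterResource FSNetworkName&lt;/FONT&gt;&lt;/P&gt;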
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There is more I could go into here with this subject, but I need to sign off for now. I hope this gives you some basics to consider when designing your Clusters while thinking of the networking aspects. Networking designs and considerations must be carefully thought out.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Happy Clustering !!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;High Availability and Storage&lt;/P&gt;
&lt;P&gt;Follow me on Twitter: &lt;A href="https://twitter.com/JohnMarlin_MSFT" target="_blank" rel="noopener"&gt;@johnmarlin_msft&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 24 Sep 2020 01:19:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/failover-clustering-networking-basics-and-fundamentals/ba-p/1706005</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2020-09-24T01:19:20Z</dc:date>
    </item>
    <item>
      <title>Disaster Recovery in the next version of Azure Stack HCI</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/disaster-recovery-in-the-next-version-of-azure-stack-hci/ba-p/1027898</link>
      <description>&lt;P&gt;Disaster can hit at any time.&amp;nbsp; When thinking about disaster and recovery, I think of 3 things&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Be prepared&lt;/LI&gt;
&lt;LI&gt;Plan on not involving humans&lt;/LI&gt;
&lt;LI&gt;Automatic, not automated&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Having a good strategy is a must.&amp;nbsp; You want to be able to have resources automatically move out of one datacenter to the other and not have to rely on someone to "flip the switch" to get things to move.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As announced at &lt;A href="https://www.microsoft.com/ignite" target="_blank" rel="noopener"&gt;Microsoft Ignite 2019&lt;/A&gt;, we are now going to be able to stretch Azure Stack HCI systems between multiple sites in the next version for disaster recovery purposes.&amp;nbsp; This blog will give you some insight into how you can set it up along with videos showing how it works.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;To set this up, the configuration I am using is basic and common to how many networks are configured.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As you can see from the above, I have:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Two sites (Seattle and Redmond)&lt;/LI&gt;
&lt;LI&gt;Two nodes in each site&lt;/LI&gt;
&lt;LI&gt;A domain controller in each site&lt;/LI&gt;
&lt;LI&gt;Different IP Address schemes at each site&lt;/LI&gt;
&lt;LI&gt;Each site goes through a router to connect to the other site&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When putting this scenario together, we considered multiple things.&amp;nbsp; One of the main considerations is ease of configuration.&amp;nbsp; In the past, setting up a Failover Cluster in a stretched environment could result in some inadvertent misconfigurations.&amp;nbsp; We wanted to ensure, where we could, that misconfigurations are averted.&amp;nbsp; We will be using Storage Replica as our replication method between the sites.&amp;nbsp; Everything you need from a software perspective for this scenario is in-box.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;One of the first things we looked at was the sites themselves.&amp;nbsp; What we are doing is detecting if nodes are in different sites when Failover Clustering is first created.&amp;nbsp; We do this with two different methods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Sites are configured in Active Directory&lt;/LI&gt;
&lt;LI&gt;Nodes being added have different IP Address schemes&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If we see sites are configured in Active Directory, we will create a site fault domain with the name of the site and add nodes to this fault domain.&amp;nbsp; If sites are not configured but the nodes are in different IP Address schemes, we will create a site fault domain with the IP Address scheme as the name with the nodes.&amp;nbsp; So taking the above configuration:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sites configured in Active Directory would be named &lt;STRONG&gt;SEATTLE&lt;/STRONG&gt; and &lt;STRONG&gt;REDMOND&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;No sites configured in Active Directory, but different IP Address schemes would be named &lt;STRONG&gt;1.0.0.0/8&lt;/STRONG&gt; and &lt;STRONG&gt;172.0.0.0/16&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
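&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to verify what was detected, you can list the fault domains once the cluster is created:&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;# Shows each node and the site fault domain it was placed in&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;Get-ClusterFaultDomain&lt;/FONT&gt;&lt;/P&gt;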
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this first video, you can see that Active Directory Sites and Services has sites configured and that the site fault domains are created for you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/HQr-p1k01BQ" width="987" height="554" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For more information on how to set up sites within Active Directory, please refer to the following blog:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Step-by-Step: Setting Up Active Directory Sites, Subnets, and Site-Links&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://blogs.technet.microsoft.com/canitpro/2015/03/03/step-by-step-setting-up-active-directory-sites-subnets-site-links/" target="_blank" rel="noopener"&gt;&lt;EM&gt;https://blogs.technet.microsoft.com/canitpro/2015/03/03/step-by-step-setting-up-active-directory-sites-subnets-site-links/&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The next item we took on to ease configuration burdens is when Storage Spaces Direct is enabled.&amp;nbsp; In Windows Server 2016 and 2019 Storage Spaces Direct, we supported one storage pool.&amp;nbsp; In the stretch scenario with the next version, we are now supporting a pool per site.&amp;nbsp; When Storage Spaces Direct is enabled, we are going to automatically create these pools and name them with the site they are created in.&amp;nbsp; For example:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Pool for site SEATTLE&lt;/STRONG&gt; and &lt;STRONG&gt;Pool for site REDMOND&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Pool for site 1.0.0.0/8&lt;/STRONG&gt; and &lt;STRONG&gt;Pool for site 172.0.0.0/16&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
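&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A quick way to see the per-site pools once they exist (the primordial pool is filtered out):&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, Size&lt;/FONT&gt;&lt;/P&gt;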
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The other item we are detecting for is the presence of the Storage Replica feature.&amp;nbsp; We will go out to each node to detect if Storage Replica has been installed on each of the nodes specified.&amp;nbsp; If it is missing, we will stop and let you know which node it is missing from.&lt;/P&gt;
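&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to check (and install) the feature yourself ahead of time, a minimal sketch per node would be:&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;# Check whether Storage Replica is installed on a given node&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;Get-WindowsFeature -Name Storage-Replica -ComputerName NODE3&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;# Install it if missing (a restart is required afterward)&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;Install-WindowsFeature -Name Storage-Replica -ComputerName NODE3 -IncludeManagementTools -Restart&lt;/FONT&gt;&lt;/P&gt;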
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/W3V0YnHgetc" width="987" height="554" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an FYI, keep in mind that this is a pre-released product at the time of this blog creation.&amp;nbsp; The stopping of the Storage Spaces Direct enablement is subject to change to more of a warning at a later date.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once everything is in place and all services are present, you can see from this video, the successful enablement of Storage Spaces Direct and the separate pools.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/JYM8aG57HNs" width="987" height="526" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To point out here: in order to get this to work, we did a lot of work with the Health Service.&amp;nbsp; The Health Service keeps track of the health of the entire cluster (resources, nodes, drives, pools, etc).&amp;nbsp; With multiple pools and multiple sites, it needs to keep track of all of this.&amp;nbsp; I will go more into this a bit later and show you how it works in a few other scenarios with this configuration.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now that we have pools, we need to create virtual disks at the sites.&amp;nbsp; Since this is a stretch and we want replication with Storage Replica, two disks need to be created at each site (one for data and one for logs).&amp;nbsp; I use New-Volume to create them, but there is a caveat.&amp;nbsp; When the virtual disk is created, Failover Clustering will auto add them into the Available Storage Group.&amp;nbsp; Therefore, you need to ensure Available Storage is on the node you are creating the disks on.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;NOTE: We are aware of this caveat and will be working on getting a better story for this as we go along.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the examples below, I will use:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;site names of &lt;STRONG&gt;SEATTLE&lt;/STRONG&gt; and &lt;STRONG&gt;REDMOND&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;pool names of &lt;STRONG&gt;Pool for site SEATTLE&lt;/STRONG&gt; and &lt;STRONG&gt;Pool for site REDMOND&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;NODE1 and NODE2 are in site &lt;STRONG&gt;SEATTLE&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;NODE3 and NODE4 are in site &lt;STRONG&gt;REDMOND&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So, let's create the disks.&amp;nbsp; The first thing is to ensure the Available Storage group is on NODE1 in SEATTLE so we can create those disks.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From Failover Cluster Manager:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Go under &lt;STRONG&gt;Storage&lt;/STRONG&gt; to &lt;STRONG&gt;Disks&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;In the far-right pane, click &lt;STRONG&gt;Move Available Storage&lt;/STRONG&gt; and &lt;STRONG&gt;Select Node&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Select &lt;STRONG&gt;NODE1&lt;/STRONG&gt; and &lt;STRONG&gt;OK&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From PowerShell:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;Move-ClusterGroup -Name "Available Storage" -Node NODE1&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now we can create the two disks for site SEATTLE.&amp;nbsp; I will be creating a 2-way mirror.&amp;nbsp; From PowerShell, you can run the commands:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;New-Volume -FriendlyName DATA_SEATTLE -StoragePoolFriendlyName "Pool for Site SEATTLE" -FileSystem ReFS -NumberOfDataCopies 2 -ProvisioningType Fixed -ResiliencySettingName Mirror -Size 125GB&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;New-Volume -FriendlyName LOG_SEATTLE -StoragePoolFriendlyName "Pool for Site SEATTLE" -FileSystem ReFS -NumberOfDataCopies 2 -ProvisioningType Fixed -ResiliencySettingName Mirror -Size 10GB&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, we must create the same drives on the other pool.&amp;nbsp; Before doing so, take the current disks offline and move the Available Storage group to NODE3 in site REDMOND.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;Move-ClusterGroup -Name "Available Storage" -Node NODE1&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once there, the commands would be:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;New-Volume -FriendlyName DATA_REDMOND -StoragePoolFriendlyName "Pool for Site REDMOND" -FileSystem ReFS -NumberOfDataCopies 2 -ProvisioningType Fixed -ResiliencySettingName Mirror -Size 125GB&lt;/FONT&gt;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;FONT color="#3366FF"&gt;New-Volume -FriendlyName LOG_REDMOND -StoragePoolFriendlyName "Pool for Site REDMOND" -FileSystem ReFS -NumberOfDataCopies 2 -ProvisioningType Fixed -ResiliencySettingName Mirror -Size 10GB&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We now have all the disks we want in the cluster.&amp;nbsp; You can then move the Available Storage back to NODE1 and copy your data onto it if you have any already.&amp;nbsp; The next thing is to set the disks up with Storage Replica.&amp;nbsp; I will not go through the steps here, but here is the link to the document on setting it up in the same cluster, as well as another video showing it being set up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="padding-left: 30px;"&gt;&lt;EM&gt;&lt;A href="https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/stretch-cluster-replication-using-shared-storage" target="_blank" rel="noopener"&gt;https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/stretch-cluster-replication-using-shared-storage&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/mIfICmuVyWM" width="987" height="526" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Everything is all set and resources can be created.&amp;nbsp; If you have not seen what it looks like, here is a video of a site failure.&amp;nbsp; Everything will move over and run.&amp;nbsp; Storage Replica takes care of ensuring the right disks are utilized and halting replication until the site comes back.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;TIP:&amp;nbsp; If you watch the tail end of the video, you will notice that the virtual machine automatically live migrates with the drives.&amp;nbsp; We added this functionality back in Windows 2016, where the VM will chase the CSV so they are not on different sites.&amp;nbsp; Again, this is not another extra you have to set up; it is just something we did to ease administration burdens for you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/I4KWIMFm33g" width="987" height="551" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I talked earlier about all the work we did with the Health Service.&amp;nbsp; We also did a lot of work with the way we autopool drives together by site.&amp;nbsp; The next couple of videos will help show this.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This first video is more of the brownfield scenario.&amp;nbsp; Meaning, I have an existing cluster running and want to add nodes from a separate site.&amp;nbsp; In the video, you will see that we add the nodes from the different site into the cluster and create a pool from the drives in that site.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/E8ZW529Bqss" width="987" height="551" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this video, I will show how we react to disk failures.&amp;nbsp; I have removed a disk from each pool to simulate a failure.&amp;nbsp; I then replace the disks with new ones.&amp;nbsp; We detect which drive is in which site and pool them into the proper pool.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/oJLp78ooO68" width="987" height="551" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope this gives you a glimpse into what we are doing with the next version of Azure Stack HCI.&amp;nbsp; This scenario should be in the public preview build when it becomes available.&amp;nbsp; Get a preview of what is coming and help us test and stabilize things.&amp;nbsp; You can also suggest how it could be better.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;High Availability and Storage Team&lt;/P&gt;
&lt;P&gt;Twitter: @johnmarlin_msft&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jul 2020 20:22:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/disaster-recovery-in-the-next-version-of-azure-stack-hci/ba-p/1027898</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2020-07-30T20:22:30Z</dc:date>
    </item>
    <item>
      <title>Talking Failover Clustering and Azure Stack HCI Ignite 2019 Announcements</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/talking-failover-clustering-and-azure-stack-hci-ignite-2019/ba-p/1015497</link>
      <description>&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Microsoft Ignite 2019 at the Orange County Convention Center in Orlando, Florida was a huge success with approximately 26,000 in attendance. We had a great time meeting and talking with our partners and customers. We also had a few announcements regarding features coming in the next Windows Server LTSC regarding Azure Stack HCI and Failover Clustering that I wanted to highlight.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;In this blog, I will try to separate out each of the features announced. In some of the sessions, there were multiple announcements. Since this is the case, I want to make sure you are aware of each announcement. Due to this, a particular session may be listed in multiple sections.&amp;nbsp; I will also try to point out what things apply only to Windows Server vNext and what would also include using on Windows Server 2016/2019.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Ensure you go through this entire blog as there are numerous announcements including one of the biggest asks from our customers at the end.&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;I will first start out with this one.&amp;nbsp; It doesn't apply necessarily to Failover Clustering or Azure Stack HCI, but it is an important one as the date is right around the corner.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Windows 2008/2008R2 End of Life&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Windows Server 2008/2008R2 are both reaching end of life, these two sessions talk about options of planning for this day.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Plan for Z-Day 2020: Windows Server 2008 end of support is coming&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/82850" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/82850&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;It's 2019 and your servers are 2008&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/89294" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/89294&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Quick list&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;For a quick list of announcements being made, you should catch this session listing 45 things in 45 minutes.&amp;nbsp; This will give you a brief overview of things so that you can delve a little deeper into the other session announcements below.&amp;nbsp; So if there is only one session you review from this list, this is the one.&amp;nbsp; These will cover items for both Windows Server 2016/2019 as well as Windows Server vNext.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;What's new for Azure Stack HCI: 45 things in 45 minutes&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/82905" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/82905&lt;/A&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Azure Stack is now one portfolio&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;We have expanded Azure Stack into a PORTFOLIO of products, including Azure Stack Edge, Azure Stack HCI, and Azure Stack Hub.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Discover Azure Stack HCI&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/82907" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/82907&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Get started with Azure Stack HCI&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/89352" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/89352&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Azure Iaas VM Guest Cluster support for Premium File Shares and Shared Azure Disk&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;This is a preview of file shares that act like shared disks for your Azure IaaS Failover Clusters. This is not Storage Spaces Direct, this is traditional Failover Clusters for SQL Server. If you do not wish to run Storage Spaces Direct, but would rather run a traditional Failover Cluster, Shared Azure Disk is now in limited preview for your shared storage.&amp;nbsp; This one is also for all versions currently released as well as Windows Server vNext.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Server on Azure Overview: Lift-and-Shift migrations for Enterprise Workloads&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81956" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81956&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Windows Admin Center setup wizard for Storage Spaces Direct and Software Defined Networking&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;That's right. With the latest release of Windows Admin Center, you now have a walk through of creating an Azure Stack HCI system as well as SDN. PowerShell not needed !!&amp;nbsp; Note: what is in black is there now while items greyed out are coming.&amp;nbsp; This one is not limited to only vNext, but also works with currently released Windows Server 2016/2019.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Jumpstart your Azure Stack HCI Deployment&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/82906" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/82906&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Get started with Azure Stack HCI&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/89352" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/89352&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Using Azure Services to manage and monitor on-premises clusters&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;See how various Azure Services can be used from the cloud to your on-premises clusters all using Windows Admin Center. The "hybrid" way of doing things now.&amp;nbsp; When talking about using these services, you can use them against your current Windows 2016/2019 HCI and traditional clusters as well as Windows Server vNext.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Server: What's new and what's next&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81704" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81704&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Modernize your retail stores or branch offices with Azure Stack HCI&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/82904" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/82904&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Clustering in the age of HCI and Hybrid&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/83946" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/83946&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Shut down safeguard&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Say you are needing to reboot your Azure Stack HCI systems. You first reboot your first node and bring it back up. Once it comes back up, it needs to go through a re-sync of the data. However, if you were to then reboot the second node before it finishes, bad things "could" happen to your data. We will now have a built-in safeguard to prevent this from happening. If here are storage jobs currently running, we will warn you not to.&amp;nbsp; Sorry, but this is where we get into the "it's only available in Windows Server vNext" areas of the list.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Modernize your retail stores or branch offices with Azure Stack HCI&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/82904" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/82904&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Increase to 16 petabytes raw storage&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;In Windows Server 2019, we announced 4 petabyte raw storage availability. In the next Windows Server LTSC, we have increased it even further to 16 petabytes raw storage. This is a big win for backup server scenarios (and others).&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Switchless Clusters&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Here again, we announced 2-node switchless clusters for Windows Server 2019 Azure Stack HCI and Failover Clustering. For Windows Server vNext, we are now going to support more than 2 nodes with full mesh connectivity. So how many network adapters can you add?&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Get started with Azure Stack HCI&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/89352" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/89352&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Repair/Resyc Times are much faster and can throttle it.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;When a reboot is necessary, resync times when it comes back can take a while. We have changed the way we are doing these repairs/resyncs and made it much faster. We have also introduced a way to throttle the repair/rysync. So you can now control if you need it to complete even faster or run more in the background.&amp;nbsp; This is something that will only be available in Windows Server vNext Azure Stack HCI.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;New Affinity/AntiAffinity rules&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;In the past, we have only had antiaffinity as an option to keep roles apart. Introducing new rules for affinity and antiaffinity was a must. With nodes in different sites, we also had to make it site aware. So now, you can keep roles together or apart as well as storage affinity to keep virtual machines on the same node as the storage.&amp;nbsp; These new rules will only be available in Windows Server vNext Azure Stack HCI and traditional failover clusters.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Bitlocker&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;This one had no slides as snuck it in there with some time available. Bitlocker has been available for Clusters for quite some time. The requirement was the cluster nodes must be all in the same domain as the bitlocker key is tied to the Cluster Name Object (CNO). However, for those clusters at the edge, workgroup clusters, and multidomain clusters, Active Directory may not be present. With no Active Directory, there is no CNO. These cluster scenarios had no data at-rest security. With Windows Server vNext, we introduced our own bitlocker key stored locally (encrypted of course) for cluster to use. Now you can feel safer as at-rest security is now in place.&amp;nbsp; This feature will only be available in Windows Server vNext Azure Stack HCI and traditional failover clusters.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;New Network Validation tests&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;We come up with new things all the time and networking continues to be an innovation. We are not standing pat and continue to add new tests.&amp;nbsp;&amp;nbsp;These new tests will only be available in Windows Server vNext Azure Stack HCI and traditional failover clusters.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Windows Admin Center&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;I mentioned in previous descriptions here about Windows Admin Center. The Windows Admin Center Team works tirelessly to continue to add more and more in. For the general availability (GA) announcement of the latest Windows Admin Center, you can see just how much has been added and new. Everything from continued on-premises work, hybrid, third party extensions, and more.&amp;nbsp; The work being done here is not just for the future, but all existing Azure Stack HCI and traditional failover clustering systems.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Server deep dive: Demopalooza&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81949" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81949&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Live Q&amp;amp;A: Manage your hybrid server environment with Windows Admin Center&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/89341" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/89341&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Admin Center: Unlock Azure Hybrid value&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81952" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81952&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Automatically monitor, secure and update your on-premises servers from Azure with Windows Admin Center&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/83942" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/83942&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Be a Windows Admin Center expert: Best practices for deployment, configuration, and security&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/83943" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/83943&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Get more done with Windows Admin Center third-party extensions&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/83944" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/83944&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Admin Center: Better together with System Center and Microsoft Azure&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/83945" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/83945&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;New Windows Performance Monitor&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Everyone is concerned with performance. We have in-box Performance Monitor, but it hasn't been updated in a long time (other than some counters with each release). In Windows Admin Center, we have also added a new Performance Monitor that can be run anywhere against any of the different versions of Windows Server. You can have multiple windows running, pause it, and much much more. We have also made it easier to look at.&amp;nbsp; As with all things Windows Admin Center, this will work with current versions of Windows Server as well as future versions.&amp;nbsp; It is also not limited to Azure Stack HCI or traditional failover clusters.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Server: What's new and what's next&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81704" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81704&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Badges&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Badges? What are those? I don't need no stinkin' badges. Well let me tell you.&amp;nbsp;Our partners have developed solutions optimized for running specific apps/workloads on Azure Stack HCI (both current and future). We are working with our partners to add these "badges" that will be in full view and searchable so you can ensure that the solution you are purchasing is the right solution for you.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;"&gt;&lt;img /&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Windows Server: What's new and what's next&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81704" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81704&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&lt;STRONG&gt;Stretch Azure Stack HCI for Disaster Recovery&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;I mentioned was saving our biggest ask for last. In Windows Server vNext, we will now offer Stretch Azure Stack HCI for disaster recovery purposes. You will now be able to have automatic failovers in case of disaster. We are also making it easier as there is less for you to configure for this, we will do a lot of this for you.&amp;nbsp; This is also one of those, "sorry, but this will be in Windows Server vNext".&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in 0in 0.0001pt; text-align: center;" align="center"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;Stretching Azure Stack HCI for disaster recovery: A glimpse into the future&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/83962" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/83962&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;What's next for software defined storage and networking for Windows Server&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/81960" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/81960&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;HCI is the name, futures are the game&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt; text-align: center;" align="center"&gt;&lt;EM&gt;&lt;A href="https://myignite.techcommunity.microsoft.com/sessions/89330" target="_blank" rel="noopener"&gt;https://myignite.techcommunity.microsoft.com/sessions/89330&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;As I have mentioned previously, we are not resting on our laurels. We are working on many many other improvements for Windows Server vNext, we just aren't ready to announce them just yet.&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Thank you,&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;John Marlin&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Senior Program Manager&lt;/P&gt;
&lt;P style="margin: 0in; margin-bottom: .0001pt;"&gt;Windows High Availability and Storage&lt;/P&gt;</description>
      <pubDate>Mon, 18 Nov 2019 20:03:53 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/talking-failover-clustering-and-azure-stack-hci-ignite-2019/ba-p/1015497</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-11-18T20:03:53Z</dc:date>
    </item>
    <item>
      <title>Windows Server 2019 Failover Clustering New Features</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/windows-server-2019-failover-clustering-new-features/ba-p/544029</link>
      <description>&lt;P&gt;Greetings Failover Cluster fans!!&amp;nbsp; John Marlin here and I own the Failover Clustering feature within the Microsoft product team.&amp;nbsp; In this blog, I will be giving an overview and demoing a lot of the new features in Windows Server 2019 Failover Clustering.&amp;nbsp; I have held off on this to let some things settle down with some of the announcements regarding &lt;A href="https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/" target="_self"&gt;Azure Stack HCI&lt;/A&gt;, upcoming &lt;A href="https://www.microsoft.com/en-us/cloud-platform/windows-server-summit" target="_self"&gt;Windows Server Summit&lt;/A&gt;, etc.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have broken these all down into 7 videos so that you can view them in smaller chunks rather than one massive long video.&amp;nbsp; With each video, I am including a quick description of what features will be covered.&amp;nbsp; Each of the videos is approximately 15 minutes long.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5" color="#0000ff"&gt;&lt;U&gt;&lt;STRONG&gt;Part 1&lt;/STRONG&gt;&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;In this video, we will take a brief look back at Windows Server 2016 Failover Clustering and preview what we did in regards to Windows Server 2019 to make things better.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/wU5ZJW3u3sc" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#0000ff"&gt;&lt;U&gt;&lt;STRONG&gt;&lt;FONT size="5"&gt;Part 2&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;In Part 2 of the series, we take a look at&amp;nbsp;&lt;STRONG&gt;Windows Admin Center&lt;/STRONG&gt; and how it can make the user experience better, &lt;STRONG&gt;Cluster Performance History&lt;/STRONG&gt; to get a history on how the cluster/nodes are performing, &lt;STRONG&gt;System Insights&lt;/STRONG&gt; using predictive analytics (AI) and machine learning (ML), and &lt;STRONG&gt;Persistent Memory&lt;/STRONG&gt; which is the latest in storage/memory technology.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/2ycqvQP96wA" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;&lt;FONT size="5" color="#0000ff"&gt;Part 3&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;In Part 3 of the series, we take a look at &lt;STRONG&gt;Cluster Sets&lt;/STRONG&gt; as our new scaling technology, actual in-place &lt;STRONG&gt;Windows Server Upgrades&lt;/STRONG&gt; which have not been supported in the past, and &lt;STRONG&gt;Microsoft Distributed Transaction Coordinator (MSDTC)&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/ywXZpQLY6yw" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5" color="#0000ff"&gt;&lt;U&gt;&lt;STRONG&gt;Part 4&lt;/STRONG&gt;&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;In this video, we take a look at &lt;STRONG&gt;Two-Node Hyperconverged&lt;/STRONG&gt; and the new way of configuring resiliency, &lt;STRONG&gt;File Share Witness&lt;/STRONG&gt; capabilities for achieving quorum at the edge, &lt;STRONG&gt;Split Brain&lt;/STRONG&gt; detection and how we try to lessen chances of nodes running independent of each other, and what we did with &lt;STRONG&gt;Security&lt;/STRONG&gt; in mind.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/DLQTcIFsksE" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;&lt;FONT size="5" color="#0000ff"&gt;Part 5&lt;/FONT&gt;&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This video talks about &lt;STRONG&gt;Scale-Out File Servers&lt;/STRONG&gt; and some of the connectivity enhancements, &lt;STRONG&gt;Cluster Shared Volumes&amp;nbsp;(CSV)&lt;/STRONG&gt; with caching and a security enhancement, &lt;STRONG&gt;Marginal Disk&lt;/STRONG&gt; support and the way we are detecting drives that are starting to go bad, and &lt;STRONG&gt;Cluster Aware Updating&lt;/STRONG&gt; enhancements for when you are patching your Cluster nodes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/xc5RBEj74Dw" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5" color="#0000ff"&gt;&lt;U&gt;&lt;STRONG&gt;Part 6&lt;/STRONG&gt;&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;This video will talk about enhancements we made with the &lt;STRONG&gt;Cluster Network Name&lt;/STRONG&gt;, changes made for when running &lt;STRONG&gt;Failover Clusters in Azure&lt;/STRONG&gt;&amp;nbsp;as IaaS virtual machines, and how &lt;STRONG&gt;Domain Migrations&lt;/STRONG&gt; are no longer a pain point moving between domains.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/JfFK17A1KEM" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="5" color="#0000ff"&gt;&lt;U&gt;&lt;STRONG&gt;Part 7&lt;/STRONG&gt;&lt;/U&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;As a wrap up, we will take a look at a couple announcements made and demonstrated at Microsoft Ignite 2018 regarding &lt;STRONG&gt;IOPs&lt;/STRONG&gt; and &lt;STRONG&gt;Capacity&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/G4Opr3IbKoA" width="987" height="616" frameborder="0" allowfullscreen="allowfullscreen" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/IFRAME&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;My hope is that you enjoy these videos and get a good understanding of our roadmap from Windows Server 2016 to Windows Server 2019 from a Failover Clustering standpoint.&amp;nbsp; If there are any questions about any of these features, hit me up on Twitter (below).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;High Availability and Storage&lt;/P&gt;
&lt;P&gt;Twitter:&amp;nbsp;&lt;A href="https://twitter.com/JohnMarlin_MSFT" target="_self"&gt;@JohnMarlin_MSFT&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Dec 2019 00:59:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/windows-server-2019-failover-clustering-new-features/ba-p/544029</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-12-05T00:59:48Z</dc:date>
    </item>
    <item>
      <title>List of Failover Cluster Events in Windows 2016/2019</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/list-of-failover-cluster-events-in-windows-2016-2019/ba-p/447150</link>
      <description>&lt;P&gt;Let me tell you a story about myself and one of my asks while I was still in support.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We always thought that it would be nice to have a listing of all Failover Clustering events for reference.&amp;nbsp; Our customers asked for it, and we asked for it.&amp;nbsp;&amp;nbsp; So logically, we approached the Product Group and asked for it.&amp;nbsp; The response back wasn't what we really wanted to hear, which was, &lt;EM&gt;"&lt;FONT color="#0000FF"&gt;It's in the code that you can pull out but will take you some time to piece it all together&lt;/FONT&gt;"&lt;/EM&gt;.&amp;nbsp; Again, not what we wanted to hear, and we made this ask on multiple occasions.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then, little Johnny joined the big boys in the Product Group and became the PM owner of Failover Clustering infrastructure.&amp;nbsp; Now, everyone keeps asking me for it.&amp;nbsp; My response?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"&lt;FONT color="#0000FF"&gt;It's in the code that you can pull out but will take you some time to piece it all together&lt;/FONT&gt;"&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Awfully hypocritical of me, but when I did take a look at it, yes, it would take a while to do (we're not talking about a day or so).&amp;nbsp; I could see why we had given that response previously.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here's a bit of trivia I bet you did not know.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Q: How many events are there for Failover Clustering?&lt;/P&gt;
&lt;P&gt;A:&amp;nbsp;Between the System event log and the Microsoft-Windows-FailoverClustering/Operational channel, there are a total of 388 events in Windows 2019.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
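&lt;P&gt;For the curious, here is a minimal sketch of the "pull it out of the code" approach, assuming the Failover Clustering ETW provider is registered on the node; the properties come straight from the standard Get-WinEvent provider metadata, and the CSV path is just a placeholder:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;FONT color="#0000ff"&gt;# Enumerate every event the clustering provider can emit and dump the list to CSV
$provider = Get-WinEvent -ListProvider "Microsoft-Windows-FailoverClustering"
$provider.Events |
    Select-Object Id, Level, Description |
    Export-Csv -Path .\ClusterEvents.csv -NoTypeInformation&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;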
&lt;P&gt;While in Support, I did not realize how many people were asking for it.&amp;nbsp; I was getting hit up from all over the place, both internally and externally.&amp;nbsp; I was starting to look a lot like:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Finally, I decided that I was going to do it.&amp;nbsp; So I rolled up my sleeves, and a couple of weeks later, it was finally complete.&amp;nbsp; FINALLY!!!!&amp;nbsp; The spreadsheet with the events is attached to this blog.&amp;nbsp; My hope is that you can make good use of it and that it is what you have been asking for.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;To explain it a bit, this list is for Windows 2016 and 2019 Failover Clustering.&amp;nbsp; Many of these same events are in previous versions.&amp;nbsp; We have not removed any events, only added with each version.&amp;nbsp; I have separated it into two tabs, one for Windows 2016 and the other for the Windows 2019 new events.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;There are a few duplicate event IDs.&amp;nbsp; That is by design.&amp;nbsp; The description of the event is going to depend on the call made, so they may differ slightly.&amp;nbsp;&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;I sorted it all by the severity levels.&amp;nbsp; Feel free to sort however you wish.&lt;/LI&gt;
&lt;LI&gt;You may notice '%1', '%2', etc values in the description.&amp;nbsp; When there is an event, we collect the values such as resource, group, etc as a variable and substitute the variable in the actual description.&lt;/LI&gt;
&lt;LI&gt;You may notice '%n' in some of the descriptions.&amp;nbsp; I left those in the spreadsheet; they are carriage returns.&amp;nbsp; I didn't want to replace them as it would distort things if you wanted to sort differently.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Is there a moral of the story for little Johnny?&amp;nbsp; Don't know.&amp;nbsp; But now that it is done, he feels much better.&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Happy Clustering !!!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;High Availability and Storage&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 26 Sep 2019 23:51:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/list-of-failover-cluster-events-in-windows-2016-2019/ba-p/447150</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-09-26T23:51:30Z</dc:date>
    </item>
    <item>
      <title>Optimizing Hyper-V Live Migrations on an Hyperconverged Infrastructure</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/optimizing-hyper-v-live-migrations-on-an-hyperconverged/ba-p/396609</link>
      <description>&lt;P&gt;We have been doing some extensive testing in how to best configure Hyper-V live migration to achieve the best performance and highest level of availability.&lt;/P&gt;
&lt;H2&gt;Recommendations:&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;Configure Live Migration to use SMB. This &lt;STRONG&gt;must be performed on all nodes&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;PRE&gt;&amp;nbsp; &amp;nbsp; &lt;FONT color="#0000ff"&gt;Set-VMHost -VirtualMachineMigrationPerformanceOption SMB&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;Use RDMA enabled NIC’s to offload the CPU and improve network performance&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;Configure SMB Bandwidth Limits to ensure live migrations do not saturate the network and throttle to 750 MB. This &lt;STRONG&gt;must be performed on all nodes&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; First install the SMB Bandwidth Limits feature:&lt;/P&gt;
&lt;PRE&gt;&amp;nbsp;    &lt;FONT color="#0000ff"&gt;Add-WindowsFeature -Name FS-SMBBW&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Throttle to 750 MB&lt;/P&gt;
&lt;PRE&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;FONT color="#0000ff"&gt;Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL start="4"&gt;
&lt;LI&gt;Configure a maximum of 2 simultaneous Live migrations (which is default). This &lt;STRONG&gt;must be performed on all nodes&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Leave at the default value of 2; no changes required&lt;/P&gt;
&lt;PRE&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;&lt;FONT color="#0000ff"&gt;Set-VMHost -MaximumVirtualMachineMigrations 2&lt;/FONT&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Background:&lt;/H2&gt;
&lt;P&gt;For those that want to understand the ‘why’ on the above recommendations, read on!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What a live migration fundamentally does is take the memory allocated to a virtual machine and copy it over the network from one server to another.&amp;nbsp; Let’s say you allocated 4 GB of memory to a virtual machine; when you invoke a live migration, that 4 GB of memory is copied over the network between the source and destination server.&amp;nbsp; Because the VM is running, that means that memory is changing while that 4 GB is copied.&amp;nbsp; Those changes are tracked, and once the initial allocated memory is copied, then a second pass occurs and the changed memory is copied.&amp;nbsp; In the second pass, the amount of changed memory is smaller and takes less time to copy; all the while, memory keeps changing.&amp;nbsp; So a third pass happens, and so on, with each pass getting faster and the delta of memory changed getting smaller.&amp;nbsp; Eventually the set of memory gets small enough that the VM is paused, the final set of changes is copied over, and then the VM is resumed on the new server.&amp;nbsp; While the VM is paused and the final memory copy occurs, the VM is not available; this is referred to as the blackout window.&amp;nbsp; This is not unique to Hyper-V; all virtualization platforms have this.&amp;nbsp; The magic of a live migration is that as long as the blackout window is within the TCP reconnect window, it is completely seamless to the applications.&amp;nbsp; That’s how a live migration achieves zero downtime from an application perspective, even though there is a very small amount of downtime from an infrastructure perspective.&amp;nbsp; Don’t get hung up on the blackout window (if it is within the TCP reconnect window), it’s all about the app!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Live migration supports TCP/IP, Compression, and SMB as transports.&amp;nbsp; Nearly all hyperconverged infrastructure (HCI) systems have RDMA-enabled network cards, and Server Message Block (SMB) has a feature called SMB Direct which can take advantage of RDMA.&amp;nbsp; Using SMB as the protocol for the memory copy over the network results in drastically reduced CPU overhead to conduct the data copy with the best network performance.&amp;nbsp; This is important to minimize consuming CPU cycles from other running virtual machines and to keep the data copy windows small so that the number of passes to copy changed memory is minimized.&amp;nbsp; Another feature of SMB is SMB Multichannel, which will stream the live migration across multiple network interfaces to achieve even better performance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An HCI system is a distributed system that is heavily dependent on reliable networking, as there is cluster communication and data replication also occurring over the network.&amp;nbsp; From a network perspective, a live migration is a sudden burst of heavy network traffic.&amp;nbsp; Using SMB bandwidth limits to achieve Network Quality of Service (QoS) is desired to keep this burst traffic from saturating the network and negatively impacting other aspects of the system.&amp;nbsp; Our testing evaluated different bandwidth limits on a dual 10 Gbps RDMA-enabled NIC, measured failures under stress conditions, and found that throttling live migration to 750 MB achieved the highest level of availability for the system.&amp;nbsp; On a system with higher bandwidth, you may be able to throttle to a value higher than 750 MB.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;When draining a node in an HCI system, multiple VMs can be live migrated at the same time.&amp;nbsp; This parallelization can achieve faster overall times when moving large numbers of VMs off a host.&amp;nbsp; As an example, instead of copying just 4 GB for a single machine, it will copy 4 GB for one VM and 4 GB for another VM.&amp;nbsp; But there is a sweet spot: a single live migration at a time serializes and results in longer overall times, while having too many simultaneous live migrations can end up taking much longer.&amp;nbsp; Remember that if the network becomes saturated with many large copies, each one takes longer…&amp;nbsp; which means more memory is changing on each, which means more passes, and results in overall longer times for each. &amp;nbsp;Two simultaneous live migrations were found to deliver the best balance in combination with a 750 MB throttle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Lastly, a live migration will not continue making pass after pass indefinitely, copying changed memory on a very busy VM over a slow interconnect; eventually live migration will give up, freeze the VM, and make a final memory copy.&amp;nbsp; This can result in longer blackout windows, and if that final copy exceeds the TCP reconnect window then it can impact the apps.&amp;nbsp; This is why ensuring live migrations can complete in a timely manner is important.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In our internal testing, we have found that these recommended settings achieve the fastest times to drain multiple VMs off of a server, the smallest blackout windows for application availability, the least impact to production VMs, and the greatest levels of availability for the infrastructure.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Elden Christensen&lt;/P&gt;
&lt;P&gt;Principal Program Manager&lt;/P&gt;
&lt;P&gt;High Availability and Storage Team&lt;/P&gt;</description>
      <pubDate>Thu, 05 Sep 2019 15:12:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/optimizing-hyper-v-live-migrations-on-an-hyperconverged/ba-p/396609</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-09-05T15:12:35Z</dc:date>
    </item>
    <item>
      <title>So what exactly is the CLIUSR account</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/so-what-exactly-is-the-cliusr-account/ba-p/388832</link>
      <description>&lt;P&gt;From time to time, people stumble across the local user account called CLIUSR and wonder what it is. While you really don’t need to worry about it, we will cover it for the curious in this blog.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The CLIUSR account is a local user account created by the Failover Clustering feature when it is installed on Windows Server 2012 or later. Well, that’s easy enough, but why is this account here? Taking a step back, let’s take a look at why we are using this account.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Windows Server 2003 and previous versions of the Cluster Service, a domain user account was used to start the Cluster Service. This Cluster Service Account (CSA) was used for forming the Cluster, joining a node, registry replication, etc. Basically, any kind of authentication that was done between nodes used this user account as a common identity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A number of support issues were encountered as domain administrators were pushing down group policies that stripped rights away from domain user accounts, not taking into consideration that some of those user accounts were used to run services. An example of this is the Logon as a Service right. If the Cluster Service account did not have this right, it was not going to be able to start the Cluster Service. If you were using the same account for multiple clusters, then you could incur production downtime across a number of critical systems. You also had to deal with password changes in Active Directory. If you changed the user accounts password in AD, you also needed to change passwords across all Clusters/nodes that use the account.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Windows Server 2008, we learned from this and redesigned everything about the way we start the service to make it more resilient, less error-prone, and easier to manage. We started using the built-in Network Service to start the Cluster Service. Keep in mind that this is not the full-blown account, just a reduced privilege set. Changing it to this reduced account was a solution for the group policy issues.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For authentication purposes, it was switched over to use the computer object associated with the Cluster Name known as the Cluster Name Object (CNO) for a common identity. Because this CNO is a machine account in the domain, it will automatically rotate the password as defined by the domain’s policy for you (which is every 30 days by default).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Great!! No more domain user account and its password changes we have to account for. No more trying to remember which Cluster was using which account. Yes!! Ah, not so fast my friend. While this solved some major pain, it did have some side effects.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Starting in Windows Server 2008 R2, admins started virtualizing everything in their datacenters, including domain controllers. Cluster Shared Volumes (CSV) was also introduced and became the standard for private cloud storage. Some admins completely embraced virtualization and virtualized every server in their datacenter, going as far as adding domain controllers as virtual machines to a Cluster and utilizing the CSV drive to hold the VHD/VHDX of the VM.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This created a “chicken or the egg” scenario that many companies ended up in. In order to mount the CSV drive to get to the VMs, you had to contact a domain controller to get the CNO. However, you couldn’t start the domain controller because it was running on the CSV.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Having slow or unreliable connectivity to domain controllers also had an effect on I/O to CSV drives. CSV does intra-cluster communication via SMB, much like connecting to file shares. To connect with SMB, it needs to authenticate, and in Windows Server 2008 R2, that involved authenticating the CNO with a remote domain controller.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For Windows Server 2012, we had to think about how we could take the best of both worlds and get around some of the issues we were seeing. We are still using the reduced Network Service privilege to start the Cluster Service, but now to remove all external dependencies we have a local (non-domain) user account for authentication between the nodes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This local “user” account is not an administrative account or domain account. This account is automatically created for you on each of the nodes when you create a cluster or on a new node being added to the existing Cluster. This account is completely self-managed by the Cluster Service and handles automatically rotating the password for the account and synchronizing all the nodes for you. The CLIUSR password is rotated at the same frequency as the CNO, as defined by your domain policy (which is every 30 days by default). With it being a local account, it can authenticate and mount CSV so the virtualized domain controllers can start successfully. You can now virtualize all your domain controllers without fear. So we are increasing the resiliency and availability of the Cluster by reducing external dependencies.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This account is the CLIUSR account and is identified by its description.&lt;/P&gt;
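&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to take a peek yourself, here is a minimal sketch, assuming Windows PowerShell 5.1 or later where the Microsoft.PowerShell.LocalAccounts module is available:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;FONT color="#0000ff"&gt;# Inspect the CLIUSR account and its identifying description
Get-LocalUser -Name CLIUSR | Select-Object Name, Description, Enabled&lt;/FONT&gt;&lt;/PRE&gt;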
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;One question that we get asked is if the CLIUSR account can be deleted. From a security standpoint, additional local accounts (not default) may get flagged during audits. If the network administrator isn’t sure what this account is for (i.e. they don’t read the description of “Failover Cluster Local Identity”), they may delete it without understanding the ramifications. For Failover Clustering to function properly, this account is necessary for authentication.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Joining node starts the Cluster Service and passes the CLIUSR credentials across.&lt;/LI&gt;
&lt;LI&gt;All passes, so the node is allowed to join.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;There is one extra safeguard we added to ensure continued success. If you accidentally delete the CLIUSR account, it will be recreated automatically when a node tries to join the Cluster.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Short story… the CLIUSR account is an internal component of the Cluster Service. It is completely self-managing and there is nothing you need to worry about regarding configuring and managing it. So leave it alone and let it do its job.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Windows Server 2016, we will be taking this even a step further by leveraging certificates to allow Clusters to operate without any external dependencies of any kind. This allows you to create Clusters out of servers that reside in different domains or no domains at all. But that’s a blog for another day.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Also, please be aware that there are security guides/blogs out there whose recommendations can block local non-administrative accounts from doing certain things.&amp;nbsp; For example, this blog explains:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Blocking Remote Use of Local Accounts&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;&lt;A href="https://blogs.technet.microsoft.com/secguide/2014/09/02/blocking-remote-use-of-local-accounts" target="_blank" rel="noopener"&gt;https://blogs.technet.microsoft.com/secguide/2014/09/02/blocking-remote-use-of-local-accounts&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In this blog, it talks about&amp;nbsp;"Deny access to this computer from the network" with the default generic security identifier of S-1-5-113 (NT AUTHORITY\Local account).&amp;nbsp; This will cause the Cluster Service to not be able to do what it needs to do (joins, management, etc.).&amp;nbsp; CLIUSR is a local account without administrative rights, so it really cannot do anything disruptive to the system.&amp;nbsp; The blog goes on to explain that what you should put in instead is the generic identifier S-1-5-114 (NT AUTHORITY\Local account and member of Administrators group).&amp;nbsp; This way, the Cluster can still perform as a Cluster should.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As an FYI, this is not the only group policy.&amp;nbsp; Others could include, but are not limited to:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Access this computer from the network&lt;/LI&gt;
&lt;LI&gt;Deny log on locally&lt;/LI&gt;
&lt;LI&gt;Deny log on as a service&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hopefully, this answers any questions you have regarding the CLIUSR account and its use.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Happy Clustering !!!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;John Marlin&lt;/P&gt;
&lt;P&gt;Senior Program Manager&lt;/P&gt;
&lt;P&gt;High Availability and Storage Team&lt;/P&gt;
&lt;P&gt;Twitter:&amp;nbsp;@JohnMarlin_MSFT&lt;/P&gt;</description>
      <pubDate>Tue, 19 Nov 2019 22:44:50 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/so-what-exactly-is-the-cliusr-account/ba-p/388832</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-11-19T22:44:50Z</dc:date>
    </item>
    <item>
      <title>No such thing as a Heartbeat Network</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/no-such-thing-as-a-heartbeat-network/ba-p/388121</link>
      <description>&lt;P&gt;This is a blog that has been a long time coming.&amp;nbsp; From time to time, we get a request about how to configure networking in Failover Clusters.&amp;nbsp; One of the questions we get is how the heartbeat network should be configured, and that is the focus of this blog.&amp;nbsp; I am here to say there is no such thing as a heartbeat network, and there never was.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please allow me to give a little background and explain.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In Windows 2003 and below Failover Clustering, you could define which network was used for Cluster Communication.&amp;nbsp; Below is a picture for reference.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In the picture above, we would want to select Private for our Cluster Communication so as to not use the Public, which has all the WAN traffic.&amp;nbsp; All Cluster Communication between nodes (joins, registry updates/changes, etc.) would go only over this network if it is up.&amp;nbsp; As the picture shows, the networks are called Public and Private.&amp;nbsp; As years went by, some started calling the Private network a Heartbeat network.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Heartbeats are small packets (134 bytes) that travel over UDP Port 3343 on all networks configured for Cluster use between all nodes.&amp;nbsp; They serve multiple purposes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Establishes if a Cluster Network is up or down&lt;/LI&gt;
&lt;LI&gt;Establishes routes between nodes&lt;/LI&gt;
&lt;LI&gt;Establishes whether the health of a node is good or bad&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;So let's say I have Private set as my priority network for Cluster Communications.&amp;nbsp; If it is up, we are sending our communication through it.&amp;nbsp; But what happens if that network isn't reliable?&amp;nbsp; If a node tries to join and packets are dropping, then the join could fail.&amp;nbsp; If this is the case, you either determine where the problem is and fix it, or go back into the Cluster properties and set the Public as priority.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Starting in Windows 2008 Failover Clusters, the concept of Public and Private networks went out the window.&amp;nbsp; We will now send Cluster Communication over any of our networks.&amp;nbsp; One of the reasons for this was reliability.&amp;nbsp; With that change, we also gave the heartbeats an additional purpose.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Determines the fastest and most reliable routes between nodes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Since we are now determining the fastest and most reliable routes, we could use different networks between nodes for our communication.&amp;nbsp; Take the below as an example.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&lt;img /&gt;&lt;/P&gt;
&lt;P style="text-align: center;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have three individual networks between our nodes:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="text-align: left;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;FONT color="#0000FF"&gt;&lt;SPAN&gt;Blue&lt;/SPAN&gt;&lt;/FONT&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;– 10 gbps used for backups and administration only&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#008000"&gt;Green&lt;/FONT&gt;&lt;SPAN style="font-family: inherit;"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="font-family: inherit;"&gt;– 40 gbps used for communicating out on the WAN to clients&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;FONT color="#FF0000"&gt;&lt;SPAN&gt;Red&lt;/SPAN&gt;&lt;/FONT&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;– 40 gbps used for communicating out on the WAN to clients&lt;/LI&gt;
&lt;/UL&gt;
&lt;P style="text-align: left;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;As a refresher, here is what the heartbeats are doing:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Establishes if a Cluster Network is up or down&lt;/LI&gt;
&lt;LI&gt;Establishes routes between nodes&lt;/LI&gt;
&lt;LI&gt;Establishes whether the health of a node is good or bad&lt;/LI&gt;
&lt;LI&gt;Determines the fastest and most reliable routes between nodes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;What the heartbeats are going to tell the Cluster is to use one of the faster networks for its communication.&amp;nbsp; With that being the case, it is going to use either the Red or the Green network.&amp;nbsp; If the heartbeats start detecting that neither of these is as reliable (i.e., dropping packets, network congestion, etc.), it will automatically switch and use the Blue network.&amp;nbsp; That's it; there is nothing extra for you to configure.&lt;/P&gt;
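&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For the curious, you can still look at the knobs that govern heartbeat behavior; a minimal sketch, assuming the FailoverClusters PowerShell module is installed:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;FONT color="#0000ff"&gt;# View heartbeat frequency (delays, in milliseconds) and missed-heartbeat thresholds
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold&lt;/FONT&gt;&lt;/PRE&gt;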
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So to wrap things up, remember these things about Failover Clusters and Heartbeats.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;There is no such thing as a heartbeat network or a network dedicated to heartbeats&lt;/LI&gt;
&lt;LI&gt;Heartbeat packets are lightweight (134 bytes in size)&lt;/LI&gt;
&lt;LI&gt;Heartbeats are sensitive to latency&lt;/LI&gt;
&lt;LI&gt;Bandwidth is not an important factor; quality of service is.&amp;nbsp; If your network is all teamed, ensure you have set up Network QoS policies for the UDP 3343 traffic (see the sketch just after this list).&lt;/LI&gt;
&lt;/OL&gt;
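&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here is a minimal sketch of such a QoS policy, assuming the in-box NetQos module and a converged/teamed network; the priority value of 7 is only illustrative, so pick what fits your fabric:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;FONT color="#0000ff"&gt;# Tag cluster heartbeat traffic (UDP 3343) with an 802.1p priority
New-NetQosPolicy -Name "Cluster Heartbeat" -IPProtocolMatchCondition UDP -IPDstPortMatchCondition 3343 -PriorityValue8021Action 7&lt;/FONT&gt;&lt;/PRE&gt;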
&lt;P&gt;For more information regarding configuring networks in a Cluster, please see Microsoft Ignite session:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/MDC-B337#fbid=ZpvM0cLRvyX" target="_self"&gt;Failover Clustering Networking Essentials&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Happy Clustering !!!!&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;John Marlin&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Senior Program Manager&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Microsoft Corporation&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Twitter:&amp;nbsp;@Johnmarlin_MSFT&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 25 Mar 2019 22:49:09 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/no-such-thing-as-a-heartbeat-network/ba-p/388121</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-03-25T22:49:09Z</dc:date>
    </item>
    <item>
      <title>Windows Server 2016/2019 Cluster Resource / Resource Types</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/windows-server-2016-2019-cluster-resource-resource-types/ba-p/372163</link>
      <description>&lt;P&gt;&lt;STRONG&gt; First published on MSDN on Jan 16, 2019 &lt;/STRONG&gt; &lt;BR /&gt;Over the years, we have been asked about what some of the Failover Cluster resources/resource types are and what they do. There are several resources that have been asked about on multiple occasions and we haven't really had a good definition to point you to. Well, not anymore. &lt;BR /&gt;&lt;BR /&gt;What I want to do with this blog is define what they are, what they do, and when they were added (or removed). I am only going to cover the in-box resource types that come with Failover Clustering. But first, I wanted to explain what a cluster "resource" and "resource types" are. &lt;BR /&gt;&lt;BR /&gt;Cluster resources are physical or logical entities, such as a file share, disk, or IP Address managed by the Cluster Service. The operating system does not distinguish between cluster and local resources. Resources may provide a service to clients or be an integral part of the cluster. Examples of resources would be physical hardware devices such as disk drives, or logical items such as IP addresses, network names, applications, and services. They are the basic and smallest configurable unit managed by the Cluster Service. A resource can only run on a single node in a cluster at a time. &lt;BR /&gt;&lt;BR /&gt;Cluster resource types are dynamic library plug-ins. These Resource DLLs are responsible for carrying out most operations on cluster resources. A resource DLL is characterized as follows: &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;It contains the resource-specific code necessary to provide high availability for instances of one or more resource types.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;It exposes this code through a standard interface consisting of a set of &lt;A href="https://technet.microsoft.com/en-us/aa372244(v=vs.90)" target="_blank" rel="noopener"&gt; entry point functions &lt;/A&gt; .&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;It is registered with the Cluster service to associate one or more resource type names with the name of the DLL.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;It is always loaded into a Resource Monitor's process when in use.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;When the Cluster service needs to perform an operation on a resource, it sends the request to the Resource Monitor assigned to the resource. If the Resource Monitor does not have a DLL in its process that can handle that type of resource, it uses the registration information to load the DLL associated with the resource type. It then passes the Cluster service's request to one of the DLL's entry point functions. The resource DLL handles the details of the operation so as to meet the specific needs of the resource. &lt;BR /&gt;&lt;BR /&gt;You can define your own resource types to provide customized support for cluster-unaware applications, enhanced support for cluster-aware applications, or specialized support for new kinds of devices. For more information, see &lt;A href="https://technet.microsoft.com/en-us/aa369331(v=vs.90)" target="_blank" rel="noopener"&gt; Creating Resource Types &lt;/A&gt; . &lt;BR /&gt;&lt;BR /&gt;All resource types that are available in a Failover Cluster can be seen by right-clicking on the name of the Cluster, choosing Properties, and selecting the Resource Types tab ( &lt;EM&gt; shown below &lt;/EM&gt; ). &lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt; &lt;BR /&gt;&lt;BR /&gt;You can also get a list from running the PowerShell command &lt;A href="https://docs.microsoft.com/powershell/module/failoverclusters/get-clusterresourcetype" target="_blank" rel="noopener"&gt; Get-ClusterResourceType &lt;/A&gt; . Please keep in mind that not all resource types may show up or be accessible to you. For example, if the Hyper-V role is not installed, the virtual machine resource types will not be available. So enough about this, let's get to the resource types, when they were available and, for some, when they were last seen. &lt;BR /&gt;&lt;BR /&gt;Since there are multiple versions of Windows Clustering, this blog will only focus on the two latest versions (Windows Server 2016 and 2019). &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Windows Server 2016 / 2019 &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Cloud Witness (clusres.dll): &lt;/STRONG&gt; Cloud Witness is a quorum witness that leverages Microsoft Azure as the arbitration point. It uses Azure Blob Storage to read/write a blob file which is then used as an arbitration point in case of split-brain resolution. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; DFS Replicated Folder (dfsrclus.dll): &lt;/STRONG&gt; Manages a Distributed File System (DFS) replicated folder. When creating a DFS, this resource type is configured to ensure proper replication occurs. For more information regarding this, please refer to the &lt;A href="https://blogs.technet.microsoft.com/filecab/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i/" target="_blank" rel="noopener"&gt; 3-part blog series &lt;/A&gt; on the topic. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; DHCP Service (clnetres.dll): &lt;/STRONG&gt; The DHCP Service resource type supports the Dynamic Host Configuration Protocol (DHCP) Service as a cluster resource. There can be only one instance of a resource of this type in the cluster (that is, a cluster can support only one DHCP Service). Dynamic Host Configuration Protocol (DHCP) is a client/server protocol that automatically provides an Internet Protocol (IP) host with its IP address and other related configuration information such as the subnet mask and default gateway. 
RFCs 2131 and 2132 define DHCP as an Internet Engineering Task Force (IETF) standard based on Bootstrap Protocol (BOOTP), a protocol with which DHCP shares many implementation details. DHCP allows hosts to obtain required TCP/IP configuration information from a DHCP server. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Disjoint IPv4 Address (clusres.dll): &lt;/STRONG&gt; IPv4 Resource type that can be used if setting up a site to site VPN Gateway. It can only be configured by PowerShell, not by the Failover Cluster Manager, the GUI tool on Windows Server. We added two IP addresses of this resource type, one for the internal network and one for the external network. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;The internal address is plumbed down for the cluster network that is identified by Routing Domain ID and VLAN number. Remember, we mapped them to the internal network adapters on the Hyper-V hosts earlier. It should be noted that this address is the default gateway address for all machines on the internal network that need to connect to Azure.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;The external address is plumbed down for the cluster network that is identified by the network adapter name. Remember, we renamed the external network adapter to “Internet” on both virtual machines.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Disjoint IPv6 Address (clusres.dll): &lt;/STRONG&gt; IPv6 Resource type that can be used if setting up a site to site VPN Gateway. It can only be configured by PowerShell, not by the Failover Cluster Manager, the GUI tool on Windows Server. We added two IP addresses of this resource type, one for the internal network and one for the external network. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;The internal address is plumbed down for the cluster network that is identified by Routing Domain ID and VLAN number. Remember, we mapped them to the internal network adapters on the Hyper-V hosts earlier. It should be noted that this address is the default gateway address for all machines on the internal network that need to connect to Azure.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;UL&gt;
&lt;LI&gt;The external address is plumbed down for the cluster network that is identified by the network adapter name. Remember, we renamed the external network adapter to “Internet” on both virtual machines.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Ras Cluster Resource (rasclusterres.dll): &lt;/STRONG&gt; This resource object specifies where the site-to-site VPN configuration is stored. The file share can be anywhere the two virtual machines have read / write access to. It can only be configured by PowerShell, not by the Failover Cluster Manager, the GUI tool on Windows Server. This resource type is only available after installing the VPN Roles in Windows Server. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Distributed File System (clusres2.dll): &lt;/STRONG&gt; Manages a Distributed File System (DFS) as a cluster resource. When creating a DFS, this resource type is configured to ensure proper replication occurs. For more information regarding this, please refer to the &lt;A href="https://blogs.technet.microsoft.com/filecab/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i/" target="_blank" rel="noopener"&gt; 3-part blog series &lt;/A&gt; on the topic. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Distributed Transaction Coordinator (mtxclu.dll): &lt;/STRONG&gt; The Distributed Transaction Coordinator (DTC) resource type supports the Microsoft Distributed Transaction Coordinator (MSDTC). MSDTC is a Windows service providing transaction infrastructure for distributed systems, such as SQL Server. In this case, a transaction means a general way of structuring the interactions between autonomous agents in a distributed system. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; File Server (clusres2.dll): &lt;/STRONG&gt; Manages the shares that are created as highly available. A file share is a location on the network where clients connect to access data, including documents, programs, images, etc. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; File Share Witness (clusres.dll): &lt;/STRONG&gt; A File Share Witness is a witness (quorum) resource and is simply a file share created on a completely separate server from the cluster for tie-breaker scenarios when quorum needs to be established. A File Share Witness does not store cluster configuration data like a disk. It does, however, contain information about which version of the cluster configuration database is most recent. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Generic Application (clusres2.dll): &lt;/STRONG&gt; The Generic Application resource type manages &lt;A href="https://technet.microsoft.com/aa369336(v=vs.90)#_wolf_cluster_unaware_application_gly" target="_blank" rel="noopener"&gt; cluster-unaware applications &lt;/A&gt; as cluster resources, as well as &lt;A href="https://technet.microsoft.com/aa369082(v=vs.90)" target="_blank" rel="noopener"&gt; cluster-aware applications &lt;/A&gt; that are not associated with custom resource types. The Generic Application resource DLL provides only very basic application control. For example, it checks for application failure by determining whether the application's process still exists and takes the application offline by terminating the process. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Generic Script (clusres2.dll): &lt;/STRONG&gt; The Generic Script resource type works in conjunction with a script that you must provide to manage an application or service as a highly available cluster resource. In effect, the Generic Script resource type allows you to script your own resource DLL. For more information on how to use the Generic Script resource type, see &lt;A href="https://technet.microsoft.com/aa373089(v=vs.90)" target="_blank" rel="noopener"&gt; Using the Generic Script Resource Type &lt;/A&gt; . 
&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Generic Service (clusres2.dll): &lt;/STRONG&gt; The Generic Service resource type manages services as cluster resources. Similar to the Generic Application resource type, the Generic Service resource type provides only the most basic functionality. For example, the failure of a Generic Service resource is determined by a query of the Service Control Manager (SCM). If the service is running, it is presumed to be online. To provide greater functionality, you can define a custom resource type (for information, see &lt;A href="https://technet.microsoft.com/aa369331(v=vs.90)" target="_blank" rel="noopener"&gt; Creating Resource Types &lt;/A&gt; ). &lt;BR /&gt;&lt;BR /&gt;A generic service resource type is usually used to manage a stateless service as a cluster resource, which can be failed over. However, generic services don't provide much state information other than their online state, so if they have an issue that doesn't cause the resource to go offline, it is more difficult to detect a service failure. &lt;BR /&gt;&lt;BR /&gt;Generic services should only be used when all of the following conditions are true; otherwise, you should create a resource DLL. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The resource is not a device. The generic resource types are not designed to manage hardware.&lt;/LI&gt;
&lt;LI&gt;The resource is stateless.&lt;/LI&gt;
&lt;LI&gt;The resource is not dependent on other resources.&lt;/LI&gt;
&lt;LI&gt;The resource does not have unique attributes that should be managed with private properties.&lt;/LI&gt;
&lt;LI&gt;The resource does not have special functionality that should be exposed through control codes.&lt;/LI&gt;
&lt;LI&gt;The resource can be started and stopped easily without using special procedures.&lt;/LI&gt;
&lt;/UL&gt;
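&lt;P&gt;Here is that sketch. The resource, group, and service names are illustrative placeholders; ServiceName is the private property the resource DLL reads to find the service it should manage.&lt;/P&gt;
&lt;PRE&gt;
# Minimal sketch, assuming a clustered role named "MyGroup" already exists and
# the Windows service "Spooler" is installed on every node; names are illustrative.
$res = Add-ClusterResource -Name "My Generic Service" -ResourceType "Generic Service" -Group "MyGroup"

# ServiceName tells the resource DLL which service the Service Control Manager
# should start, stop, and monitor.
$res | Set-ClusterParameter -Name ServiceName -Value "Spooler"

Start-ClusterResource -Name "My Generic Service"
&lt;/PRE&gt;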
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Health Service (healthres.dll): &lt;/STRONG&gt; The Health Service constantly monitors your Storage Spaces Direct cluster to detect problems and generate "faults". Through either Windows Admin Center or PowerShell, it displays any current faults, allowing you to easily verify the health of your deployment without looking at every entity or feature in turn. Faults are designed to be precise, easy to understand, and actionable. &lt;BR /&gt;&lt;BR /&gt;Each fault contains five important fields: &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Severity&lt;/LI&gt;
&lt;LI&gt;Description of the problem&lt;/LI&gt;
&lt;LI&gt;Recommended next step(s) to address the problem&lt;/LI&gt;
&lt;LI&gt;Identifying information for the faulting entity&lt;/LI&gt;
&lt;LI&gt;Its physical location (if applicable)&lt;/LI&gt;
&lt;/UL&gt;
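&lt;P&gt;To list the current faults from PowerShell, you can run the following on any node of the Storage Spaces Direct cluster, using the in-box Storage module cmdlets:&lt;/P&gt;
&lt;PRE&gt;
# Shows current Health Service faults, including severity, description,
# recommended actions, and the identity of the faulting entity.
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem
&lt;/PRE&gt;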
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; IP Address (clusres.dll): &lt;/STRONG&gt; The IP Address resource type is used to manage Internet Protocol (IP) network addresses. When an IP Address resource is included in a group with a Network Name resource, the group can be accessed by network clients as a failover cluster instance (formerly known as a virtual server). &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; IPv6 Address (clusres.dll): &lt;/STRONG&gt; The IPv6 Address resource type is used to manage Internet Protocol version 6 (IPv6) network addresses. When an IPv6 Address resource is included in a group with a Network Name resource, the group can be accessed by network clients as a failover cluster instance (formerly known as a virtual server). &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; IPv6 Tunnel Address (clusres2.dll): &lt;/STRONG&gt; The IPv6 Tunnel Address resource type is used to manage Internet Protocol version 6 (IPv6) network tunnel addresses. When an IPv6 Tunnel Address resource is included in a group with a Network Name resource, the group can be accessed by network clients as a failover cluster instance (formerly known as a virtual server). &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; iSCSI Target Server (wtclusres.dll): &lt;/STRONG&gt; Creates a highly available iSCSI Target Server that machines can connect to for drives. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Microsoft iSNS (isnsclusres.dll): &lt;/STRONG&gt; Manages an Internet Storage Name Service (iSNS) server. iSNS provides discovery services for Internet Small Computer System Interface (iSCSI) storage area networks. iSNS processes registration requests, deregistration requests, and queries from iSNS clients. We would recommend not using this resource type moving forward, as it is &lt;A href="https://docs.microsoft.com/windows-server/get-started-19/removed-features-19" target="_blank" rel="noopener"&gt; being removed &lt;/A&gt; from the product. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; MSMQ (mqclus.dll): &lt;/STRONG&gt; Message Queuing (MSMQ) technology enables applications running at different times to communicate across heterogeneous networks and systems that may be temporarily offline. Applications send messages to queues and read messages from queues. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; MSMQTriggers (mqtgclus.dll): &lt;/STRONG&gt; Message Queuing triggers allow you to associate the arrival of incoming messages at a destination queue with the functionality of one or more COM components or stand-alone executable programs. These triggers can be used to define business rules that are invoked when a message arrives at the queue, without any additional programming. Application developers no longer have to write infrastructure code to provide this kind of message-handling functionality. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Network File System (nfsres.dll): &lt;/STRONG&gt; An NFS cluster resource depends on one Network Name resource and can also depend on one or more disk resources in a resource group. For a given network name resource, there can be only one NFS resource in a resource group. The dependent disk resource hosts one or more NFS shared paths. The shares hosted on an NFS resource are scoped to the dependent network name resource. Shares scoped to one network name are not visible to clients that mount using other network names or node names residing on the same cluster. 
&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Network Name (clusres.dll): &lt;/STRONG&gt; The Network Name resource type is used to provide an alternate computer name for an entity that exists on a network. When included in a group with an IP Address resource, a Network Name resource provides an identity to the role, allowing the role to be accessed by network clients as a failover cluster instance. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Distributed Network Name (clusres.dll): &lt;/STRONG&gt; A Distributed Network Name (DNN) is a name in the cluster that does not use a clustered IP address.&amp;nbsp; It is a name that is published in DNS using the IP addresses of all the nodes in the cluster.&amp;nbsp; Client connectivity to this type of name relies on DNS round robin.&amp;nbsp; In Azure, this type of name can be used in lieu of an Internal Load Balancer (ILB) address.&amp;nbsp; The predominant usage of a Distributed Network Name is with a Scale-Out File Server (discussed next).&amp;nbsp; In Windows Server 2019, we added the ability for the Cluster Name Object (CNO) to use a DNN.&amp;nbsp; For more information on the CNO usage as a Distributed Network Name, please refer to the&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/t5/Failover-Clustering/Windows-Server-2019-Failover-Clustering-New-Features/ba-p/544029" target="_self"&gt;Windows Server 2019 Failover Clustering New Features&lt;/A&gt; blog.&lt;/P&gt;
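&lt;P&gt;As a sketch of how these name and address resources pair up in practice, the following creates an IP Address and Network Name resource in one group and ties them together with a dependency. The group name, DNS name, and addresses are illustrative, and the private property names used (Address, SubnetMask, DnsName) are assumptions based on the common in-box resource parameters.&lt;/P&gt;
&lt;PRE&gt;
# Hedged sketch: give a role a client access point by pairing an IP Address
# resource with a Network Name resource; all names and addresses are illustrative.
Add-ClusterResource -Name "MyRole IP" -ResourceType "IP Address" -Group "MyRole" |
    Set-ClusterParameter -Multiple @{ Address = "192.168.1.50"; SubnetMask = "255.255.255.0" }
Add-ClusterResource -Name "MyRole Name" -ResourceType "Network Name" -Group "MyRole" |
    Set-ClusterParameter -Name DnsName -Value "MyRole"

# The name should come online only after its IP address does.
Add-ClusterResourceDependency -Resource "MyRole Name" -Provider "MyRole IP"
Start-ClusterGroup -Name "MyRole"

# By contrast, a Distributed Network Name needs no clustered IP address; in
# Windows Server 2019 the cluster name itself can be created as one.
New-Cluster -Name MyCluster -Node Node1, Node2 -ManagementPointNetworkType Distributed
&lt;/PRE&gt;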
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Scale Out File Server (clusres.dll): &lt;/STRONG&gt; A Scale Out File Server (SOFS) is a clustered file share that can be accessed by any of the nodes.&amp;nbsp; It uses the Distributed Network Name as the client access point and does not use a clustered IP address.&amp;nbsp; The Distributed Network Name is discussed above.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Physical Disk (clusres.dll): &lt;/STRONG&gt; The Physical Disk resource type manages a disk on a shared bus connected to two or more cluster nodes. Some groups may contain one or more Physical Disk resources as dependencies for other resources in the group. On a &lt;A href="https://aka.ms/S2D" target="_blank" rel="noopener"&gt; Storage Spaces Direct &lt;/A&gt; cluster, the disks are local to each of the nodes. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Hyper-V Network Virtualization Provider Address (provideraddressresource.dll): &lt;/STRONG&gt; The IP address assigned by the hosting provider or the datacenter administrators based on their physical network infrastructure. The provider address (PA) appears in the packets on the network that are exchanged with the server running Hyper-V that is hosting network-virtualized virtual machine(s). The PA is visible on the physical network, but not to the virtual machines. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Storage Pool (clusres.dll): &lt;/STRONG&gt; Manages a storage pool resource.&amp;nbsp; It allows for the creation and deletion of Storage Spaces virtual disks. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Storage QoS Policy Manager (clusres.dll): &lt;/STRONG&gt; A resource type for the Policy Manager, which tracks the performance of storage resources allocated to individual highly available virtual machines. It monitors activity to help ensure storage is used fairly, within the I/O performance limits established by any configured policies. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Storage Replica (wvrres.dll): &lt;/STRONG&gt; Storage Replica is Windows Server technology that enables replication of volumes between servers or clusters for disaster recovery. This resource type enables you to create stretch failover clusters that span two sites, with all nodes staying in sync. A stretch cluster allows configuration of computers and storage in a single cluster, where some nodes share one set of asymmetric storage and some nodes share another, then synchronously or asynchronously replicate with site awareness. By stretching clusters, workloads can be run in multiple datacenters for quicker data access by local-proximity users and applications, as well as better load distribution and use of compute resources. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Task Scheduler (clusres.dll): &lt;/STRONG&gt; Task Scheduler is a resource that is tied to tasks you wish to run against the cluster. Clustered tasks are not created or shown in Failover Cluster Manager; to create or view a clustered scheduled task, you need to use PowerShell. A short usage sketch follows the list of cmdlets below. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/powershell/module/scheduledtasks/set-clusteredscheduledtask" target="_blank" rel="noopener"&gt; Set-ClusteredScheduledTask &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/powershell/module/scheduledtasks/register-clusteredscheduledtask" target="_blank" rel="noopener"&gt; Register-ClusteredScheduledTask &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/en-us/powershell/module/scheduledtasks/get-clusteredscheduledtask" target="_blank" rel="noopener"&gt; Get-ClusteredScheduledTask &lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.microsoft.com/powershell/module/scheduledtasks/Unregister-ClusteredScheduledTask" target="_blank" rel="noopener"&gt; Unregister-ClusteredScheduledTask &lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
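&lt;P&gt;Here is the promised sketch of the workflow; the task name, schedule, and script path are illustrative.&lt;/P&gt;
&lt;PRE&gt;
# Hedged sketch: register a clustered scheduled task that runs on all nodes.
$action  = New-ScheduledTaskAction -Execute "C:\Scripts\report.cmd"
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ClusteredScheduledTask -TaskName "ClusterReport" -TaskType ClusterWide -Action $action -Trigger $trigger

# Clustered tasks are visible only through PowerShell, not Failover Cluster Manager.
Get-ClusteredScheduledTask
&lt;/PRE&gt;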
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Virtual Machine (vmclusres.dll): &lt;/STRONG&gt; The Virtual Machine resource type is used to control the state of a virtual machine (VM). The following table shows the mapping between the state of the VM (indicated by the EnabledState property of the &lt;A href="https://technet.microsoft.com/cc136822(v=vs.90)" target="_blank" rel="noopener"&gt; Msvm_ComputerSystem &lt;/A&gt; instance representing the VM) and the state of the Virtual Machine resource (indicated by the State property of the &lt;A href="https://technet.microsoft.com/aa371464(v=vs.90)" target="_blank" rel="noopener"&gt; MSCluster_Resource &lt;/A&gt; class or the return value of the &lt;A href="https://technet.microsoft.com/aa369627(v=vs.90)" target="_blank" rel="noopener"&gt; GetClusterResourceState &lt;/A&gt; function). &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;VM State&lt;/TD&gt;
&lt;TD&gt;Virtual Machine resource state&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Disabled&lt;/TD&gt;
&lt;TD&gt;3&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Offline&lt;/TD&gt;
&lt;TD&gt;3&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Suspended&lt;/TD&gt;
&lt;TD&gt;32769&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Starting&lt;/TD&gt;
&lt;TD&gt;32770&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Online Pending&lt;/TD&gt;
&lt;TD&gt;129&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Online&lt;/TD&gt;
&lt;TD&gt;2&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Stopping&lt;/TD&gt;
&lt;TD&gt;32774&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Offline Pending&lt;/TD&gt;
&lt;TD&gt;130&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Saving&lt;/TD&gt;
&lt;TD&gt;32773&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Enabled&lt;/TD&gt;
&lt;TD&gt;2&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Paused&lt;/TD&gt;
&lt;TD&gt;32768&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Pausing&lt;/TD&gt;
&lt;TD&gt;32776&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Resuming&lt;/TD&gt;
&lt;TD&gt;32777&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
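&lt;P&gt;To see both sides of this mapping in practice, the FailoverClusters module reports the friendly resource state, while the raw numeric values in the table surface through the cluster WMI provider. A hedged sketch:&lt;/P&gt;
&lt;PRE&gt;
# Friendly resource state (Online, Offline, OnlinePending, ...) per VM resource.
Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq "Virtual Machine" } |
    Select-Object Name, State, OwnerNode

# Raw numeric State values (2, 3, 129, 130, 32768-32777) via the WMI provider.
Get-CimInstance -Namespace root\MSCluster -ClassName MSCluster_Resource |
    Where-Object { $_.Type -eq "Virtual Machine" } | Select-Object Name, State
&lt;/PRE&gt;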
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt; Virtual Machine Cluster WMI (vmclusres.dll): &lt;/STRONG&gt; The Virtual Machine Cluster WMI resource type is used when virtual machine grouping (also known as&amp;nbsp;virtual machine sets)&amp;nbsp;has been configured. By grouping virtual machines together, managing the group is much easier than managing each virtual machine individually. VM groups enable checkpoints, backup, and replication of VMs that form a guest cluster and use a Shared VHDX. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Virtual Machine Configuration (vmclusres.dll): &lt;/STRONG&gt; The Virtual Machine Configuration resource type is used to control the state of a virtual machine configuration. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Virtual Machine Replication Broker (vmclusres.dll): &lt;/STRONG&gt; The replication broker is a prerequisite if you are replicating clusters using Hyper-V Replica. It acts as the point of contact for any replication requests and can query the associated cluster database to decide which node is the correct one to which VM-specific events, such as Live Migration requests, should be redirected. The broker also handles authentication requests on behalf of the VMs. A node can be added to or removed from a cluster at any point without the need to reconfigure replication, as communication between the primary and recovery clusters is directed to the respective brokers. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Virtual Machine Replication Coordinator (vmclusres.dll): &lt;/STRONG&gt; The coordinator comes into the picture when the concept of a “collection” is used in Hyper-V Replica. Collections were introduced in Windows Server 2016 and are a prerequisite for some of the newer features, for example, shared virtual hard disks. When VMs are replicated as part of a collection, the replication broker coordinates actions and events that affect the VM group (for example, taking an application-consistent point-in-time snapshot, applying the replication settings, or modifying the replication interval) and propagates the change across all the VMs in the collection. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; WINS Service (clnetres.dll): &lt;/STRONG&gt; The WINS Service resource type supports the Windows Internet Name Service (WINS) as a cluster resource. There can be only one instance of a resource of this type in the cluster; in other words, a cluster can support only one WINS Service. Windows Internet Name Service (WINS) is a legacy computer name registration and resolution service that maps computer NetBIOS names to IP addresses. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Windows Server 2016 only &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Cross Cluster Dependency Orchestrator (clusres.dll): &lt;/STRONG&gt; This is a resource type that you can ignore; it does not do anything. It was intended to be a new feature, but the feature never came to fruition and the resource type was not removed. It has been removed in Windows Server 2019. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Windows Server 2019 only &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; SDDC Management (sddcres.dll): &lt;/STRONG&gt; SDDC Management is installed when the cluster is enabled for Storage Spaces Direct. It is the management API that Windows Admin Center uses to connect to and manage your Storage Spaces Direct deployment. It is an in-box resource type in Windows Server 2019 and is a download and manual addition for Windows Server 2016. 
For information regarding this, please refer to the &lt;A href="https://docs.microsoft.com/windows-server/manage/windows-admin-center/use/manage-hyper-converged" target="_blank" rel="noopener"&gt; Manage Hyper-Converged Infrastructure with Windows Admin Center &lt;/A&gt; document. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Scaleout Worker (scaleout.dll): &lt;/STRONG&gt; This is used for Cluster Sets.&amp;nbsp; In a Cluster Set deployment, the CS-Master interacts with a new cluster resource on the member clusters called the “Cluster Set Worker” (CS-Worker). The CS-Worker acts as the only liaison on the cluster to orchestrate the local cluster interactions requested by the CS-Master. Examples of such interactions include VM placement and cluster-local resource inventorying. There is only one CS-Worker instance for each of the member clusters in a Cluster Set. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Scaleout Master (scaleout.dll): &lt;/STRONG&gt; This is also used for Cluster Sets. In a Cluster Set, the communication between the member clusters is loosely coupled and is coordinated by a new cluster resource called the “Cluster Set Master” (CS-Master). Like any other cluster resource, the CS-Master is highly available and resilient to individual member cluster failures and/or management cluster node failures. Through a new Cluster Set WMI provider, the CS-Master provides the management endpoint for all Cluster Set manageability interactions. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Infrastructure File Server (clusres.dll): &lt;/STRONG&gt; In hyper-converged configurations, an Infrastructure SOFS allows an SMB client (the Hyper-V host) to communicate with guaranteed Continuous Availability (CA) to the Infrastructure SOFS SMB server. This hyper-converged SMB loopback CA is achieved via VMs accessing their virtual disk (VHDX) files, where the owning VM identity is forwarded between the client and server. This identity forwarding allows ACL-ing VHDX files just as in standard hyper-converged cluster configurations. There can be at most one Infrastructure SOFS cluster role on a failover cluster. Each CSV volume created in the failover cluster automatically triggers the creation of an SMB share with an auto-generated name based on the CSV volume name. An administrator cannot directly create or modify SMB shares under an Infrastructure SOFS role, other than via CSV volume create/modify operations. This role is commonly used with Cluster Sets. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks &lt;BR /&gt;John Marlin &lt;BR /&gt;Senior Program Manager &lt;BR /&gt;High Availability and Storage &lt;BR /&gt;&lt;BR /&gt;Follow me on Twitter &lt;A href="https://twitter.com/JohnMarlin_MSFT" target="_blank" rel="noopener"&gt; @JohnMarlin_MSFT &lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 03 Dec 2019 21:16:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/windows-server-2016-2019-cluster-resource-resource-types/ba-p/372163</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-12-03T21:16:08Z</dc:date>
    </item>
    <item>
      <title>Microsoft Ignite 2018 Clustering Sessions available</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/microsoft-ignite-2018-clustering-sessions-available/ba-p/372161</link>
      <description>&lt;HTML&gt;
 &lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;
  &lt;STRONG&gt;
   First published on MSDN on Nov 01, 2018
  &lt;/STRONG&gt;
  &lt;BR /&gt;
  For those who attended Microsoft Ignite 2018 in Orlando, Florida, we thank you for making it another huge success.
  &lt;BR /&gt;
  &lt;BR /&gt;
  &lt;IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/90675i5D9E339C94BFA391" /&gt;
  &lt;BR /&gt;
  &lt;BR /&gt;
  So much fun was had by all.&amp;nbsp; We had the privilege of showing you what is new and coming in Windows Server 2019 with 700+ deep-dive sessions and 100+ workshops.
  &lt;BR /&gt;
  &lt;BR /&gt;
  You got the latest insights and skills from technology leaders and practitioners shaping the future of cloud, data, business intelligence, teamwork, and productivity.&amp;nbsp; You also immersed yourselves in the latest tools, tech, and experiences that matter, and heard the latest updates and ideas directly from the experts.&amp;nbsp; There were demos galore throughout all the sessions.
  &lt;BR /&gt;
  &lt;BR /&gt;
  Who can forget the demo showing previously unheard-of numbers running Windows Server 2019 Storage Spaces Direct and
  &lt;A href="https://www.intel.com" target="_blank"&gt;
   Intel's
  &lt;/A&gt;
  Optane DC Persistent Memory:
  &lt;BR /&gt;
  &lt;BR /&gt;
  &lt;A href="https://msdnshared.blob.core.windows.net/media/2018/11/Ignite2018-thanks.png" target="_blank"&gt;
  &lt;/A&gt;
  &lt;IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/90676i87E47C48B629EB2A" /&gt;
  &lt;A href="https://msdnshared.blob.core.windows.net/media/2018/11/Ignite2018-thanks.png" target="_blank"&gt;
  &lt;/A&gt;
  &lt;BR /&gt;
  &lt;BR /&gt;
  Or, the storage limit increase to 4 petabytes.&amp;nbsp; We are not just saying it because it’s a big number; we showed it with help from our friends at
  &lt;A href="https://qct.io/" target="_blank"&gt;
   Quanta Cloud Technology
  &lt;/A&gt;
  ,
  &lt;A href="https://www.seagate.com/" target="_blank"&gt;
   Seagate
  &lt;/A&gt;
  , and
  &lt;A href="https://www.samsung.com" target="_blank"&gt;
   Samsung
  &lt;/A&gt;
  .
  &lt;BR /&gt;
  &lt;BR /&gt;
  &lt;IMG src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/90677iF666CC6A48A33028" /&gt;
  &lt;BR /&gt;
  &lt;BR /&gt;
  In case you missed Ignite, attended but missed a session, or wish to view the sessions again, here is the link to all the sessions, available for your viewing pleasure both on the Microsoft Ignite pages and on YouTube.
  &lt;BR /&gt;
  &lt;BR /&gt;
  To kick it all off, here is Satya Nadella's keynote from Microsoft Ignite 2018.
  &lt;BR /&gt;
  &lt;P&gt;
   Vision Keynote
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/66931?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=XBpibC4s0bc" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Satya Nadella - Chief Executive Officer of Microsoft
  &lt;/P&gt;
  &lt;BR /&gt;
  Since this is the Failover Clustering blog, I wanted to call out these sessions specifically to what we are doing in the hyper-converged infrastructure (HCI) space.
  &lt;BR /&gt;
  &lt;P&gt;
   BRK2035 - Windows Server 2019: What’s new and what's next
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/64687?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=NAMkIR0G9IM" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Erin Chapple, Vijay Kumar
   &lt;BR /&gt;
   Windows Server is a key component in Microsoft's hybrid and on-premises strategy and in this session, hear what's new in Windows Server 2019. Join us as we discuss the product roadmap, Semi-Annual Channel, and demo some exciting new features.
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   BRK2241 - Windows Server 2019 deep dive
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/65977?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=LrCJlMviLrE" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Jeff Woolsey
   &lt;BR /&gt;
   Hybrid at its core. Secure by design. With cloud application innovation and hyper-converged infrastructure built into the platform, backed by the world’s most trusted cloud, Azure, Microsoft presents Windows Server 2019. In this session Jeff Woolsey - Principal Program Manager - dives into the details of what makes Windows Server 2019 an exciting platform for IT pros and developers looking into modernizing their infrastructure and applications.
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   BRK2232 - Jumpstart your hyper-converged infrastructure deployment with Windows Server
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/65882" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=G4GEWni4v18" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Elden Christensen, Steven Ekren
   &lt;BR /&gt;
   The time is now to adopt hyper-converged infrastructure and Storage Spaces Direct. Where to start? This session covers design considerations and best practices, how to choose and procure the best hardware, sizing and planning, deployment, and how to validate your cluster is ready for showtime. Get tips and tricks directly from the experts! Applies to Windows Server 2016 and Windows Server 2019.
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   BRK2036 - From Hyper-V to hyper-converged infrastructure with Windows Admin Center
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/64692?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=xnfO0R4tWvU" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Cosmos Darwin, Daniel Lee
   &lt;BR /&gt;
   Discover how Windows Admin Center (Formerly Project "Honolulu") makes it easier than ever to manage and monitor Hyper-V. It’s quick to deploy, there’s no additional license, and it’s built from years of feedback – this is YOUR new dashboard! Ready to go hyper-converged? New features like Storage Spaces Direct and Software-Defined Networking (SDN) are built right in, so you get an integrated, seamless experience ready for the future of the software-defined datacenter.
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   BRK2231 - Be an IT hero with Storage Spaces Direct in Windows Server 2019
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/65880?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=5kaUiW3qo30" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Cosmos Darwin, Adi Agashe
   &lt;BR /&gt;
   The virtualization wave of datacenter modernization, consolidation, and savings made you an IT hero. Now, the next big wave is here: Hyper-Converged Infrastructure, powered by software-defined storage! Storage Spaces Direct is purpose-built software-defined storage for Hyper-V. Save money, accelerate IO performance, and simplify your infrastructure, from the datacenter to the edge. This packed technical session covers everything that’s new for Storage Spaces Direct in Windows Server 2019.
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   BRK2233 - Get ready for Windows Server 2008 and 2008 R2 end of support
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/65884" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=XOr0MqlbLWQ" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Ned Pyle, Jeff Woolsey, Sue Hartford
   &lt;BR /&gt;
   Windows Server 2008 and 2008 R2 were great operating systems at the time, but times have changed. Cyberattacks are commonplace, and you don’t want to get caught running unsupported software. End of support for Windows Server 2008 and 2008 R2 means no more security updates starting on January 14, 2020. Join us for a demo-intensive session to learn about your options for upgrading to the latest OS. Or consider migrating 2008 to Microsoft Azure where you can get three more years of extended security updates at no additional charge.
  &lt;/P&gt;
  &lt;BR /&gt;
  We even had a few of our Microsoft MVP's jump in and deliver some theater sessions.
  &lt;BR /&gt;
  &lt;P&gt;
   THR3127 - Cluster Sets in Windows Server 2019: What is it and why should I use it?
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/66737?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=ZY0VRz78I_8" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Carsten Rachfahl, Microsoft MVP
   &lt;BR /&gt;
   Would you like to have an Azure-like availability set and fault domain across multiple clusters in your private cloud? Do you need to have more than 16 nodes in an hyper-converged infrastructure cluster or want multiple 4-node HCI clusters to behave like one? Then you definitely want to attend this session and learn about Cluster Sets - a new, amazing feature in Windows Server 2019 to solve these problems.
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   THR2233 - What is the Windows Server Software Defined (WSSD) program and why does it matter?
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/66733?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=SKJzrsdCaP8" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Carsten Rachfahl, Microsoft MVP
   &lt;BR /&gt;
    The Windows Server Software Defined (WSSD) program allows vendors to build and offer a tested, end-to-end hyper-converged infrastructure solution. After implementing more than 100 Storage Spaces Direct projects, Carsten thinks this is more important than ever. Why? In this session, learn the reasons, and get help choosing the right solution for you!
  &lt;/P&gt;
  &lt;BR /&gt;
  &lt;P&gt;
   THR3137 - The case of the shrinking data: Data Deduplication in Windows Server 2019
   &lt;BR /&gt;
   &lt;A href="https://myignite.techcommunity.microsoft.com/sessions/66732?source=sessions" target="_blank"&gt;
    Ignite
   &lt;/A&gt;
   ,
   &lt;A href="https://www.youtube.com/watch?v=pOzvLBhL9hA" target="_blank"&gt;
    YouTube
   &lt;/A&gt;
   &lt;BR /&gt;
   Dave Kawula, Microsoft MVP
   &lt;BR /&gt;
    One of the most requested features for Storage Spaces Direct was ReFS with Data Deduplication. This feature was released over a year ago, but only in the Semi-Annual Channel release, which did not include support for Storage Spaces Direct. The IT community has waited patiently, and the time has finally come with Windows Server 2019. This release adds full support for ReFS Data Deduplication to Storage Spaces Direct. What does this mean for you? How about more than 80% space savings on your VMs, backups, and ISO repositories, all running on Cluster Shared Volumes with Storage Spaces Direct. In this session, learn how to set up, configure, and test Data Deduplication with ReFS, based on Dave's years of knowledge working with Microsoft storage.
  &lt;/P&gt;
  &lt;BR /&gt;
  These are just the tip of the iceberg of the sessions available to you. We hope you enjoy these sessions and that you had as great a time at Ignite 2018 as we did.
  &lt;BR /&gt;
  &lt;BR /&gt;
  I leave you now with two other huge announcements.
  &lt;BR /&gt;
  &lt;BR /&gt;
  First, Ignite will be back in Orlando, Florida for
  &lt;A href="https://www.microsoft.com/ignite" target="_blank"&gt;
   Microsoft Ignite 2019
  &lt;/A&gt;
  . The dates are set for November 4-8, 2019 at the
  &lt;A href="http://www.occc.net/" target="_blank"&gt;
   Orange County Convention Center
  &lt;/A&gt;
  . You can pre-register today!!
  &lt;BR /&gt;
  &lt;BR /&gt;
  Second, Ignite 2018 is hitting the road and going global with "
  &lt;A href="https://www.microsoft.com/ignite-the-tour" target="_blank"&gt;
   Microsoft Ignite | The Tour
  &lt;/A&gt;
  ". Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Join us for two days of community-building and hands-on learning.
  &lt;BR /&gt;
  &lt;BR /&gt;
  We will be heading to places such as:
  &lt;BR /&gt;
  &lt;UL&gt;
   &lt;BR /&gt;
   &lt;LI&gt;
    Toronto, Canada
   &lt;/LI&gt;
   &lt;BR /&gt;
   &lt;LI&gt;
    Sydney, Australia
   &lt;/LI&gt;
   &lt;BR /&gt;
   &lt;LI&gt;
    Berlin, Germany
   &lt;/LI&gt;
   &lt;BR /&gt;
   &lt;LI&gt;
    Amsterdam, The Netherlands
   &lt;/LI&gt;
   &lt;BR /&gt;
  &lt;/UL&gt;
  &lt;BR /&gt;
  And these are just a few of the places we are going. Head to the "Microsoft Ignite | The Tour" page and find the city near you. Oh, and did I mention it is free!!!
  &lt;BR /&gt;
  &lt;BR /&gt;
  Thanks
  &lt;BR /&gt;
  John Marlin
  &lt;BR /&gt;
  Senior Program Manager
  &lt;BR /&gt;
  High Availability and Storage
  &lt;BR /&gt;
  &lt;BR /&gt;
  Follow me on Twitter
  &lt;A href="https://twitter.com/JohnMarlin_MSFT" target="_blank"&gt;
   @JohnMarlin_MSFT
  &lt;/A&gt;
 
&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 15 Mar 2019 22:17:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/microsoft-ignite-2018-clustering-sessions-available/ba-p/372161</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-03-15T22:17:08Z</dc:date>
    </item>
    <item>
      <title>Cluster Sets in Windows Server 2019 - Hyperscale for Hyperconverged !!</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/cluster-sets-in-windows-server-2019-hyperscale-for/ba-p/372157</link>
      <description>&lt;P&gt;Cluster Sets is a new feature in Windows Server 2019 that was first introduced at &lt;A href="https://channel9.msdn.com/Events/Ignite/Microsoft-Ignite-Orlando-2017/BRK3324" target="_blank" rel="noopener"&gt; Ignite 2017 &lt;/A&gt; .&amp;nbsp; Cluster Sets is the new cloud scale-out technology that increases cluster node count in a single Software Defined Data Center (SDDC) cloud by orders of magnitude. A Cluster Set is a loosely-coupled federated&amp;nbsp;grouping of multiple Failover Clusters: compute, storage or hyper-converged.&amp;nbsp;&amp;nbsp;&amp;nbsp; The&amp;nbsp;Cluster Sets technology enables virtual machine fluidity across member clusters within a cluster set and a unified storage namespace across the set in support of virtual machine fluidity. &lt;BR /&gt;&lt;BR /&gt;Cluster Sets&amp;nbsp;gives you the benefit of hyperscale while continuing to maintain great resiliency.&amp;nbsp; So in more clearer words, you are pseudo clustering clusters together while not putting all your eggs in one basket.&amp;nbsp; You can now have multiple baskets to maintain greater flexibility without sacrificing resiliency. &lt;BR /&gt;&lt;BR /&gt;While preserving existing Failover Cluster management experiences on member clusters, a Cluster Set instance additionally offers key use cases, such as lifecycle management. The Windows Server Preview Scenario Evaluation Guide for Cluster Sets provides you the necessary background information along with step-by-step instructions to evaluate cluster sets technology using PowerShell. &lt;BR /&gt;&lt;BR /&gt;Here is a video providing a&amp;nbsp;brief overview of what Cluster Sets is and can do.&lt;/P&gt;
&lt;P&gt;&lt;IFRAME src="https://www.youtube.com/embed/7eWGnJpf4Fk" width="560" height="315" frameborder="0"&gt;
   &lt;/IFRAME&gt;&lt;/P&gt;
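&lt;P&gt;To give a feel for what the evaluation guide walks through, here is a hedged sketch of standing up a cluster set with PowerShell. The cluster, namespace, and SOFS names are illustrative, and the cmdlet shapes follow the preview documentation.&lt;/P&gt;
&lt;PRE&gt;
# Hedged sketch based on the preview evaluation guide; all names are illustrative.
# Create the Cluster Set Master on the management cluster and establish the
# unified storage namespace root.
New-ClusterSet -Name CSMASTER -NamespaceRoot SOFS-CLUSTERSET -CimSession SET-CLUSTER

# Join member clusters; each member gets an Infrastructure SOFS for the namespace.
Add-ClusterSetMember -ClusterName CLUSTER1 -CimSession CSMASTER -InfraSOFSName SOFS-CLUSTER1
Add-ClusterSetMember -ClusterName CLUSTER2 -CimSession CSMASTER -InfraSOFSName SOFS-CLUSTER2

# List the member clusters in the set.
Get-ClusterSetMember -CimSession CSMASTER
&lt;/PRE&gt;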
&lt;P&gt;&lt;BR /&gt;The evaluation guide, which covers Cluster Sets in more depth along with information on how to set it up, is listed on the &lt;A href="https://docs.microsoft.com" target="_blank" rel="noopener"&gt; Microsoft Docs &lt;/A&gt; page where this and numerous other Microsoft products are covered.&amp;nbsp; The quick link to the Cluster Sets page is &lt;A href="https://aka.ms/Cluster_Sets" target="_blank" rel="noopener"&gt; https://aka.ms/Cluster_Sets &lt;/A&gt;. &lt;BR /&gt;&lt;BR /&gt;Finally, there is a &lt;A href="https://github.com/Microsoft/WSLab/tree/master/Scenarios/S2D%20and%20Cluster%20Sets" target="_blank" rel="noopener"&gt; GitHub lab scenario &lt;/A&gt; with additional instructions, where you can set this up on your own and try it out. &lt;BR /&gt;&lt;BR /&gt;We hope that you try it out and provide feedback.&amp;nbsp; Feedback can be given in two ways: &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The Feedback Hub on Windows 10&lt;/LI&gt;
&lt;LI&gt;Email &lt;A href="mailto:csrequests@microsoft.com" target="_blank" rel="noopener"&gt; Cluster Sets Feedback &lt;/A&gt;.&amp;nbsp; This alias has been set up to receive feedback only.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Thanks, &lt;BR /&gt;John Marlin &lt;BR /&gt;Senior Program Manager &lt;BR /&gt;High Availability and Storage &lt;BR /&gt;&lt;BR /&gt;Follow me on Twitter @JohnMarlin_MSFT &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Apr 2019 17:54:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/cluster-sets-in-windows-server-2019-hyperscale-for/ba-p/372157</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-04-12T17:54:20Z</dc:date>
    </item>
    <item>
      <title>Scale-Out File Server Improvements in Windows Server 2019</title>
      <link>https://techcommunity.microsoft.com/t5/failover-clustering/scale-out-file-server-improvements-in-windows-server-2019/ba-p/372156</link>
      <description>&lt;P&gt;&lt;STRONG&gt;SMB Connections move on connect &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;Scale-Out File Server (SOFS) relies on DNS round robin for inbound connections sent to cluster nodes.&amp;nbsp; When using Storage Spaces on Windows Server 2016 and older, this behavior can be inefficient: if the connection is routed to a cluster node that is not the owner of the Cluster Shared Volume (aka the coordinator node), all data redirects over the network to another node before returning to the client. The SMB Witness service detects this lack of direct I/O and moves the connection to a coordinator.&amp;nbsp; This can lead to delays. &lt;BR /&gt;&lt;BR /&gt;In Windows Server 2019, we are much more efficient.&amp;nbsp; The SMB Server service determines if direct I/O on the volume is possible.&amp;nbsp; If direct I/O is possible, it passes the connection on.&amp;nbsp; If it is redirected I/O, it will move the connection to the coordinator before I/O starts.&amp;nbsp; Synchronous client redirection required changes in the SMB client, so only Windows Server 2019 and Windows 10 Fall 2017 clients can use this new functionality when talking to a Windows 2019 Failover Cluster. &amp;nbsp;SMB clients from older OS versions will&amp;nbsp;continue relying upon the SMB Witness to move to a more optimal server. &lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt; &lt;BR /&gt;As a note here, I wanted to point out when a move would and would not occur in a stretch scenario and it will depend on the storage you are using.&amp;nbsp; So for my example, my Scale-Out File Server is running on NodeA in SiteA.&amp;nbsp;&amp;nbsp;All node's IP Addresses are registered in DNS and it is round robin on where a client connects.&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have a stretch Failover Cluster and the storage presents itself as symmetric (meaning all nodes have access to the drives), the client connection will be moved to SiteA as described above.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;But let's say the storage is SAN-based and asymmetric (meaning each site has its own SAN storage and there is replication between them).&amp;nbsp; This is the process that will occur:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;A client connection is sent to a node in SiteB.&lt;/LI&gt;
&lt;LI&gt;The node in SiteB will retain that connection.&lt;/LI&gt;
&lt;LI&gt;All data requests will be redirected over the CSV network to SiteA.&lt;/LI&gt;
&lt;LI&gt;Data is retrieved and sent back over the CSV network to the node in SiteB.&lt;/LI&gt;
&lt;LI&gt;The node in SiteB then sends the data to the client.&lt;/LI&gt;
&lt;LI&gt;Rinse, repeat for all other data requests.&lt;/LI&gt;
&lt;/OL&gt;
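&lt;P&gt;To observe or influence these connection moves, the SMB Witness cmdlets can show which clients are registered and move one manually. A hedged sketch; the client and node names are illustrative:&lt;/P&gt;
&lt;PRE&gt;
# Run on a cluster node to see SMB clients registered with the Witness service
# and which node currently serves them.
Get-SmbWitnessClient

# Manually move a client's SMB connection to another node.
Move-SmbWitnessClient -ClientName Client1 -DestinationNode NodeA
&lt;/PRE&gt;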
&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Infrastructure Scale-Out File Server &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;There is a new Scale-Out File Server role in Windows Server 2019 called Infrastructure File Server.&amp;nbsp; When you create an Infrastructure File Server, it automatically creates a single namespace share for the CSV drive (i.e. \\InfraSOFSName\Volume1, etc.).&amp;nbsp; In hyper-converged configurations, an Infrastructure SOFS allows an SMB client (the Hyper-V host) to communicate with guaranteed Continuous Availability (CA) to the Infrastructure SOFS SMB server.&amp;nbsp; There can be at most one Infrastructure SOFS cluster role on a failover cluster. &lt;BR /&gt;&lt;BR /&gt;To create the Infrastructure SOFS, you need to use PowerShell.&amp;nbsp; For example: &lt;BR /&gt;Add-ClusterScaleOutFileServerRole -Cluster MyCluster -Infrastructure -Name InfraSOFSName &lt;BR /&gt;&lt;img /&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt; &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; SMB Loopback &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;An enhancement was made to Server Message Block (SMB) so that it works properly with SMB local loopback to itself, which was previously not supported.&amp;nbsp; This hyper-converged SMB loopback CA is achieved via virtual machines accessing their virtual disk (VHDX) files, where the owning VM identity is forwarded between the client and server. &lt;BR /&gt;&lt;BR /&gt;&lt;img /&gt; &lt;BR /&gt;&lt;BR /&gt;Cluster Sets takes advantage of this role: the path to the VHD/VHDX is placed as \\InfraSOFSName\Volume1, and this path can then be utilized by the virtual machine whether it is local or remote. &lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt; Identity Tunneling &lt;/STRONG&gt; &lt;BR /&gt;&lt;BR /&gt;In Windows Server 2016, if Hyper-V virtual machines are hosted on a SOFS share, you must grant the machine accounts of the Hyper-V compute nodes permission to access the VHD/VHDX files.&amp;nbsp; If the virtual machines and the VHD/VHDX files are running on the same cluster, the user must also have rights.&amp;nbsp; This can make management difficult, as two sets of permissions are needed. &lt;BR /&gt;&lt;BR /&gt;In Windows Server 2019 when using SOFS, we now have “identity tunneling” on Infrastructure shares. When you access an Infrastructure share from the same cluster or Cluster Set, the application token is serialized and tunneled to the server, and VM disk access is done using that token. This works even if your identity is Local System, a service, or a&amp;nbsp;virtual machine&amp;nbsp;account. &lt;BR /&gt;&lt;BR /&gt;Thanks, &lt;BR /&gt;John Marlin &lt;BR /&gt;Senior Program Manager &lt;BR /&gt;High Availability and Storage &lt;BR /&gt;&lt;BR /&gt;Follow me on Twitter @JohnMarlin_MSFT&lt;/P&gt;</description>
      <pubDate>Tue, 10 Sep 2019 23:39:08 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/failover-clustering/scale-out-file-server-improvements-in-windows-server-2019/ba-p/372156</guid>
      <dc:creator>John Marlin</dc:creator>
      <dc:date>2019-09-10T23:39:08Z</dc:date>
    </item>
  </channel>
</rss>

