MAILBAG: Should I use the same network adapters for all interfaces on my cluster?
Published Sep 19 2018 03:33 PM

First published on TechNet on Feb 21, 2013

 

Not sure where February went, but it sure is flying by in a hurry.  During the month I was given another interesting question to answer.

Question

When building a cluster, is it okay to use the same model network adapters for all interfaces on the same node?

 

Thoughts regarding the question

 

Technically, the answer to this question is that it is perfectly fine to use the same model of network adapter for all interfaces in the cluster.  Modern clusters (Windows Server 2008 and newer) need to pass validation in order to be supported...and to avoid issues.  Assuming that validation passes with the network adapters you've chosen, you should be good to go.
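
If you want to see how your chosen network adapters fare, you can run the validation tests from PowerShell with the FailoverClusters module. This is just a quick sketch; the node and cluster names are placeholders for your own.

    # Run the full validation report against the nodes that will form the cluster
    Test-Cluster -Node "Node1","Node2"

    # Or re-run just the network tests against an existing cluster
    Test-Cluster -Cluster "Cluster1" -Include "Network"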

 

However, my conservative nature tends to want to take this a step further...not just what is supported, but what might be better and supported.   The validation process tells you that things are working as expected.  But what about later if the single driver that services all network adapters in the system malfunctions and prevents any communication?  A failover cluster may experience a failover that might otherwise be prevented if communication were possible through at least one network interface.

 

With a single driver servicing every network interface, it is possible for all communication to be impacted at once.  I've seen that scenario play out more times than I can count over the 16 years I've supported clusters at Microsoft.  Those issues typically disappear, as if by waving a magic wand, once the offending network driver receives the proper update.  Usually the adapters sharing the malfunctioning driver don't end up completely non-functional...but when there is a problem, it can trigger an otherwise unnecessary failover.   Why?  Because there are timing tolerances for node-to-node communication as well as for global updates, and an out-of-tolerance delay on every interface looks like a failure of all networks when that single driver flakes out.   As a result, the cluster has to try to recover from the situation to keep resources highly available.  Thus, relying on a single network driver leaves the cluster vulnerable.
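
Those timing tolerances are exposed as cluster common properties, so you can see how much heartbeat delay the cluster will absorb before it treats a network (or node) as failed. A quick PowerShell sketch; defaults and sensible values vary by OS version, so treat any change with care.

    # Delay = milliseconds between heartbeats; Threshold = missed heartbeats tolerated
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold

    # Example only: tolerate a few more missed heartbeats on the same subnet
    (Get-Cluster).SameSubnetThreshold = 10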

 

When I build a cluster, or when someone asks me about building one, I suggest using slightly different models of network adapters within the same server for the public and private networks.  They can even be from the same manufacturer, as long as they use different drivers.  This way, you're using two different adapter drivers: if one of them fails and renders its adapters useless, the other adapters in the system can still function on the remaining driver(s).   One could argue that other single-driver situations, for storage or other devices, are failure points as well.  When I've seen that happen, I/O operations typically get retried and access to storage may not completely fail.  Such incidents are often transient, recoverable, and noted in the event log.
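
A quick way to check whether your interfaces really are serviced by different drivers is to list each physical adapter with its driver file. The sketch below assumes Windows Server 2012 or later, where Get-NetAdapter is available; on older systems the same information is visible in Device Manager.

    # Two adapters that share the same DriverFileName share the same potential failure point
    Get-NetAdapter -Physical |
        Format-Table Name, InterfaceDescription, DriverFileName, DriverProvider, DriverVersion -AutoSize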

 

Failover cluster nodes need to be able to communicate, which is why I have the opinion I do about network adapters and their corresponding drivers.  It is important to remove as many single points of failure as possible, and when it comes to communication, network adapters are typically inexpensive.

 

Again…what I’m saying here is not a design requirement.  It’s just an opinion based on experience.

 

Circling back around to the original question: it is perfectly fine to use adapters in the same server that are all the same model and use the same driver.  However, it might be worth considering slightly different network adapters so that a single network adapter driver is not a single point of failure.  It is also wise to keep hardware configurations as consistent as possible among all failover cluster nodes.

 

Consistency of hardware across nodes is always a plus in my opinion.

 

Until next time! -Martin
