Teaming in Azure Stack HCI
Published Feb 03 2020 06:00 AM
Note: As we're deprecating the vSwitch attached to an LBFO team, this article introduces a new tool for converting your LBFO team to a SET team. To download this tool, run the following command or see the end of this article:

Install-Module Convert-LBFO2SET

Windows Server currently has two inbox teaming mechanisms with two very different purposes. In this article, we'll describe several reasons why you should use Switch Embedded Teaming (SET) for Azure Stack HCI scenarios, and we'll discuss several long-held teaming myths. We'd love to hear your feedback in the comments below. Let's get started!

 

In Windows Server 2012 we released LBFO (Load Balancing and Failover) as an inbox teaming mechanism, and many customers have used it to provide load balancing and failover between network adapters. Since then, the rise of software-defined storage and software-defined networking has exposed performance and compatibility challenges in the LBFO architecture (outlined in this article) that required a change in direction.

 

This new direction is called Switch Embedded Teaming (SET) and was introduced in Windows Server 2016. SET is available when Hyper-V is installed on any server OS (Windows Server 2016 and higher) or Windows 10 version 1809 and higher. You're not required to run virtual machines to use SET, but the Hyper-V role must be installed.
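As a quick illustration (a minimal sketch; the switch and adapter names below are placeholders for your environment), a SET team is simply a Hyper-V virtual switch created with embedded teaming enabled:

# Create a vSwitch whose embedded team members are NIC1 and NIC2
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Inspect the resulting team and its load-balancing settings
Get-VMSwitchTeam -Name "SETswitch"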

 

In summary, LBFO is our older teaming technology: it will not see future investment, it is not compatible with numerous advanced capabilities, and it has been exceeded in both performance and stability by our newer technology, SET. We'd like to discuss why you should move off LBFO for virtualized and cloud scenarios. Let's dig into each of these points a bit.

 

 

LBFO is our older teaming technology that will not see future investment

[Figure: the Hyper-V vSwitch bound to an LBFO team; the components below the vSwitch are part of NDIS]

With the intent to bring software-defined technologies like SDNv2 and containers to Windows Server, it became clear that we needed an alternative teaming solution, so we set off creating SET circa 2014. Simultaneously reaching feature parity and stability with LBFO took time; several early adopters of SET will remember some of those pains. However, SET's stability, performance, and features have now far surpassed LBFO.

 

All new features released since Windows Server 2016 (see below) were developed and tested with SET in mind – this includes all Azure Stack HCI solutions you may have purchased; Azure Stack HCI is not tested or certified with LBFO. This is largely due to development simplicity and testing; without diving too far into the details, LBFO teams adapters inside NDIS, which is a large and complex component – its roots date back to Windows 95 (though it has of course been updated considerably since then). If your system has a NIC, you're using NDIS. In the picture shown above, each component below the vSwitch is part of NDIS.

 

The size and complexity of the scenarios included in NDIS made for very complex testing requirements, which were only compounded by virtualized and software-defined technologies, and this considerably hampered feature innovation. You might think this is just a Microsoft problem, but it affects NIC vendors' driver development time and stability as well.

 

All in all, we're not focusing on LBFO much these days, particularly as software-defined Windows Server networking scenarios become more exotic with the rise of containers, software-defined networking, and much more. There's a faster, more stable, and more performant teaming solution: Switch Embedded Teaming.

LBFO is not compatible with several advanced capabilities

Here’s a smattering of scenarios and features that are supported with SET but NOT LBFO:

 

Windows Admin Center - WAC is the de facto management tool for Windows Server and Azure Stack HCI, with millions of nodes under management. You can create and manage a SET team for a single host, or deploy a SET team to multiple hosts with the new Cluster Creation UI we released at Microsoft Ignite this year to help you deploy Azure Stack HCI solutions (watch the session, try it out, and give us feedback).

 

LBFO is not available for configuration in Windows Admin Center.

 

RDMA Teaming - Only SET can team RDMA adapters. RDMA is used, for example, with Storage Spaces Direct (S2D), which requires a reliable, high-bandwidth, low-latency network connection between each node. High bandwidth? Low latency? That's RDMA's bag, so it is the recommended pattern with S2D. Reliability? That's SET's claim to fame, so these two are a logical pairing.
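As a rough sketch of that pairing (the switch and vNIC names are placeholders, and your RDMA NICs must already be configured appropriately for RoCE or iWARP), host vNICs for storage traffic can be created on a SET team and then enabled for RDMA:

# Add two host (management OS) vNICs on the SET switch for SMB/S2D traffic
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB02" -SwitchName "SETswitch"

# Enable RDMA on the resulting host vNICs and confirm
Enable-NetAdapterRdma -Name "vEthernet (SMB01)","vEthernet (SMB02)"
Get-NetAdapterRdma | Where-Object Name -like "vEthernet*"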

 

Guest RDMA: SET supports RDMA into a virtual machine. This doesn’t work with LBFO for two reasons:

  • RDMA adapters cannot be teamed with LBFO; this applies to both host adapters and virtual adapters.
  • RDMA uses SMB Multichannel, which requires multiple adapters to balance traffic across. Since you can't assign vNIC-to-pNIC affinity with LBFO, neither the SMB nor the non-SMB traffic can be made highly available.

Guest Teaming is a strange one; you could add multiple virtual NICs to a Hyper-V VM and, inside the VM, use LBFO to team those virtual NICs. However, because you cannot affinitize a virtual NIC (vNIC) to a physical NIC (pNIC), it's possible that both vNICs added to the VM are sending and receiving traffic through the same pNIC. If that pNIC fails, you lose both of your virtual NICs.

 

SET allows you to map each vNIC to a different pNIC so that they don't overlap, which means a guest team can survive an adapter outage.
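A minimal sketch of that mapping (adapter and VM names are placeholders): each vNIC is pinned to a specific team member with Set-VMNetworkAdapterTeamMapping, whether the vNIC lives in the host or inside a VM.

# Affinitize host SMB vNICs to different physical team members
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB01" -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB02" -PhysicalNetAdapterName "NIC2"

# The same mapping applies to a VM's vNICs for guest teaming scenarios
Set-VMNetworkAdapterTeamMapping -VMName "VM01" -VMNetworkAdapterName "TeamNIC1" -PhysicalNetAdapterName "NIC1"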

 

Microsoft Software Defined Networking (SDN) - SDN was first released in its modern form in Windows Server 2016 and requires a virtual switch extension called the Virtual Filtering Platform (VFP). VFP is the brains behind SDN, the same extension that runs our public cloud, Azure. VFP can only be added to a SET team.

 

This means that any of the SDN features (which are included with a Datacenter Edition license) like the Software Load Balancer, Gateways, Distributed Firewall (ACLs), and our modern network QoS capability are also unavailable if you’re using LBFO.

 

Container Networking - Containers rely on a service called the Host Network Service (HNS). HNS also leverages VFP, and as mentioned in the SDN section, VFP can only be added to a Switch Embedded Team (SET). For more information on container networking, please see this link.

 

Virtual Machine Multi-Queues - VMMQ is a critical performance feature for Azure Stack HCI. VMMQ allows you to assign multiple VMQs to the same virtual NIC. Without it, you rely on expensive software-spreading operations (the OS spreads packets across multiple CPUs without hardware assistance from the NIC), which greatly increases CPU utilization on the host and reduces the number of virtual machines you can run.

 

Moreover, if your vNIC doesn't get a VMQ, all of its traffic is processed by the default queue. With SET you can assign multiple VMQs to the default queue, which can then be shared as needed by any vNIC, allowing more VMs to get the bandwidth they need.
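As a hedged illustration (the VM name is a placeholder, and the exact parameter set depends on your OS version and NIC driver), VMMQ can be inspected and tuned per vNIC:

# Show the VMMQ-related settings currently applied to the VM's vNICs
Get-VMNetworkAdapter -VMName "VM01" | Format-List Name, Vmmq*

# Enable VMMQ and request multiple queue pairs for the vNIC
Set-VMNetworkAdapter -VMName "VM01" -VmmqEnabled $true -VmmqQueuePairs 4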

 

In this video, you can see the performance (throughput) benefits of Switch Embedded Teaming over LBFO. The video demonstrates a 2x throughput improvement with SET over LBFO, while consuming ~10% additional CPU (a result of the doubled throughput).

 

 

Dynamic VMMQ - Dynamic VMMQ (d.VMMQ) won't work with LBFO either. Dynamic VMMQ is an intelligent queue scheduling algorithm for VMMQ that recognizes when CPU cores are overworked by network traffic and automatically reassigns that network traffic processing to other cores, so your workloads (e.g. VMs, applications, etc.) can run without competing for processor time.

 

Here's an example of some of the benefits of Dynamic VMMQ. In the video, you can see the host spending CPU resources processing packets for a specific virtual NIC. When a competing workload begins on the system (which would prevent the virtual NIC from reaching maximum performance), we automatically tune the system by moving one of the workloads to an available processor.

 

 

RSC in the vSwitch - Receive Segment Coalescing (RSC) in the vSwitch is an acceleration that coalesces segments destined for the same virtual NIC into a larger segment.

 

Outbound network traffic is segmented to fit the MTU of the physical network (a default of ~1,500 bytes). Inbound traffic, however, can be coalesced into one big segment. That one big segment takes far less processing than many small segments, so once traffic is received by the host, we can combine the segments and deliver them to the vNIC all at once. SET is aware of RSC coalescing and supports this acceleration as of Windows Server 2019.
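If you want to check this on a host, here is a rough sketch for Windows Server 2019 (the -EnableSoftwareRsc parameter name is my recollection of the RSC-in-the-vSwitch knob and should be verified with Get-Help Set-VMSwitch on your build):

# Inspect the software RSC state on each vSwitch (enabled by default on Windows Server 2019)
Get-VMSwitch | Format-List Name, *Rsc*

# Explicitly enable (or disable with $false) RSC in the vSwitch for a specific switch
Set-VMSwitch -Name "SETswitch" -EnableSoftwareRsc $true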

 

We're continuing to improve this feature for even better performance in the next version of Windows Server and Azure Stack HCI by enabling RSC in the vSwitch to extend over the VMBus. In the video below, we show one VM sending traffic to another VM with the improved acceleration disabled - this uses only the original Windows Server 2019 RSC in the vSwitch capabilities.

 

Next we enable the Windows Server vNext improvements: throughput improves by ~17 Gbps while CPU utilization is reduced by approximately 12% (on a system with 20 cores). This type of traffic pattern is especially valuable for container scenarios where the containers reside on the same host.

 

 

LBFO has been exceeded in both performance and stability by SET

Note: Guest RDMA, RSC in the vSwitch, VMMQ, and Dynamic VMMQ belong in this category as well.

 

Certified Azure Stack and Azure Stack HCI solutions test only SET

If all that wasn't enough, both Microsoft and our partners validate and certify their solutions on SET, not LBFO. If you bought a certified Azure Stack HCI solution from one of our partners OR a standard or premium logo'd NIC, it was tested and validated with Switch Embedded Teaming. That means all certification tests were run with SET.

 

Link Aggregation Control Protocol (LACP)

[Figure: LACP port-channel to a host with 2 x 50 Gbps NICs]

Ok, so this one is a little counter-intuitive. LACP allows port-channels (switch-dependent teams) to send traffic to the host over more than one physical port simultaneously.

 

For native hosts this means that every port in the port-channel can send traffic simultaneously – for the system on the right with 2 x 50 Gbps NICs, it looks like one big pipe, with a native host potentially receiving 100 Gbps. Naturally, you'd expect this capability to extend to virtual NICs as well.

 

But things change with virtualization. When the traffic gets to the host, the NICs need to interrupt multiple, independent processors to exceed what a single CPU core can process – This is what VMMQ does, and as mentioned, VMMQ does not work with LBFO.

 

LBFO limits you to a single VMQ, so despite having (in the picture) 100 Gbps of inbound bandwidth, you would only receive about 5 Gbps per virtual NIC (or up to ~20 Gbps per vNIC at the painful expense of OS-based software spreading, burning CPU cycles that could otherwise run virtual machine workloads).


 

With SET, switch-independent teaming, the hardware assistance of VMMQ, and enough CPUs in the system, you could receive all 100 Gbps of data into the host.

 

In summary, LACP provides no throughput benefits for Azure Stack HCI scenarios, incurs higher CPU consumption, and cannot auto-tune the system to avoid competition between workloads for virtualized scenarios (Dynamic VMMQ).

 

Asymmetric Adapters

While we're myth-busting, let's talk about adapter symmetry, which describes the degree to which adapters share the same make, model, speed, and configuration – SET requires adapter symmetry for Microsoft support. Usually the easiest way to identify this symmetry is by the device's Interface Description (with PowerShell, use Get-NetAdapter). If the interface descriptions match (with the exception of the unique number given to each adapter, e.g. Intel NIC #1, Intel NIC #2, etc.), then the adapters are symmetric.
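A quick way to sanity-check that symmetry across team candidates (a sketch; narrow it down to your intended team members as needed):

# Compare make/model (InterfaceDescription) and speed across the physical adapters
Get-NetAdapter -Physical | Sort-Object InterfaceDescription | Format-Table Name, InterfaceDescription, LinkSpeed

# Compare driver details as well
Get-NetAdapter -Physical | Format-List Name, Driver*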

 

Prior to Windows Server 2016, conventional wisdom held that you should use different NICs with different drivers in a team. The thinking was that if one driver had an issue, the other team member would survive and the team would remain up. This is a common benefit customers cite in favor of LBFO: it supports asymmetric adapters.

 

However, two drivers mean twice as many things can go wrong, in fact increasing the likelihood of a problem. In our review of customer support cases, a properly designed infrastructure with symmetric adapters is far more stable. As a result, support for asymmetric teams is no longer a differentiator for LBFO, nor do we recommend asymmetric teams for Azure Stack HCI scenarios where reliability is the #1 requirement.

 

LBFO for management adapters

Some customers I've worked with have asked if they should use LBFO for management adapters when no vSwitch is attached – our recommendation is to always use SET whenever it is available. A management adapter's goal in life is to be stable, and we see fewer support cases with SET.

 

To be clear, if the adapter is not attached to a virtual switch, LBFO is acceptable; however, you should endeavor to use SET whenever possible for the support reasons outlined in this article.
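Where Hyper-V is installed, a minimal sketch of a SET-based management interface (the switch name, vNIC name, and VLAN ID are placeholders) looks like this rather than an LBFO team:

# Assuming a SET switch named "SETswitch" already exists (see the earlier example)
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "SETswitch"

# Optionally tag the management vNIC's VLAN
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Mgmt" -Access -VlanId 10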

 

vSwitch on LBFO Deprecation Status

Recently, we publicly announced our plans to deprecate the use of LBFO with the Hyper-V virtual switch. Moving forward, and for the various reasons outlined in this article, we have decided to block the binding of the vSwitch to an LBFO team.

 

Prior to upgrading from Windows Server 2019 to vNext, or if you have a fresh install of vNext, you will need to convert any LBFO team that is attached to a Hyper-V virtual switch to a SET team. To make this simpler, we're releasing a tool (available on the PowerShell Gallery) called Convert-LBFO2SET.

 

You can install this tool using the command:

Install-Module Convert-LBFO2SET

Or, for disconnected systems:

Save-Module Convert-LBFO2SET -Path C:\SomeFolderPath

Please see the wiki for instructions on how to use the tool; however, here's an example where we convert a system with 10 host vNICs, 10 generation 1 VMs, and 10 generation 2 VMs.
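As a rough sketch of an invocation (the parameter names below are assumptions based on the tool's wiki and may change between versions; check Get-Help Convert-LBFO2SET for the authoritative syntax):

# Convert the existing LBFO team (and the vSwitch bound to it) to a SET team
# -AllowOutage acknowledges the brief connectivity loss during the conversion
Convert-LBFO2SET -LBFOTeam "LBFOTeam01" -SETTeam "SETswitch" -AllowOutage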

 

 

Summary

LBFO remains our teaming solution when Hyper-V is not installed. If, however, you are running virtualized or cloud scenarios like Azure Stack HCI, you should give Switch Embedded Teaming serious consideration. As we've described in this article, SET has been the Microsoft-recommended teaming solution and focus since Windows Server 2016, as it brings better performance, stability, and feature support compared to LBFO.

 

Are there other questions you have about SET and LBFO? Please submit your questions in the comments below!

 

Thanks for reading,

Dan “All SET for Azure Stack HCI” Cuomo

 

21 Comments

Thanks for the Awesome blogpost :cool:

Copper Contributor

Sweet post! I love these sorts of insights

Brass Contributor

Multipath TCP ( https://en.wikipedia.org/wiki/Multipath_TCP ) would provide a far simpler and better way to support these scenarios.

The WSL team could embrace this, as Multipath TCP is landing in the mainstream Linux kernel in the coming weeks. In the meanwhile, the Windows networking team could start thinking about developing a native Multipath TCP implementation in the Windows network stack.

 

Copper Contributor

Excellent post Dan. This really helps us understand why SET is so much better than LBFO. Keep up the great work! :)

Microsoft

Thanks @James Gandy and @Ben Thomas 

Microsoft

@Olivier Hault - Our current direction for this is QUIC which solves for the same challenges. SET and LBFO are more for traditional teaming mechanisms that allow for only one connection rather than a multipath solution. Thanks for your suggestion!

Copper Contributor

Sadly when attempting to run Convert-LBFO2SET on Windows Server 2022 I'm given the response that this OS hasn't been validated yet...

Brass Contributor

Thanks for the useful information in this post. I was trying to upgrade one of my Server 2019 Hyper-V hosts to Server 2022 and was referred here because LBFO teaming is deprecated.

I tried to run Convert-LBFO2SET and it tells me that LACP is not supported. It is currently using LACP with an Intel quad-port server adapter and on the switch side it's using port-channels on a Cisco 3750X stack. When I originally set it up, LACP was the recommendation, with the relevant config shown below. What's the recommendation now with Cisco switches to match the switch independent teaming this tool does support? Is it a port-channel configured "ON" as opposed to the previous passive (LACP) or active (LACP), or more complicated than that?

 

interface Port-channel5
description Ports41-44 Hyper-V VirtualSwitch
switchport trunk encapsulation dot1q
switchport mode trunk
spanning-tree portfast edge trunk

...

interface GigabitEthernet1/0/41
switchport trunk encapsulation dot1q
switchport mode trunk
channel-group 5 mode passive

...

interface GigabitEthernet2/0/44
switchport trunk encapsulation dot1q
switchport mode trunk
channel-group 5 mode passive

 

 

Microsoft

@bbthiessen - You should migrate before you're on WS22 as it's deprecated there.

 

I would recommend that you just live migrate workloads to another node via a vSwitch of the same name. Once your workloads are off that machine, you can put the node into cluster maintenance mode, and manually do the migration.

Microsoft

@Jerry Locke - As mentioned in the article we get better performance inside the OS without LACP so SET does not support LACP.

 

Using LACP requires special configuration on the switch. Not using LACP does not require any special configuration. Therefore, if your ports are already in a port-channel, you just need to unconfigure your switchports from the port-channel.

 

Since this will immediately break the teaming functionality you have in the OS (LBFO did support LACP), the tool will not allow you to move forward (we didn't want someone to accidentally cause an outage). However, you could try the -AllowOutage param and see if that lets you work through this (I personally don't know if this works, but you will still need to break the port-channel on the switch first).

 

If you run into trouble, please contact support.

Copper Contributor

I'm running into the same issue as @Jerry Locke 

 

I'm setting up a new 2022 STD host for our client, and when I run the Convert-LBFO2SET commands, I get the 2022 OS is not supported. 

Brass Contributor

Hey @Dan Cuomo, thanks for the feedback. The conversion to SET went well. I wasn't 100% comfortable with no configuration on the switch (I did take the time to learn Cisco IOS so may as well use that knowledge ;) ) so I simply changed my port-channel from LACP to ON. It's been working great, and I went ahead and did the 2022 upgrade a few days ago.

 

Now I have encountered something that is scaring me: Today I started a 2022 upgrade on another 2019 Hyper-V host, with an existing LACP team, fully expecting to see the same warning about the existence of a virtual switch on a teamed interface. Same install media, same OS build being upgraded, but for some reason I don't get the warning at all. Only difference I see is this machine also had an LBFO team for the management interface. I went ahead and changed the team to switch-independent + dynamic, changed the switch config, and did LBFO2SET which successfully got rid of the LBFO team on the Hyper-V switch.

 

After all of that, upgrade was successful and everything is working well.

 

Question: how to convert the management interface's team to SET?

Copper Contributor

I know this is an old post, but was nobody planning on addressing the question @bbthiessen and @Matt_Johnson5928 brought up?  I also have a brand new Windows 2022 server and when I run the Convert-LBFO2SET utility, it states "This version of Windows is not yet certified for Convert-LBFO2SET."  It will not proceed.  How long do we have to wait for a certified version and/or is there a way to force it?

@MilitantPoet @Matt_Johnson5928 if you're deploying new WS2022 hosts....Why are you deploying them with LBFO and then trying to convert them to SET? Why not just create SET in the first place?

Copper Contributor

@Ben Thomas Yeah, thanks, that never occurred to me.  Oh, wait... it did.  But thanks for the assumption that I'm a moron... but you do ask an interesting question (despite it having an EXTREMELY obvious answer of "because I didn't know how") so let's analyze why I didn't start there:  I tried to find any information on it, but I kept finding my way back to one of a billion forums and blogs like this one that ONLY talk about converting and the conversion tool and don't say a single thing about how to create one from scratch.  Combine that with a billion other red herring blogs about things like Microsoft Teams, every other conceivable hyper-v teaming problem, the acronym "SET"... and the bad assumptions I made (although they seem reasonable even in hindsight) that you would create a SET team in the same place in the GUI we've always created NIC teams and that there would be a GUI way of doing it somewhere in Server or even Hyper-V once I realized I should look there... and I was one frustrated guy by the time I had read yet another overly-long and overly-detailed explanation of SET without a single word of how to create one.

So... for any other frustrated folks who find their way here and are super annoyed... let me sum up the basics: 


1.  Apparently LBFO refers to just about any kind or type of Windows Server NIC Team you could create from the GUI.  I know that NOWHERE in the GUI does it say a single thing about "LBFO", it's apparently something you're just supposed to know.  It doesn't matter how you change the prioritization or protocol... if you're working in the Windows Server GUI or even using the typical powershell commands that come with Windows Server for creating a NIC team, you're already wrong.

 

2.  From my reading, you need to turn off LACP as it won't work with SET, so go ahead and reconfigure your switch ports back to their default.  Go ahead.  I'll wait.


3.  Don't create a NIC team in Windows Server NIC teaming.  Yes, if you already created one in Windows Server 2022, you'll just need to delete it because the conversion tool, like most things Microsoft makes, doesn't work with their own products.  No, I have no idea what that means for the myriad of other applications that access NIC teams, but I can tell you that if you look hard enough, you can find an extensive list of all the applications SET doesn't work with, despite many posts making it seem like a panacea.  Yes, creating a SET team is apparently a function of Hyper-V, not windows server (although it does support SOME other functions, not ALL), so the fact that nobody explains that and none of the error messages you're seeing are remotely helpful will lead you to making an understandable, but wrong, assumption that you create a SET NIC Team the way you'd create any other team.  So make sure you install Hyper-V server functionality first, then worry about creating your team later.  (Yes, I know the hyper-v role installation will try to create your virtual switch, because... well... Microsoft is too big for their own good and the left hand never talks with the right hand... so you can't do it from there.  Just skip it or create a temporary one and come back to teaming later.)

 

4.  You have to create the SET team from Powershell after you have Hyper-V installed.  Apparently, even though all the Microsoft posts and error messages will condescend to you about how they released SET in 2016 and it's now 2022 and what the hell is wrong with you, you're so out of date... they still haven't managed to add it to the GUI in 6 years either, so don't let them make you feel bad.  It's actually not too hard to find the documentation on the command once you realize what you need to search for, so if this link is broken by the time you're reading this, just google "switch embedded team powershell command".  This is the blog I used to get the command info I was looking for:  Switch Embedded Teaming Archives - Working Hard In IT

@MilitantPoet thanks for the response, I was actually genuinely interested in the reasons why you have ended up with an LBFO team on WS2022.

 

You raise a number of good points; it isn't obvious that 'NIC Teaming' in Windows refers to LBFO teaming, or that you should use Switch Embedded Teaming instead. 

The docs page for LBFO teaming should probably point to the fact that it's deprecated and replaced by SET, and then to an appropriate page on SET which includes how to set it up correctly and the pre-reqs to do so (like the Hyper-V role).

 

I agree that the GUI is misleading, but still presenting the traditional and deprecated method in Server Manager, and that there is no GUI for Switch Embedded Teaming.

 

I also agree that the Convert-LBFO2SET command should have been updated with the release of WS2022 to enable support for it. I have looked and it seems to use some custom binaries for reassigning the physical NICs based on OS version, so a new binary is probably required from Microsoft to enable this to work.

Brass Contributor

@Ben Thomas and @MilitantPoet - I too am very disappointed, but also not at all surprised, to find that none of the SET stuff is anywhere to be found in the GUI, either in Server Manager where the LBFO is, or in Hyper-V Manager where you'd expect to find it in the Virtual Switch area, and even worse IMO if you already have a SET switch and view it in the Hyper-V GUI you end up with very misleading information and heaven help you if you make any changes to it there. Maybe you're actually blocked from doing that, I did not dare try and find out.

 

I have an LBFO on one of my Server 2022 because it is a dedicated management interface spread across two stacked switches, and my Hyper-V Virtual Switch isn't shared with host management. I would love to convert this LBFO to SET but that doesn't seem exactly possible, or maybe it is just so nuanced I can't figure out.

Microsoft

I understand your need for a GUI in Windows Server, that makes sense, but all of this is exposed in Windows Admin Center, and you can create your Switch Embedded Team in Admin Center quite easily in the Cluster Creation Wizard. So while you are disappointed it is not exposed in Server Manager, I have to ask, why are you using that in the first place? Your Hyper-V hosts should be running the Server Core edition of Windows Server, or better yet be running Azure Stack HCI OS on your nodes, which doesn't even have a Desktop Experience. You can also use Virtual Machine Manager to create the Switch Embedded Team, and I have a blog article out there on that exact process. The point is, this process has been around since 2016 and has only become better; the fact that there is no GUI method of management built into the OS really tells you that you should be running your Hyper-V hosts as Server Core, it's the default installation. If you are not comfortable with Core, well, you have Windows Admin Center as a primary method of management, which gives you the PowerShell processes it is using.

 

@MilitantPoet you mention that you would assume creating a SET Team would be in the same place as creating a Team before in Server Manager; well, I am sure you know that an LBFO team and a SET team are NOT the same thing at all. A NetLBFO team is an OS function and creates a Team of the NICs, whereas a Switch Embedded Team is a Hyper-V feature: you create a Virtual Switch which contains your Physical NICs as the uplinks with a Load Balancing setting of Switch Independent. The documentation is clear that LACP is no longer supported. 

 

I would be happy to help you understand how to deploy Switch Embedded Teams either in Admin Center or in PowerShell. The process is actually quite easy, and as we move more toward a cloud-friendly world, either private or public cloud, we need to embrace CLI methods of deployment for speed, consistency, and compliance. I would encourage you to challenge yourself to do things "outside of the GUI" and always via CLI methods; you will find it slow at first but will gain speed and efficiency with each day. 

 

 

Copper Contributor

@MilitantPoet Thanks for doing more than just ranting: I found your guide very helpful. After removing the LBFO team, I had to delete the NICs in Devmgr. Then the command I used was: 

 

New-VMSwitch -Name vSwitch -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

 

@Ben Thomas You are a better man than I am when it comes to responding to comment criticism.

@Michael Godfrey You make some great points and you are absolutely right, but the world in which you operate is not mine. I use server manager because it is an effective tool for adding roles & features and it opens automatically when I logon - even in server 2022. I know that core is the ideal installation for Hyper-V, but that is too inflexible for my environment and isn't supported for my line of business applications. CLI will always co-exist and by nature is more powerful than the GUI, but it will never replace it and the two options should be kept in sync as much as possible. You are right also that using CLI regularly will increase skills & speed, but in small business, I don't have the luxury of working in PowerShell often enough to outshine the GUI.
Apologies if that doesn't provide anyone useful information.

Copper Contributor

I created an account just to point out that @Michael Godfrey is definitely a Microsoft employee.  Leave it to Microsoft  to tell us how we're doing things wrong,  and they have a "better" way when there's no simple documentation or explanation about it that's easily reachable for us smaller IT shops that don't spend our entire lives inside Azure or Microsoft land.  Did it occur to anyone at Microsoft that there's people who don't like or don't want to use Admin Center?  No GUI= no MSPs, who sell Microsoft OSes, as their Helpdesk staff would be completely useless. 

 

I too landed here trying to figure out what the f a SET is and how to make it after accepting delivery of a brand new 2022 Datacenter server and setting up LACP, as all of our servers and switches are setup.  What a joke.  Half of this article is about VMQueue performance, when ANYONE who's setup an actual physical server in the last 15 years knows that the first thing you need to do with Hyper-V is DISABLE VM Queues so the VMs actually work if you use Broadcom NICs (most widely used NICs). 

I can see why vmware just sold for a pretty penny.  They do LACP like a server OS should. 

 

For anyone needing help doing it the "deprecated" way:

 

New-VMSwitch -Name "ExternalToNICTeam_Virt_Switch" -AllowNetLbfoTeams $true -AllowManagementOS $true -netadaptername "DefaultNICTeam-LACP"

 

For anyone reading this:

Cheaper switches that MSPs love to install due to ease of use such as Ubiquiti, Engenius, and I'm sure MANY others don't do Static Link Aggregation which is required for this SET stuff they're shoving down our throats to work.   Now I keep getting alerts due to our monitoring showing the same IP for multiple MAC addresses. 

Copper Contributor

Also, there's no "Cluster Creation Wizard" in my Server 2022 Admin Center.  I'm not trying to create a cluster anyway, but it's not there to be able to create the special snowflake SET network settings.
