Accelerated Networking on WVD Hosts


I've noticed that after deploying hosts to a Host Pool (ARM), none of them have accelerated networking enabled.  I went back through the deployment wizard and confirmed that there are no options for this feature.  I also confirmed that this IS an option when deploying a Classic Host Pool.

 

I can set this post-deployment via PowerShell, but I'm wondering why it's not an option, or even a default, for host deployment on ARM.
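For reference, the post-deployment change looks roughly like this with the Az PowerShell module (a sketch, not a definitive recipe; the resource group, VM, and NIC names are placeholders, and you need to be signed in with Connect-AzAccount first):

```shell
# Placeholders - substitute your own resource group, VM, and NIC names
$rg  = "wvd-hostpool-rg"
$vm  = "wvd-host-0"
$nic = "wvd-host-0-nic"

# The VM must be deallocated before the NIC setting can be changed
Stop-AzVM -ResourceGroupName $rg -Name $vm -Force

# Flip the accelerated networking flag on the NIC and push the update
$n = Get-AzNetworkInterface -ResourceGroupName $rg -Name $nic
$n.EnableAcceleratedNetworking = $true
$n | Set-AzNetworkInterface

# Bring the host back up
Start-AzVM -ResourceGroupName $rg -Name $vm
```

The deallocate/restart cycle is exactly the pain point mentioned below: each host has to go down to toggle the flag.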

 

The documentation on accelerated networking suggests that the Win10 Multi-Session OS may not be "supported" for this but it's certainly not clear to me.

https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-powershell#s...

Any insight on this from anyone?

5 Replies

Bumping my thread...

 

I take it from the lack of any response that nobody has any insight on my specific questions.

 

Reading about accelerated networking in general, it sounds like a desirable feature, but I'm not sure whether it has tradeoffs that would be a negative in a WVD scenario.

 

I could assume that, since the option is not exposed during deployment, there may be no real benefit to using it for WVD in the first place.

 

I do know that, while possible, it's a pain to enable post-deployment since the hosts have to be deallocated when turning the feature on.  This is especially true when adding a host to a pool.

 

If there are no benefits, then I'm inclined to leave it off on future deployments.  Just looking for anyone who has any insight on this.

@Nagorg-Terralogic I did test accelerated NICs in a WVD environment. The ARM GUI won't allow you to create the VM with accelerated networking, but you can update the NIC via PowerShell. There is an improvement in latency of about 2x using default settings with network software that is unoptimized for network IO. I think Microsoft doesn't support it because Windows 10 is probably not designed to handle RSS. Note that you can also tweak the Hyper-V virtual NIC, so depending on your software you might not need it. The only downside I found is that accelerated NICs are SR-IOV cards, so every deallocation might surface new hardware in Device Manager... not a big deal for 24/7 VMs. In the worst-case scenario you can just pop in a new normal NIC or disable the accelerated networking option in PowerShell. Important note: if you have availability sets, it won't let you change the config on one NIC for one VM if one of the VMs has a different config. (I learned that one the hard way.)
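One way to see whether the SR-IOV virtual function actually surfaced inside the guest is to look for the extra adapter from within the session host itself (a sketch; the Mellanox match is an assumption based on the hardware Azure typically exposes for accelerated networking, and it may vary by region or SKU):

```shell
# Run inside the session host. With accelerated networking active, the
# SR-IOV virtual function shows up as an additional network adapter,
# typically a Mellanox device on Azure (an assumption - hardware varies).
Get-NetAdapter | Where-Object { $_.InterfaceDescription -like "*Mellanox*" }

# RSS configuration on the adapters can also be inspected from here,
# which is relevant to the Windows 10 / RSS question above
Get-NetAdapterRss
```

If the first command returns nothing while the NIC resource claims accelerated networking is enabled, the virtual function never attached to the guest.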

@fmartel thanks for the reply.  I guess I'll need to do my own tests to see if it's worth the hassle of enabling it post-deployment.

 

It seems odd to me that it would be omitted from the deployment if it was a "good thing to do".

@Nagorg-Terralogic Interested to know if you had completed this and what your experience was/is?

While I've not done any in-depth benchmark testing, I simply skipped my post-deployment step of enabling accelerated networking.
If there is any real improvement to be had from enabling it, I haven't missed it by leaving it disabled.

So, a little K.I.S.S. principle has been adopted here with success.