WVD + FSLogix (Azure Files) with 8 x D8s_v3 for 340 users - major performance issues


Hello, 

 

We went live this week with a 340-user WVD + FSLogix deployment in Azure on 8 x D8s_v3 session hosts. We are having performance issues that we've not been able to get to the bottom of. 

 

Server performance looks OK via the Azure Portal. The only metric that looks slightly high is network, however, I'm not certain that this is the issue. 

 

Behaviour

  • Users logging in for the first time wait a long time for the FSLogix profile to be created, and hang at the Windows start screen until it completes. 
  • Users who already have a profile see sluggish behaviour, e.g. clicking the Start menu takes some time to open, and launching an app appears to hang for a while before the app opens. 
  • Some users are greeted by a black screen on first creation of their profile and nothing changes; we have to delete the virtual disk from the FSLogix storage pool so they can sign in again for the profile to be created. 
  • Outlook and OneDrive are taking an awfully long time to set up.

During testing about two weeks ago, we didn't have a single issue. However, with Microsoft's networks currently being hammered, I'm not sure whether this is related to the performance of the servers (network performance, at least). 

 

Or whether it's the +/- 280-300 users logging in that is causing the issues. 

Or whether network throughput somehow needs to be increased. 

 

We've raised a Critical support case, as the project engineers and service desk are being hammered. 

However, our 2-hour SLA has not been met and, 5 hours later, still nothing. I assume this is due to the increase in demand given the current home-isolation situation. 

 

Thank you for reading through this, and I welcome any feedback/guidance. 

 

Adam

 

7 Replies

@Adam Weldon-Ming What does your Azure Files configuration look like? What performance tier are you using? Are the session hosts and the Azure Files (and VM, if attached) all in the same Azure region? Same virtual network?

 

All of what you're reporting is very likely to be related to the FSLogix user containers and the dynamic process of them being attached to the session host at/during sign-in. The questions above are starting areas though.

@Justin Coffey 

 

Hi Justin,

 

Thanks for responding, 

 

I believe the Azure file share is on the Standard tier. They are all in the same region and virtual network. We have now added two more VMs to the host pool, so we have a total of 10 VMs - the idea being to see whether spreading the load further improves performance. Users start logging in in about 20 mins, so we'll see what the outcome is. 

 

Do you think enabling Accelerated Networking on the VMs would help with performance?
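For reference, Accelerated Networking can be toggled per NIC with the Azure CLI. A minimal sketch, assuming placeholder resource group, VM and NIC names (not from this thread), and that the VM size supports the feature:

```shell
# Sketch only - resource group and VM names below are placeholders.
# Accelerated Networking is supported on most Dsv3 sizes with 4+ vCPUs;
# the VM generally needs to be deallocated before changing the setting.
RG="wvd-rg"
VM="wvd-host-1"

az vm deallocate --resource-group "$RG" --name "$VM"

# Find the NIC attached to the VM, then enable Accelerated Networking on it.
NIC_ID=$(az vm show --resource-group "$RG" --name "$VM" \
  --query 'networkProfile.networkInterfaces[0].id' -o tsv)
az network nic update --ids "$NIC_ID" --accelerated-networking true

az vm start --resource-group "$RG" --name "$VM"
```

Note this helps CPU overhead and latency on the NIC itself; it does not raise the VM size's overall bandwidth cap.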

 

Kind Regards

Hey,

We noticed a similar latency issue with the D8s_v3 machines. On a whim, we upgraded to D16s_v3 and dropped some hosts from the pool, and are no longer seeing latency issues. My guess is that the network is throttled in some way for the smaller D8 size. AWS is pretty transparent about available network bandwidth by machine size; I couldn't find the same info for Azure.
Hi Adam,

During a POC setup of WVD for one of my clients I noticed similar issues and changed Azure Files (the FSLogix profile store) to the Premium tier. This had a significant positive impact on the user experience in WVD!

Kind regards,
Thomas
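To illustrate the Premium-tier change above: premium file shares live in a FileStorage-kind storage account with a Premium SKU. A minimal Azure CLI sketch, where the resource group, account name, region and quota are all placeholder assumptions:

```shell
# Sketch only - names, region and quota below are placeholders.
# Premium file shares require a FileStorage-kind account (Premium SKU).
RG="wvd-rg"
SA="wvdprofilespremium"   # storage account names must be globally unique

az storage account create \
  --resource-group "$RG" --name "$SA" \
  --location westeurope \
  --kind FileStorage --sku Premium_LRS

# On premium shares, the provisioned quota (GiB) also determines the
# share's IOPS/throughput, so size it for performance, not just capacity.
az storage share-rm create \
  --resource-group "$RG" --storage-account "$SA" \
  --name fslogix-profiles --quota 1024
```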

@Thomas-DeWitte 

 

Same here - we are using Premium, with larger share sizes, at least 1024 GiB, to get the throughput/IOPS. One file share per host pool, and D16s_v3s. An expensive setup, but we figured we'd start big and ratchet it down later if needed.
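The "ratchet it down later" approach works because a premium share's performance scales with its provisioned quota, which can be changed in place. A hedged sketch, reusing placeholder names rather than anything from this thread:

```shell
# Sketch only - names are placeholders. On premium file shares, baseline
# IOPS and throughput scale with the provisioned quota, so adjusting
# performance later is just a quota change (capacity can always be grown;
# check current usage before shrinking).
az storage share-rm update \
  --resource-group wvd-rg --storage-account wvdprofilespremium \
  --name fslogix-profiles --quota 2048
```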

@FinTechSean The information is located here for the Dv3 and Dsv3-series: https://docs.microsoft.com/en-us/azure/virtual-machines/dv3-dsv3-series

 

It includes:

 

  • Max cached and temp storage throughput: IOPS/MBps (cache size in GiB)
  • Max uncached disk throughput: IOPS/MBps
  • Max NICs/Expected network bandwidth (Mbps)

@Adam Weldon-Ming 

 

Did you change the OS drive from P10? Consider the following: 

 

1. 22 E4s_v3 VMs with 4 users per vCPU, i.e. 16 users per VM, to double the RAM capacity and provide appropriate RAM per user. 

2. Two separate Azure Premium Files shares - one for the Profile Container and another for the Office (caching) Container. 

3. Size the Profile Container's Azure Premium Files share to at least 17000 GiB to provide appropriate performance.
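On the P10 point above: growing a managed OS disk past a premium tier boundary raises its IOPS/throughput caps. A sketch with placeholder names, assuming a move from P10 (128 GiB) to P15 (256 GiB):

```shell
# Sketch only - names are placeholders. Growing a premium managed disk
# past a tier boundary (e.g. P10 = 128 GiB -> P15 = 256 GiB) raises its
# IOPS/throughput caps. The VM must be deallocated first, and managed
# disks can only be grown, never shrunk.
RG="wvd-rg"
VM="wvd-host-1"

az vm deallocate --resource-group "$RG" --name "$VM"

OS_DISK=$(az vm show --resource-group "$RG" --name "$VM" \
  --query 'storageProfile.osDisk.name' -o tsv)
az disk update --resource-group "$RG" --name "$OS_DISK" --size-gb 256

az vm start --resource-group "$RG" --name "$VM"
```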