May 04 2020 12:38 AM
I'm looking through the prerequisites for deploying ATP sensors to our domain controllers and wanted to get a bit more information about two points.
1) Dynamic Memory / Memory Ballooning not supported
In the Sizing tool documentation it is recommended that:
VMware: "Ensure that the amount of memory configured and the reserved memory are the same, or select the following option in the VM settings – Reserve all guest memory (All locked)."
However, there are other places where a memory ballooning driver can be enabled or disabled, such as within VMware Tools or in the Windows registry on the domain controller itself.
Q: Are we required to disable ALL references to memory ballooning, or is enabling "Reserve all guest memory (All locked)" on a per-VM basis enough?
2) In the Sizing tool documentation it is recommended that:
"It's recommended that you don't work with hyper-threaded cores. Working with hyper-threaded cores can result in Azure ATP sensor health issues."
Q: We may not be able to disable Hyper-Threading, because some of our domain controllers are hosted in a VMware cluster by a private cloud provider. Are you able to expand on the real-world issues we might see when running a domain controller with an ATP sensor while Hyper-Threading is enabled, please?
May 04 2020 01:16 AM - edited May 04 2020 01:17 AM (Solution)
#1 Enabling "Reserve all guest memory (All locked)" on a per-VM basis is enough, and even preferred.
#2 You can try running with Hyper-Threading enabled; it might be OK, depending on how powerful the server is. But if you encounter health alerts about overloads, they may be related to Hyper-Threading being turned on.
May 10 2020 12:18 PM
@EdLarge the guidance you received from @Eli Ofek is solid and accurate. Just to add some additional comments: keep in mind that when you reserve memory, that memory can only be used by that VM, so the trickle-down impact (especially if you apply reservations to many VMs in the same cluster) would be a performance hit to the other VMs. Because memory swap is disabled by default, you of course don't get that advantage either.
Of course, if you don't have more vMem allocated across your VMs than you have physical memory, you are fine; but if you have overcommitted memory, keep an eye on your other VMs.
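To make the overcommitment point concrete, here is a minimal, illustrative sketch of the arithmetic: compare the total vMem configured across the VMs on a host (or cluster) against the physical memory available. All figures below are made-up examples, not values from any real environment; substitute numbers from your own vCenter inventory.

```python
def memory_overcommit_ratio(host_physical_gb, vm_memory_gb):
    """Ratio of total configured vMem to physical host memory.

    A ratio <= 1.0 means memory is not overcommitted, so per-VM
    reservations (e.g. for ATP sensor DCs) are harmless. A ratio
    > 1.0 means each reservation shrinks the pool left over for
    the remaining, unreserved VMs.
    """
    return sum(vm_memory_gb) / host_physical_gb

# Hypothetical host with 256 GB RAM and seven VMs of various sizes.
ratio = memory_overcommit_ratio(
    host_physical_gb=256,
    vm_memory_gb=[32, 32, 16, 16, 64, 64, 48],
)
print(f"overcommit ratio: {ratio:.2f}")  # prints "overcommit ratio: 1.06"
```

In this example the cluster is slightly overcommitted (272 GB of vMem on 256 GB of RAM), so locking the full memory of a few domain controllers would push that pressure onto the other VMs.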
If you start running into performance issues across the cluster, look at share values as an alternative, or add memory.