Sizing tool vs. (not so) busy domain controllers


I came across a relatively small environment (4 DCs, 1,000 users) and ran the sizing tool, only to learn that I would need an extra 3.5 cores and 9 GB of RAM per domain controller. That does not seem right, as these DCs sit flat idle most of the day. There are several other agents running (MMA reporting to SCOM/OMS, Snare, and others) - could they offset the calculation that much? The traffic measured in PPS is not an indication of what's actually going on on these DCs... What's the rule of thumb here? Deploy as-is and observe the behavior, or scale up and follow the guidance strictly?

1 Reply

The sizing tool takes roughly three inputs into account:
1) The traffic on the DC (PPS)
2) The minimal (base), traffic-independent RAM and CPU requirements of the AATP sensor
3) The free (available, not maximum) RAM and CPU resources on the DC without AATP
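To make the interaction between these three inputs concrete, here is a minimal sketch of how such a calculation *might* combine them. This is purely illustrative - the function name, base requirements, and per-kPPS coefficients are made-up assumptions, not the actual formula used by the Sizing Tool:

```python
# Hypothetical sizing sketch. The coefficients below (base_cores, base_ram_gb,
# cores_per_kpps, ram_gb_per_kpps) are invented for illustration only and do
# NOT reflect the real Azure ATP Sizing Tool's numbers.

def extra_resources_needed(peak_pps, free_cores, free_ram_gb,
                           base_cores=2.0, base_ram_gb=6.0,
                           cores_per_kpps=0.1, ram_gb_per_kpps=0.2):
    """Estimate the extra cores/RAM a DC would need to host the sensor."""
    # Sensor demand = fixed base cost + a traffic-dependent component.
    needed_cores = base_cores + cores_per_kpps * (peak_pps / 1000)
    needed_ram_gb = base_ram_gb + ram_gb_per_kpps * (peak_pps / 1000)
    # Only the shortfall beyond what is already free must be added to the DC.
    extra_cores = max(0.0, needed_cores - free_cores)
    extra_ram_gb = max(0.0, needed_ram_gb - free_ram_gb)
    return extra_cores, extra_ram_gb

# A DC with very low traffic but little free headroom still needs
# a sizable upgrade, because the base requirement dominates:
print(extra_resources_needed(peak_pps=500, free_cores=0.5, free_ram_gb=1.0))
```

The point of the sketch: even at near-zero PPS, input (2) sets a floor on demand, and input (3) determines how much of that floor is already covered - so an "idle" DC crowded with other agents can still trigger a large recommendation.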

Based on your description, it sounds like the typical RAM and CPU consumption of your DCs is higher than what would allow the AATP sensor to run properly.
Low available CPU and RAM can definitely offset the calculation by a lot.
You didn't share any numbers (each row of the DC table in the Azure ATP Summary sheet contains detailed information about PPS and typical CPU and RAM consumption, so if you shared one of those rows we could make a more educated guess), but offhand it would not be a good idea to ignore the Sizing Tool's recommendations.