Oct 21 2021 09:37 AM
I would greatly appreciate some advice on a situation we ran into recently that I am trying to better understand.
We have two Windows Server 2019 Standard servers with SQL Server 2019 Web Edition, running as Gen 1 virtual machines at a hosting provider.
Server A (Prod):
6 logical processors (as seen by Windows)
24GB RAM
225GB disk
Windows Server 2019 Standard (1809)
SQL Server 2019 Web Edition (15.0.2080.9)
Server B (QA):
6 logical processors (as seen by Windows)
16GB RAM
200GB disk
Windows Server 2019 Standard (1809)
SQL Server 2019 Web Edition (15.0.2080.9)
Initially, the QA server was configured with SQL Server max server memory set to 8GB and MAXDOP set to 0. When a search for customers is run from the corresponding web site, it triggers 3 large queries that comb through about 900,000 records. The search returns in about 3 seconds, and we can see that the SQL engine is using parallelism.
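For reference, this is a minimal sketch of how the two settings in question can be applied via sp_configure (the values are the ones quoted above; max server memory is specified in MB, so 8GB = 8192):

```sql
-- Expose the advanced options so both settings are visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server's memory at 8GB (value is in MB)
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;

-- MAXDOP 0 lets SQL Server choose the degree of parallelism itself
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
```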
The Prod server was configured with SQL Server max server memory set to 16GB and MAXDOP set to 1. The same customer search, combing through the same roughly 900,000 records, takes about 15 seconds there, and we can see that the SQL engine is not using parallelism. We changed MAXDOP to 0 to match the QA server, and while this did trigger parallelism, the query time was only marginally better. When we set MAXDOP to 0 and lowered max memory to 8GB, the query ran with parallelism in about 3 seconds.
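If it helps, here is the kind of check we can run on both servers while the search executes (a diagnostic sketch using the standard memory-grant DMV; the column selection is just illustrative) to compare the memory grant and degree of parallelism each query actually received:

```sql
-- Memory grants and DOP for currently executing queries.
-- A large gap between ideal_memory_kb and granted_memory_kb,
-- or a nonzero wait_time_ms, points at memory-grant pressure.
SELECT g.session_id,
       g.dop,
       g.requested_memory_kb,
       g.granted_memory_kb,
       g.ideal_memory_kb,
       g.wait_time_ms,
       t.text AS query_text
FROM sys.dm_exec_query_memory_grants AS g
CROSS APPLY sys.dm_exec_sql_text(g.sql_handle) AS t;
```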
I do not understand how lowering the maximum amount of memory available to SQL Server improves the performance of the query. What am I missing? Any advice would be greatly appreciated.
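One thing I can still check, in case it narrows this down (a sketch against the standard OS-memory DMV; I have not confirmed this is relevant here), is whether Windows itself was short on memory while SQL Server was allowed 16GB of the VM's 24GB:

```sql
-- OS-level memory state as SQL Server sees it.
-- system_memory_state_desc reporting that available physical
-- memory is low would indicate external memory pressure on the VM.
SELECT total_physical_memory_kb / 1024 AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_physical_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```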