Forum Discussion
SQL 2019 Std Edition high recompilation and CPU spike
Do you think there could be anything I am still missing? Did anything change in the database engine between SQL 2016 and SQL 2019?
With all factors, setup, and configuration fully identical, here is how each build behaved:
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor) --> 100% CPU
Microsoft SQL Server 2019 (RTM-CU16) (KB5011644) - 15.0.4223.1 (X64) Apr 11 2022 16:24:07 Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor) --> 100% CPU
Microsoft SQL Server 2019 (RTM-GDR) (KB4583458) - 15.0.2080.9 (X64) Nov 6 2020 16:50:01 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor) --> ~40% CPU
KanishkaB, as Ronen mentioned, there are always subtle differences in how queries get optimized from version to version. You asked if anything changed in the engine between 2016 and 2019: a significant number of changes happened between those versions, as you'd hope. In general we end up better off overall, but nothing stops you from hitting an edge case where that isn't true. That's why it's so important to test in detail before upgrading.
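One quick way to separate the optimizer changes from everything else is to run the same workload with the database forced back to SQL 2016 optimizer behaviour and compare CPU. This is only a sketch; "YourDb" is a placeholder name, not from your setup:

-- Check the current compatibility level.
SELECT name, compatibility_level FROM sys.databases WHERE name = N'YourDb';

-- Option 1: run the database under the SQL Server 2016 compatibility level.
ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 130;

-- Option 2: stay on level 150 but use the legacy cardinality estimator
-- (must be run in the context of that database).
USE [YourDb];
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;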
Some code is more susceptible to becoming an edge case than other code. And so when we find these cases, we try to modify the code to avoid what's triggering the issues.
If the temp tables and their indexes are created only once and then simply used, that's not the issue.
Temp tables trigger recompilations much more readily than other tables. Microsoft's best practices analyzer used to recommend replacing temp tables with table variables every time it saw one; that was very poor advice as a general rule. Table variables do avoid the recompiles, but only because they don't have the same level of statistics available.
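To make that comparison concrete, here is a minimal sketch of the substitution being discussed. The table and column names are invented for illustration, not taken from your code:

-- Temp table: gets column statistics, but changes to it can trigger
-- statement-level recompiles of the surrounding procedure.
CREATE TABLE #Work (Id int PRIMARY KEY, Amount money);
INSERT INTO #Work (Id, Amount)
SELECT Id, Amount FROM dbo.SourceTable;   -- dbo.SourceTable is hypothetical

-- Table variable: no column statistics, but far fewer recompile triggers.
DECLARE @Work table (Id int PRIMARY KEY, Amount money);
INSERT INTO @Work (Id, Amount)
SELECT Id, Amount FROM dbo.SourceTable;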
In your case though, with a small number of rows, I'd be interested to see whether using table variables has an overall positive effect on the code. My other questions would be around the remaining recompilation triggers mentioned in the whitepaper I linked.
In particular, are you always calling the code with well-defined parameter data types?
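To illustrate what well-defined parameter data types look like, here is a hedged example; the table, column, and parameter names are made up:

-- Parameterised call with an explicit, stable data type:
EXEC sys.sp_executesql
    N'SELECT * FROM dbo.SourceTable WHERE CustomerId = @CustomerId;',
    N'@CustomerId int',
    @CustomerId = 42;

-- By contrast, ad hoc calls where the literal or variable type drifts
-- between int, varchar, and nvarchar from call to call can produce
-- separate plans and extra compilation work.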
KanishkaB (Jun 01, 2022, Copper Contributor): I did run a test with table variables and was hoping to see some positive results, but that was not the case. The CPU still spiked, and one thing I saw in the wait types was WAIT_ON_SYNC_STATISTICS_REFRESH, which I had perhaps not noticed previously. We have Auto Update Statistics ON for the database and Asynchronous Statistics Update OFF.
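For reference, WAIT_ON_SYNC_STATISTICS_REFRESH means a query is waiting on a synchronous statistics update. A sketch for checking the setting and, if it's worth testing, letting the refresh happen in the background (the database name is a placeholder):

-- Check the current statistics settings for the database.
SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'YourDb';

-- Keep auto update statistics on, but make the refresh asynchronous.
ALTER DATABASE [YourDb] SET AUTO_UPDATE_STATISTICS_ASYNC ON;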
To answer your last question: yes, we are.
Regarding the whitepaper you linked, I am checking whether all my tests adhere to it, and I think they mostly do.
On 2019 RTM without a CU, the CPU stays around 40%, but on 2019 CU15 or CU16 it spikes to 100%, with everything else being the same.
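A rough way to compare compilation and recompilation volume across the builds is to read the SQL Statistics counters. These values are cumulative since instance startup, so sample them twice and diff over an interval to get a rate:

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%SQL Statistics%'
  AND counter_name IN (N'Batch Requests/sec', N'SQL Compilations/sec', N'SQL Re-Compilations/sec');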