SQL 2019 Std Edition high recompilation and CPU spike


Hello,

I am having an issue on SQL 2019 Standard edition where I am seeing high recompilation across multiple stored procedures. The reason for the recompilation, as per the Extended Events trace, is 'temp table changed'. However, when the same code is run on SQL 2016 Standard edition, the system behaves normally. The same behavior shows up on the SQL 2019 Developer edition as well.
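For anyone wanting to reproduce the trace, a minimal sketch of the kind of Extended Events session that surfaces this (sqlserver.sql_statement_recompile is the standard event; the session and file names here are placeholders):

CREATE EVENT SESSION TrackRecompiles ON SERVER
ADD EVENT sqlserver.sql_statement_recompile (
    ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'TrackRecompiles.xel');
GO
ALTER EVENT SESSION TrackRecompiles ON SERVER STATE = START;
-- The event's recompile_cause column is what reports 'temp table changed'.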


The database and the tests being run are identical across the two servers. The CPU and memory are identical for both servers as well, and indexes are rebuilt before running the tests.

Below is some of the relevant information:

SQL editions:
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor)
Microsoft SQL Server 2016 (SP3) (KB5003279) - 13.0.6300.2 (X64) Aug 7 2021 01:20:37 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor)

Common settings for both servers:
Min Server Memory 51200 MB

Max Server Memory 112860 MB

Processor Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz, 2500 Mhz, 8 Core(s), 16 Logical Processor(s)

RAM 128 GB

MAXDOP 1

Cost Threshold for Parallelism (CTP) 5
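For reference, these values can be confirmed on both servers with sp_configure (a standard system procedure); a minimal check, assuming advanced options are visible:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- expect 1 on both servers
EXEC sp_configure 'cost threshold for parallelism'; -- expect 5 on both servers
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';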
If any further info is needed, I will be glad to provide it.
Can anyone share any guidance on how to investigate this or get to the root cause? Or has anyone come across a similar issue with SQL 2019?

6 Replies
I know it's the obvious question, but does your code modify the temp table (e.g., creating/dropping indexes on relevant columns)? Most structural changes will likely trigger recompiles. How many rows are typically in the temp tables?
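To illustrate, a sketch of the pattern that commonly triggers this (the table and index names are made up):

-- DDL against a temp table after statements referencing it have been
-- compiled forces those statements to recompile:
CREATE TABLE #Work (Id int NOT NULL, Val varchar(50));
INSERT INTO #Work (Id, Val) VALUES (1, 'a');
CREATE INDEX IX_Work_Id ON #Work (Id);  -- schema change => 'temp table changed' recompile

-- Declaring the index inline (supported since SQL Server 2014) avoids the
-- separate DDL statement:
CREATE TABLE #Work2 (Id int NOT NULL, Val varchar(50), INDEX IX_Work2_Id (Id));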
Hi @KanishkaB
Let's start with the fact that this is not such a rare case. In most cases new versions bring better performance, but every new version includes many changes that, in rare specific cases, can lead to worse performance compared to the previous version.

This is related to new features, improvements, new internal algorithms, and more.

In most cases, we can see the source of the issue in the execution plan, but with the information you (did not) provide, we have nothing to discuss.

If you want to discuss the issue and find the best practice, then you should start by providing the minimum information to reproduce the scenario. Please provide:
1) Queries to CREATE your table(s), including indexes.
2) Queries to INSERT sample data.
3) The desired result given the sample data, as text or an image (of Excel, for example).
4) A short description of the business rules, and how you arrived at 1-2 of the results.
5) Which version of SQL Server you are using (this will help fit the query to your version).
6) The full Execution Plan XML (not a screenshot but the full XML) for both databases; one way to capture it is sketched below this list.
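For item 6, a sketch of one way to pull the full cached plan XML (sys.dm_exec_procedure_stats and sys.dm_exec_query_plan are standard DMVs; the procedure name is a placeholder):

SELECT qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = N'YourProcName';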

With this information we will be able to start the discussion.
I agree with you. The temp tables do have indexes created at the time the tables are created, and the row counts are mostly within 50. However, the question is why a similar test conducted on SQL 2016 does not show the CPU spike/recompilations. Please note that the test pattern, the configuration, and the database are all identical in the SQL 2019 and SQL 2016 environments. So I am trying to understand what the key factor in SQL 2019 is that may be driving the spike and the recompilation.
Do you think there could be anything I am still missing? Did anything change between the SQL 2016 and SQL 2019 database engines?
With all factors/setup/configurations fully identical, here is how each build behaved:
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor) -->> 100% CPU
Microsoft SQL Server 2019 (RTM-CU16) (KB5011644) - 15.0.4223.1 (X64) Apr 11 2022 16:24:07 Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor) -->> 100% CPU
Microsoft SQL Server 2019 (RTM-GDR) (KB4583458) - 15.0.2080.9 (X64) Nov 6 2020 16:50:01 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor) -->> ~40% CPU
Hello Ronen,
While I agree that you need the info for better insights, what I am still not able to figure out is why the same test, with identical configuration, identical DB settings, and the identical database, behaves differently between SQL 2016 and SQL 2019. Please note that whatever is being done on SQL 2016, the same approach is being followed on SQL 2019. So the question remains: why is SQL 2019 behaving this oddly? Let me know if I am missing anything here.

@KanishkaB As Ronen mentioned, there are always subtle differences between how queries get optimized from version to version. You asked if anything changed in the engine between 2016 and 2019. A significant number of changes happened between those versions, as you'd hope. In general, we always end up better off overall, but there's nothing to stop you from falling into an edge case where that is not the case. That's why it's so important to test in detail before upgrading.


Some code is more susceptible to becoming an edge case than other code. And so when we find these cases, we try to modify the code to avoid what's triggering the issues.


If the temp tables and their indexes are created only once and then just used, that's not the issue.


Temp tables trigger recompilations much more readily than other tables. Microsoft's best practices analyzer used to suggest replacing temp tables with table variables every time it saw one. That was very poor advice as a general rule. Table variables avoid the recompiles because they don't have the same level of statistics available.
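To make the distinction concrete, a minimal sketch (the names are made up):

-- Temp table: column statistics are created and maintained, so data changes
-- can trigger statistics updates and recompiles:
CREATE TABLE #Orders (OrderId int PRIMARY KEY, CustomerId int NOT NULL);

-- Table variable: no column statistics, so no statistics-driven recompiles,
-- but also much less information for the optimizer to cost the plan with:
DECLARE @Orders table (OrderId int PRIMARY KEY, CustomerId int NOT NULL);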


In your case though, with a small number of rows, I'd be interested to see if using table variables would have an overall positive effect on the code. Other questions I'd have are around the other recompilation triggers that are mentioned in the whitepaper that I linked.


In particular, are you always calling the code with well-defined parameter data types?
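For example, a sketch of what I mean by well-defined types (dbo.Orders and the parameter are made up):

-- Declaring the parameter with the column's exact type avoids implicit
-- conversions and extra compilation work:
EXEC sys.sp_executesql
    N'SELECT OrderId FROM dbo.Orders WHERE CustomerId = @cid;',
    N'@cid int',
    @cid = 42;
-- Running the same statement with N'@cid varchar(10)' instead creates a
-- separate cache entry and can force implicit conversions on every execution.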

I did run a test with table variables and was hopeful of seeing some positive results; however, that was not the case. The CPU still spiked, and one thing I saw in the wait types was WAIT_ON_SYNC_STATISTICS_REFRESH, which I had perhaps not seen previously. We have Auto Update Statistics ON for the DB and Asynchronous Statistics Update OFF.
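For what it's worth, those settings can be checked and, if needed, switched; a sketch using the standard sys.databases columns and ALTER DATABASE option (the database name is a placeholder):

SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'MyDb';

-- If synchronous refreshes (WAIT_ON_SYNC_STATISTICS_REFRESH) are stalling the
-- workload, asynchronous updates can be trialled:
ALTER DATABASE [MyDb] SET AUTO_UPDATE_STATISTICS_ASYNC ON;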
To answer your last question: yes, we are.
Regarding the white paper link that you sent, I am checking whether all my tests adhere to it; mostly, I think they do.
On the 2019 RTM build without a CU, the CPU remains around 40%, but on 2019 CU15 or CU16 it spikes to 100%, with everything else being the same.