Forum Discussion
Compatibility level change from 110 to 160 doubles CPU usage
We are running SQL Server 2022. When we change our database from compatibility level 110 to 160, CPU utilization eventually doubles. After switching to compatibility level 160 I update statistics with FULLSCAN on the whole database, but I am unable to figure out which queries jump to using more CPU than they did under compatibility level 110. Our frequently used queries are dynamic SQL, so I am having difficulty comparing their execution stats before and after.
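For context, the change itself is essentially the following (the database and table names below are just placeholders, and the statistics update is run for every table):

ALTER DATABASE [OurDb] SET COMPATIBILITY_LEVEL = 160;

-- then, per table across the whole database, e.g.:
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;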
I need some help.
Thank you.
4 Replies
You might be getting higher CPU usage, but perhaps only for a while, basically until all the calls have been recompiled.
I suggest you use Query Store and try to identify the queries with high CPU.
If Query Store is not already enabled, turning it on will help.
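Enabling it is roughly this (the database name is a placeholder):

ALTER DATABASE [OurDb] SET QUERY_STORE = ON;
ALTER DATABASE [OurDb] SET QUERY_STORE (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = ALL);
-- QUERY_CAPTURE_MODE = ALL makes sure the ad hoc / dynamic statements get captured too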
Additionally, you can try sp_whoisactive to detect the sessions taking most of the resources.
One more thing: check the 'optimize for ad hoc workloads' configuration option; that might help.
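Turning it on would be roughly:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
-- with this on, a single-use ad hoc plan only stores a small stub until the statement runs a second time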
Regards
Javier
- mssqluser (Copper Contributor)
No, even after hours or days, CPU stays at double the utilization we had before the change to compatibility level 160.
I have been trying to use Query Store, although it is new to me, but I haven't been able to pinpoint which queries slowed down after the compatibility change.
I use sp_whoisactive, but so far I have not been able to identify the culprits.
I just checked "optimize for ad hoc workloads" and it is set to 0. OK, this is something I'll look into; I'll read up on it again, probably turn it on, and then change the compatibility level back to 160 again.
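In case it helps anyone else, one way to see the current value is:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'optimize for ad hoc workloads';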
Thank you.
- alyzopair (Copper Contributor)
When you change the database compatibility level from 110 to 160 in SQL Server 2022, CPU usage can increase because the query optimizer behaves differently. It uses new rules for estimating data size, choosing join types, and enabling parallelism. These changes can cause execution plans to change, which may lead to higher CPU consumption.
The problem becomes harder to trace when your queries are dynamic, as it's difficult to compare performance before and after the change.
To solve this, you should use the built-in SQL Server feature called Query Store. It lets you see which queries are consuming more CPU after the change. You can compare average CPU usage, execution time, and see which execution plans changed.
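As a rough starting point (just a sketch, run in the affected database), you can rank queries by total CPU from the Query Store runtime stats and then look at the before/after picture for the top ones:

SELECT TOP (20)
    q.query_id,
    qt.query_sql_text,
    cpu.total_executions,
    cpu.total_cpu_time_us
FROM (
    SELECT p.query_id,
           SUM(rs.count_executions)                   AS total_executions,
           SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us
    FROM sys.query_store_runtime_stats AS rs
    JOIN sys.query_store_plan AS p
        ON p.plan_id = rs.plan_id
    GROUP BY p.query_id
) AS cpu
JOIN sys.query_store_query AS q
    ON q.query_id = cpu.query_id
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
ORDER BY cpu.total_cpu_time_us DESC;
-- to compare before vs. after, also join sys.query_store_runtime_stats_interval
-- and filter on start_time around the date of the compatibility change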
If certain queries are significantly worse, you might need to:
- Revert them to use the older behavior temporarily.
- Force them to use the older estimation model (see the sketch after this list).
- Analyze their execution plans to see what changed.
- Consider using stored procedures instead of dynamic SQL to stabilize performance.
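Just as a sketch of what the first two options could look like (the table, column, and id values are placeholders; the Query Store ids come from the views or reports mentioned above):

-- keep compatibility level 160 but make the whole database use the legacy cardinality estimator
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- or do it per statement, by appending a hint to the query text
SELECT o.OrderId
FROM dbo.Orders AS o
WHERE o.CustomerId = 42
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- and if Query Store shows a specific plan that was better before, you can pin it
EXEC sp_query_store_force_plan @query_id = 123, @plan_id = 456;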
- mssqluser (Copper Contributor)
The most important queries are dynamic, there are good reasons for that, and we cannot change them. They have hundreds of different execution plans, and I have not been able to capture their stats before and after the compatibility change, whether through Query Store or sys.dm_exec_query_stats. I have tried different approaches without much success. I am trying to see if someone has experienced similar performance problems and can share a strategy to capture the stats of all queries before and after.
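To be concrete, by capturing the stats I mean something along these lines, taken once before the change and once after, then compared (the snapshot table names are just placeholders):

SELECT GETDATE()            AS captured_at,
       qs.sql_handle,
       qs.query_hash,
       qs.execution_count,
       qs.total_worker_time,  -- CPU, in microseconds
       qs.total_elapsed_time,
       st.text                AS query_text
INTO dbo.QueryStatsSnapshot_110   -- repeat into e.g. dbo.QueryStatsSnapshot_160 after the change
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;
-- the catch with dynamic SQL is that these plan-cache entries come and go,
-- which is why the Query Store route is probably still the more reliable one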
I have enabled Query Store, but I have little knowledge of and experience with it. The time I have spent with Query Store so far hasn't been enough to figure out which queries have gotten worse. Maybe the answer is there; I will dig more and try to better understand how to use it.
We cannot use stored procedures instead of dynamic SQL.
The dynamic SQL is generated inside .NET code, so I cannot easily make SQL changes, but I will consider asking the developers to add the hint, whatever it was, I don't remember now, querytraceon(number), to force at least this heavily used dynamic SQL onto the older cardinality model. If the overall CPU then no longer doubles like it usually does, that would at least prove that the major performance hit is indeed caused by this dynamic SQL.
Thank you.