Forum Discussion
Compatibility change from 110 to 160 doubles the CPU
When you raise the database compatibility level from 110 to 160 in SQL Server 2022, CPU usage can increase because the query optimizer behaves differently. Compatibility level 120 and above use a newer cardinality estimation (CE) model, and later levels enable additional Intelligent Query Processing features, which change how the optimizer estimates row counts, chooses join types, and applies parallelism. These changes can produce different execution plans, some of which consume more CPU.
The problem becomes harder to trace when your queries are dynamic, because each textual variation can get its own plan, which makes it difficult to compare performance before and after the change.
To diagnose this, use Query Store, the built-in SQL Server feature that persists query performance history. It lets you see which queries are consuming more CPU after the change: you can compare average CPU usage and execution time, and see which execution plans changed.
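For example, with Query Store enabled, a query along these lines surfaces the top CPU consumers since a given date. The column and view names follow the documented Query Store catalog views; the date filter is an example you would replace with your own change date:

```sql
-- Top 20 queries by total CPU since a given date (Query Store must be enabled).
-- avg_cpu_time is reported in microseconds.
SELECT TOP (20)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.count_executions)                   AS executions,
       SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_us,
       AVG(rs.avg_cpu_time)                       AS avg_cpu_us
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
  ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
  ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE i.start_time >= '2025-04-01'  -- example: date of the compatibility change
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_us DESC;
```

SSMS also exposes the same data graphically under Query Store > Top Resource Consuming Queries, which may be faster to start with.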
If certain queries are significantly worse, you might need to:
- Revert the database to the lower compatibility level temporarily.
- Force them to use the legacy cardinality estimation model.
- Compare their old and new execution plans to see what changed.
- Consider using stored procedures instead of dynamic SQL to stabilize plans.
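To force the older estimation model without giving up compatibility level 160, SQL Server provides both a database-wide switch and a per-query hint; both are documented features. The SELECT below is only an illustrative query to show where the hint goes:

```sql
-- Database-wide: keep compatibility level 160 but use the legacy CE everywhere.
ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Per-query: append a hint so only this statement uses the legacy CE.
SELECT o.name, COUNT(*) AS column_count
FROM sys.objects AS o
JOIN sys.columns AS c ON c.object_id = o.object_id
GROUP BY o.name
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

The database-wide switch is the quickest way to test whether the new CE is the culprit at all; the per-query hint lets you keep the new model for queries that benefit from it.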
mssqluser (Copper Contributor) · Apr 21, 2025
The most important queries are dynamic, for good reasons, and we cannot change them. They have hundreds of different execution plans, and I have not been able to compare their stats before and after the compatibility change, whether through Query Store or sys.dm_exec_query_stats. I have tried different approaches without much success. I am trying to find out whether someone has experienced similar performance problems and worked out a strategy to capture the stats of all queries before and after.
I have enabled Query Store, but I have little knowledge and experience with it. The time I have spent with it so far hasn't been enough to figure out which queries have regressed. Maybe the answer is there; I will dig deeper and try to better understand how to use Query Store.
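One way to make Query Store answer the "which queries got worse" question directly is to aggregate its runtime stats into a before window and an after window around the change date and join the two. A sketch, assuming the change happened on a known date (the date below is an example):

```sql
-- Compare average CPU per query before vs. after the compatibility change.
DECLARE @change_date datetime2 = '2025-04-01';  -- example; set to your change date

WITH stats_by_period AS (
    SELECT q.query_id,
           CASE WHEN i.start_time < @change_date THEN 'before' ELSE 'after' END AS period,
           -- execution-weighted average CPU (microseconds)
           SUM(rs.avg_cpu_time * rs.count_executions)
               / NULLIF(SUM(rs.count_executions), 0) AS avg_cpu_us
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p
      ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs
      ON rs.plan_id = p.plan_id
    JOIN sys.query_store_runtime_stats_interval AS i
      ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
    GROUP BY q.query_id,
             CASE WHEN i.start_time < @change_date THEN 'before' ELSE 'after' END
)
SELECT b.query_id,
       b.avg_cpu_us AS avg_cpu_before,
       a.avg_cpu_us AS avg_cpu_after,
       a.avg_cpu_us / NULLIF(b.avg_cpu_us, 0) AS cpu_ratio
FROM stats_by_period AS b
JOIN stats_by_period AS a
  ON a.query_id = b.query_id
 AND b.period = 'before'
 AND a.period = 'after'
ORDER BY cpu_ratio DESC;
```

One caveat for dynamic SQL: statements that differ only in literals can get distinct query_ids, so grouping by q.query_hash instead of q.query_id may collapse the hundreds of variants into comparable buckets.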
We cannot use stored procedures instead of dynamic SQL.
The dynamic SQL is generated inside .NET code, so I cannot easily make SQL changes, but I will consider asking the developers to add the hint, whatever it was, I don't remember now, QUERYTRACEON(number), to force at least this heavily used dynamic SQL to use the older cardinality model. If the overall CPU usage then no longer doubles the way it usually does, that would at least prove that the major performance hit is indeed caused by this dynamic SQL.
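For reference, the hint being recalled here is presumably QUERYTRACEON; trace flag 9481 is the documented flag that forces the legacy (pre-2014) cardinality estimator for a single statement. The table names below are made up purely to illustrate where the .NET layer would append the hint:

```sql
-- Illustrative generated statement (dbo.Orders / dbo.Customers are hypothetical);
-- trace flag 9481 forces the legacy cardinality estimation model for this query only.
SELECT c.CustomerId, SUM(o.Total) AS order_total
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerId
OPTION (QUERYTRACEON 9481);
```

Note that OPTION (QUERYTRACEON ...) generally requires elevated (sysadmin) permissions, so for application logins the documented equivalent OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION')) is usually the more practical form to have the developers append.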
Thank you.