Forum Discussion

iotsakp
Copper Contributor
Mar 15, 2024
Solved

Possible GENERATE_SERIES memory leak?

Hey all,

 

We've been experimenting with SQL Server 2022 features and stumbled upon what looks like a memory leak that occurs on every execution of the GENERATE_SERIES() function. Here's a script that demonstrates it:

 

SELECT * FROM sys.dm_os_memory_clerks WHERE [type] = 'MEMORYCLERK_SOSNODE';

DECLARE @i INT = 1;
WHILE @i <= 100000
BEGIN
    IF (SELECT COUNT(*) FROM GENERATE_SERIES(1, 1, 1)) != 1
        BREAK;
    SET @i += 1;
END;

-- ~9 MB leaked, ~96 bytes per iteration, on SOS_Node 0
SELECT * FROM sys.dm_os_memory_clerks WHERE [type] = 'MEMORYCLERK_SOSNODE';
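
To put a number on the per-iteration growth, here's a variation of the same loop that sums pages_kb for the clerk before and after and divides by the iteration count (the variable names and the 100000 iteration count are arbitrary, just for measurement):

DECLARE @before BIGINT, @after BIGINT, @iterations INT = 100000, @i INT = 1;

-- Snapshot the clerk's committed pages before the loop
SELECT @before = SUM(pages_kb)
FROM sys.dm_os_memory_clerks
WHERE [type] = 'MEMORYCLERK_SOSNODE';

WHILE @i <= @iterations
BEGIN
    IF (SELECT COUNT(*) FROM GENERATE_SERIES(1, 1, 1)) != 1
        BREAK;
    SET @i += 1;
END;

-- Snapshot again and compute bytes retained per iteration (~96 on affected builds)
SELECT @after = SUM(pages_kb)
FROM sys.dm_os_memory_clerks
WHERE [type] = 'MEMORYCLERK_SOSNODE';

SELECT @after - @before AS leaked_kb,
       (@after - @before) * 1024.0 / @iterations AS bytes_per_iteration;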

 

It can be reproduced on fresh SQL Server installations, in an empty database with default settings, on:

 

  • Windows
  • Linux (Ubuntu/Docker)
  • Azure SQL Managed Instance

It does NOT reproduce on Azure SQL Database.

 

Using Docker, we've tried every build from RTM up to the just-released CU12; the behavior is exactly the same. The memory never seems to be released, and with enough iterations SQL Server reaches a point where it cannot execute anything anymore. (When testing, lower the max server memory setting to get there faster; a sketch follows.)
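
For reference, lowering max server memory on a disposable test instance is the standard sp_configure route (the 2048 MB value here is arbitrary, just to hit the memory target quickly):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;  -- arbitrary low test value
RECONFIGURE;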

 

We also captured a trace (with https://gist.github.com/arvindshmicrosoft/37758f4dcc3dc233c98b254357c6c492) and resolved it with SQLCallStackResolver; this call stack appears thousands of times:

 

00 SqlDK!GenericEvent::PublishAndCallAction
01 SqlDK!XeSosPkg::page_allocated::Publish
02 SqlDK!MemoryClerkInternal::AllocatePagesWithFailureMode
03 SqlDK!MemoryClerkInternal::AllocatePages
04 SqlDK!CMemThread<CMemObj>::PbGetNewPages
05 SqlDK!TVarPageMgr<0>::PbAllocate
06 SqlDK!CMemObj::Alloc
07 SqlDK!CMemThread<CMemObj>::Alloc
08 SqlDK!operator new
09 sqllang!CTVFInfoGenSeries::PtiUserArgumentType
0a sqllang!CSTVFGenSeries::Init
0b sqlmin!CQScanTVFStreamNew::Open
0c sqlmin!CQScanNew::OpenHelper
0d sqlmin!CQScanStreamAggregateNew::Open
0e sqlmin!CQueryScan::UncacheQuery
0f sqllang!CXStmtQuery::SetupQueryScanAndExpression
10 sqllang!CXStmtQuery::InitForExecute
11 sqllang!CXStmtQuery::ErsqExecuteQuery
12 sqllang!CXStmtCondWithQuery::XretExecute
13 sqllang!CExecStmtLoopVars::ExecuteXStmtAndSetXretReturn
14 sqllang!CMsqlExecContext::ExecuteStmts<1,1>
15 sqllang!CMsqlExecContext::FExecute
16 sqllang!CSQLSource::Execute
17 sqllang!process_request
18 sqllang!process_commands_internal
19 sqllang!process_messages
1a SqlDK!SOS_Task::Param::Execute
1b SqlDK!SOS_Scheduler::RunTask
1c SqlDK!SOS_Scheduler::ProcessTasks
1d SqlDK!Worker::EntryPoint

 

Note the CSTVFGenSeries::Init call. There's no corresponding free operation logged.

 

Anyone else seeing this?


4 Replies

  • Hello iotsakp - thank you for reporting this issue and sharing the call stack. I can confirm that this issue has been addressed. The fix has been included in SQL Server 2025 and was included even earlier in Azure SQL Database. If I have any further information to share regarding the fix for SQL Server 2022, I will update this thread.

    • The_Mike
      Copper Contributor

      Hey Arvind, thank you for the reply. I have faced the very same issue on SQL Server 2022 (16.0.4151.1). It leaks ~400 GB of RAM a week and forces us to restart the server. Do you have any details on when we can expect the fix?

      The call stack is below:

      00 SqlDK!GenericEvent::PublishAndCallAction
      01 SqlDK!XeSosPkg::page_allocated::Publish
      02 SqlDK!MemoryClerkInternal::AllocatePagesWithFailureMode
      03 SqlDK!MemoryClerkInternal::AllocatePages
      04 SqlDK!CMemThread<CMemObj>::PbGetNewPages
      05 SqlDK!TVarPageMgr<0>::PbAllocate
      06 SqlDK!CMemObj::Alloc
      07 SqlDK!CMemThread<CMemObj>::Alloc
      08 SqlDK!operator new
      09 sqllang!CTVFInfoGenSeries::PtiUserArgumentType
      0a sqllang!CSTVFGenSeries::Init
      0b sqlmin!CQScanTVFStreamNew::Open
      0c sqlmin!CQScanNew::OpenHelper
      0d sqlmin!CQScanStreamAggregateNew::Open
      0e sqlmin!CQScanNLJoinNew::GetRow
      0f sqlmin!CQScanLightProfileNew::GetRow
      10 sqlmin!CQScanFilterNew::GetRowHelper
      11 sqlmin!CBpQScanAdapter::BpGetNextBatchInternal
      12 sqlmin!CBpQScan::GetNextBatch
      13 sqlmin!CBpQScanSort::Accumulate
      14 sqlmin!CBpQScanSort::BpOpen
      15 sqlmin!CQScanBatchHelper::Open
      16 sqlmin!CQScanNew::OpenHelper
      17 sqlmin!CQScanStreamAggregateNew::Open
      18 sqlmin!CQScanNew::OpenHelper
      19 sqlmin!CQScanNew::OpenHelper
      1a sqlmin!CQScanUpdateNew::Open
      1b sqlmin!CQueryScan::StartupQuery
      1c sqllang!CXStmtQuery::SetupQueryScanAndExpression

      Also, I would recommend using an XE session with pair matching to check whether you have the same problem (a sketch for starting the session and reading the target follows the definition):

      CREATE EVENT SESSION [Pairing_Mem_Leak_SOS] ON SERVER
      ADD EVENT sqlos.page_allocated(
          ACTION(package0.callstack)
          WHERE ([memory_clerk_name]='MEMORYCLERK_SOSNODE')),
      ADD EVENT sqlos.page_freed(
          ACTION(package0.callstack)
          WHERE ([memory_clerk_name]='MEMORYCLERK_SOSNODE'))
      ADD TARGET package0.event_counter,
      ADD TARGET package0.pair_matching(
          SET begin_event=N'sqlos.page_allocated',
              begin_matching_columns=N'memory_clerk_address,page_location',
              end_event=N'sqlos.page_freed',
              end_matching_columns=N'memory_clerk_address,page_location')
      WITH (MAX_MEMORY=32768 KB,
            EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
            MAX_DISPATCH_LATENCY=5 SECONDS,
            MAX_EVENT_SIZE=0 KB,
            MEMORY_PARTITION_MODE=PER_CPU,
            TRACK_CAUSALITY=OFF,
            STARTUP_STATE=OFF)
      GO
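
      The session is created with STARTUP_STATE=OFF, so start it and then pull the unmatched allocations out of the pair_matching target. A rough sketch (the session name matches the definition above; the target XML is returned unshredded):

      ALTER EVENT SESSION [Pairing_Mem_Leak_SOS] ON SERVER STATE = START;
      GO

      -- Allocations with no matching page_freed accumulate in the pair_matching target
      SELECT CAST(t.target_data AS XML) AS unmatched_allocations
      FROM sys.dm_xe_sessions AS s
      JOIN sys.dm_xe_session_targets AS t
          ON t.event_session_address = s.address
      WHERE s.name = N'Pairing_Mem_Leak_SOS'
        AND t.target_name = N'pair_matching';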

       

      • Arvind_Shyamsundar
        Microsoft

        Also The_Mike and iotsakp - I can confirm the fix for this issue has been delivered in SQL Server 2022 CU20, build 16.0.4205.1 (https://learn.microsoft.com/en-us/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate20).
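
        To verify a patched instance is on the fixed build, a quick illustrative check:

        SELECT SERVERPROPERTY('ProductVersion') AS product_version,      -- expect 16.0.4205.1 or later
               SERVERPROPERTY('ProductUpdateLevel') AS update_level;     -- expect CU20 or later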
