How It Works: Temporary Objects - User Transaction Scope
Published Apr 20 2022 08:15 AM

Temporary tables (create table #tmp, select into #tmp, declare @t table, internal worktables from sorts, internal spills from hashes, table-valued parameters, cursors, etc.) are commonplace in SQL Server applications. There are many discussions pertaining to temporary table caching, recompiles, and similar aspects. This post discusses physical metadata storage as it relates to performance.


There are many ways to create and use temporary tables, here is one example:


declare @t table (id int NOT NULL PRIMARY KEY CLUSTERED, name sysname)

insert into @t select object_id, name from sys.objects
select * from @t


The example uses a table variable. The behavior is similar, and often identical, for a traditional temporary table such as #t, a table declared from a table type, a worktable, a temporary index, and so on. The key point in this discussion is the transaction scope of the declaration.

When a temporary table is declared in a batch (dynamic SQL), the table is bound to the user transaction lifetime, whereas when the table is part of a T-SQL procedure or function, it is bound to a system transaction and can be cached and reused.


When the temporary table is bound to the user transaction, it is both created and destroyed as part of that transaction. The same logic in a procedure attempts to avoid the per-execution creation and destruction by using temporary table caching.
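One rough way to observe the difference is the cumulative "Temp Tables Creation Rate" performance counter. As a sketch: repeated executions of a module whose temporary table is cached should barely move this counter, while repeated ad hoc batches increment it on each execution.

```sql
-- Cumulative count of temporary tables/table variables created.
-- Sample it before and after a workload run; a steadily climbing value
-- under a repeated, identical workload suggests caching is not occurring.
select cntr_value
from sys.dm_os_performance_counters
where counter_name like 'Temp Tables Creation Rate%';
```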


In the issue I was debugging, the user transaction scope mattered because creation and destruction of metadata can be an expensive operation. The reported behavior was heavy latch contention in tempdb, usually on a single page. Debugging the problem revealed that the contended page in tempdb belonged to the internal (sysobjvalues) table, which tracks the metadata (table, columns, indexes, temporary statistics, etc.).
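On SQL Server 2019 and later, a diagnostic sketch along these lines can map PAGELATCH waits back to the contended page and its owning object, using the documented sys.fn_PageResCracker and sys.dm_db_page_info helpers (verify column availability against your build):

```sql
-- Resolve PAGELATCH wait resources to database/file/page, then to the
-- owning object (e.g., the internal sysobjvalues table in tempdb).
select wt.session_id,
       wt.wait_type,
       page_info.database_id,
       page_info.[file_id],
       page_info.page_id,
       object_name(page_info.[object_id], page_info.database_id) as [object_name]
from sys.dm_exec_requests as r
join sys.dm_os_waiting_tasks as wt
     on r.session_id = wt.session_id
cross apply sys.fn_PageResCracker(r.page_resource) as prc
cross apply sys.dm_db_page_info(prc.[db_id], prc.[file_id],
                                prc.page_id, 'DETAILED') as page_info
where wt.wait_type like 'PAGELATCH%';
```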




Here is a simplified reproduction of the contention:


ostress -E -S. -Q"sp_executesql N'declare @t table (c1 varchar(100)); select * from @t';declare @t table (c1 varchar(100)); select * from @t" -dtest -n256 -r9999999 -q


It turns out the application declared and used several table variables, each with many columns, as part of the same batch request. Furthermore, the workload could have 800+ concurrent sessions executing the common batch request, resulting in contention while creating and destroying the tempdb metadata. In fact, the workload also included indexes on the table variables, adding to the metadata overhead.


The heavy reliance on dynamic, temporary objects resulted in tempdb metadata contention as each of the 800+ sessions issued create and drop operations, traversing the same sysobjvalues information and serializing on the latch.


To reduce the tempdb latch contention and improve performance, the application changed the batch to a stored procedure, allowing SQL Server to use temporary table caching when possible and eliminating the need to perform create and drop activities for each execution.
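A minimal sketch of such a rewrite, using the earlier example (the procedure name is hypothetical): because the table variable is now declared inside a module, SQL Server can cache its metadata and reuse it across executions rather than creating and dropping it per batch.

```sql
-- Wrapping the batch logic in a procedure makes the table variable
-- eligible for temporary table caching.
create or alter procedure dbo.usp_ListObjects
as
begin
    declare @t table (id int NOT NULL PRIMARY KEY CLUSTERED, name sysname);

    insert into @t select object_id, name from sys.objects;
    select * from @t;
end;
go

exec dbo.usp_ListObjects;
```

Note that caching has documented conditions; for example, DDL that affects a #temp table after creation, or named constraints on it, can prevent reuse.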

Temporary objects can be helpful and a necessary part of query logic.  However, be aware that constant creation and destruction activities may be expensive, may limit scalability, and may impact performance.


Review your use of temporary objects, as their use can often be eliminated or reduced (a cursor turned into a set-based select statement, sort and hash spills to tempdb avoided with proper indexing, etc.).
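As an illustration of the cursor suggestion, a hypothetical per-row loop that counts objects per schema can be replaced by a single set-based statement, eliminating the cursor's tempdb worktable entirely:

```sql
-- Instead of a cursor fetching sys.objects row by row and accumulating
-- counts per schema, one grouped select produces the same result with
-- no cursor worktable in tempdb.
select s.name as schema_name,
       count(*) as object_count
from sys.objects as o
join sys.schemas as s
     on s.schema_id = o.schema_id
group by s.name;
```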
