Oftentimes when people include the actual execution plan (Ctrl-M, see for a good primer on execution plans) while executing a batch in SQL Server Management Studio, they see this "Query cost (relative to the batch)" figure on top of each query in the batch, and they start to ask: What does this mean? When I run this batch, the first query runs faster than the second, and yet SQL Server says the second has a higher cost. What is it talking about?
Before I attempt an explanation, let me quickly outline an example to illustrate this. First, let's create a table with some rows to experiment with:
If Object_ID('BigT', 'U') Is Not Null
Drop Table BigT
Create Table BigT(pk Int Identity Primary Key Clustered, string Varchar(1000), number Int);
-- Insert 10,000 rows
With Digits(D) As
(
    Select 0 Union All Select 1 Union All Select 2 Union All Select 3 Union All Select 4 Union All
    Select 5 Union All Select 6 Union All Select 7 Union All Select 8 Union All Select 9
)
Insert Into BigT (string, number)
Select 'Some text', A.D * 1000 + B.D * 100 + C.D * 10 + D.D
From Digits A Cross Join Digits B Cross Join Digits C Cross Join Digits D
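As an optional sanity check (not part of the original walkthrough), you can confirm the table was populated as expected:

```sql
-- Should report 10000 rows, with number covering 0 through 9999
Select Count(*) As RowCnt, Min(number) As MinNum, Max(number) As MaxNum
From BigT
```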
Now, with Include Actual Execution Plan on, we can run the following two-query batch:
Select string, Max(number) From BigT Group By string Order By Max(number)
Select A.string, B.string From BigT A Join BigT B on A.number = B.number * 2
On my machine, I get the attached plan. The main thing to notice here is that the first query has a query cost of 4% (relative to the batch), and the second one has 96%. If you want the numerical basis for these percentages and you're running this experiment yourself (you'll probably get different numbers depending on your machine), just hover over the root SELECT operator in each plan. I get an Estimated Subtree Cost of 0.135437 for the first SELECT and 3.154 for the second, which are in a ratio of roughly 4:96. Notice the lack of units on these costs: they're not seconds, megabytes, or anything else.
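To see how the percentages fall out of those two subtree costs, the arithmetic is just a normalization (the cost values below are from my run; yours will likely differ):

```sql
-- Subtree costs taken from hovering over the root SELECT operators on my machine
Declare @cost1 Float = 0.135437, @cost2 Float = 3.154;
Select 100 * @cost1 / (@cost1 + @cost2) As FirstQueryPct,   -- roughly 4.1
       100 * @cost2 / (@cost1 + @cost2) As SecondQueryPct;  -- roughly 95.9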
These costs exist because of the way SQL Server optimizes queries: it performs cost-based optimization, which means the optimizer formulates many different ways to execute the query, assigns a cost to each of these alternatives, and chooses the one with the least cost. The cost tagged on each alternative is heuristically calculated, and is supposed to roughly reflect the amount of processing and I/O that alternative will take. For example, based on the statistical information we automatically collect on the number column, we estimated that 5000.5 rows would come out of the join in the second query (a really good guess in this case: the join actually produces 5000 rows), and based on this we assigned a cost of 1.8 to the Merge Join operator on my machine. This number, again, is unit-less and is meant only for comparison against other alternatives.
So when you get the execution plan in the end, we display these relative costs of queries in a batch, and of different operators within the same query, for informational purposes. They are not meant as accurate representations of query run-times. The actual run-time of a query depends on many things: whether the buffers are warm (i.e. the contents required by the query are cached by the server), what other queries are running on the server, locking, etc. Also, cost estimation is a heuristic process based on statistical sampling and various best-effort guesses, and it can easily go wrong, especially for large queries.
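If hovering over operators feels tedious, the same subtree costs can be dug out of the cached showplan XML. This is only a sketch: the `StatementSubTreeCost` attribute and the DMVs below are standard, but you may need to adjust the text filter to match your batch:

```sql
-- Sketch: extract StatementSubTreeCost from cached plans that mention BigT
With XmlNamespaces (Default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
Select st.text As QueryText,
       n.value('@StatementSubTreeCost', 'float') As SubTreeCost
From sys.dm_exec_cached_plans cp
Cross Apply sys.dm_exec_query_plan(cp.plan_handle) qp
Cross Apply sys.dm_exec_sql_text(cp.plan_handle) st
Cross Apply qp.query_plan.nodes('//StmtSimple') As T(n)
Where st.text Like '%BigT%'
```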
For example, setting Statistics Time on:
Set Statistics Time On
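For completeness, the full timed batch looks like this; the CPU and elapsed times appear on the Messages tab:

```sql
Set Statistics Time On
Select string, Max(number) From BigT Group By string Order By Max(number)
Select A.string, B.string From BigT A Join BigT B On A.number = B.number * 2
Set Statistics Time Off
```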
And running these queries again, I get a run time of 15 ms for the first query and 70 ms for the second. The second did take longer, but not in a 96:4 ratio. Then again, these are really short times and hardly precise: a second run yielded 16 ms and 111 ms.