Query
SQL queries have become extremely slow compared to nearly identical queries on the same tables
Hi all. I've been drafting a SQL query in MS SSMS as a .SQL file and then running it from within SSMS, on the same system that is running the SQL Server 2019 Developer edition server. The file contains around 100 queries, virtually all UPDATE statements, which populate rows in SUMMARYTABLE with results taken from WORKTABLE. I've noticed that recently the last 10% or so of these queries have become exceptionally slow: around 500 times slower than comparable queries. Here is an example of one of the fast ones and one of the slow ones.

The first takes less than a second:

    UPDATE database1.dbo.SUMMARYTABLE
    SET database1.dbo.SUMMARYTABLE.SalesTierCount =
        (SELECT COUNT(*)
         FROM database1.dbo.WORKTABLE AS T1
         WHERE T1.Manager = database1.dbo.SUMMARYTABLE.Manager
           AND T1.SalesTier = 1
           AND T1.Zdate < database1.dbo.SUMMARYTABLE.Zdate)

The second takes about seven minutes:

    UPDATE database1.dbo.SUMMARYTABLE
    SET database1.dbo.SUMMARYTABLE.LifetimeCount =
        (SELECT COUNT(*)
         FROM database1.dbo.WORKTABLE AS T1
         WHERE T1.retailer = database1.dbo.SUMMARYTABLE.retailer
           AND T1.SalesTier = 1
           AND T1.Zdate < database1.dbo.SUMMARYTABLE.Zdate)

Does anyone have any suggestions for fixing this extreme slowdown? Two of the three WHERE/AND clauses are identical. The third clause differs, but both queries compare the same type of data in that clause: varchar(50) in both tables.

This is a link to the execution plan for both queries: Execution plan

The final part of both plans shows a non-clustered INDEX SCAN. The fast query has a "Number of rows read" of around 200,000; the slow one, around 436 million. The total time to run the entire .SQL file has blown out from a minute or so to nearly 4 hours! Any suggestions greatly appreciated.
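For reference only, here is a minimal sketch of the kind of supporting index the slow query's predicates could seek on, using the table and column names exactly as they appear in the queries above. The actual index definitions on WORKTABLE aren't shown in the post, so this is an assumption, not a confirmed fix:

    -- Hypothetical covering index for the correlated subquery in the slow UPDATE.
    -- Column names are taken from the query text above; whether an equivalent
    -- index already exists on WORKTABLE is not known from the post.
    CREATE NONCLUSTERED INDEX IX_WORKTABLE_retailer_SalesTier_Zdate
        ON database1.dbo.WORKTABLE (retailer, SalesTier, Zdate);

The gap in "Number of rows read" (roughly 200,000 versus 436 million) would be consistent with the fast query seeking on an index that leads with Manager while nothing comparable leads with retailer, but only the actual plans and index metadata can confirm that.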
Counting occurrences of Record ID number for use in a calculated column

In my SQL Server db, I have a field that contains a record ID that is repeated on each line of a transaction. Another field contains the total cost for the associated transaction, which is also repeated on each line. Within the Excel file from which the data is imported, I can use the following formula to create a new column, on which I can then use SUMIFS to get the total cost (SUMIFS on the original column yields total cost * the number of lines on which the amount is repeated). The Excel formula is:

    =(SUMIFS(Cost column, ID column, ID number)/COUNTIFS(ID column, ID number))*(1/COUNTIFS(ID column, ID number))

Here is a sample from Excel, where the calculated column uses =(SUMIFS(Q:Q,P:P,P7)/COUNTIFS(P:P,P7))*(1/COUNTIFS(P:P,P7)):

    ID Number   Cost       Calc Column   Trx Total   Proof
    12345       1,250.00   416.67        1250        -
    12345       1,250.00   416.67        1250        -
    12345       1,250.00   416.67        1250        -
    67891       1,422.00   284.40        1422        -
    67891       1,422.00   284.40        1422        -
    67891       1,422.00   284.40        1422        -
    67891       1,422.00   284.40        1422        -
    67891       1,422.00   284.40        1422        -

I would like to add a calculated column to the SQL table that accomplishes the same thing. Each row would then hold a fraction of the total that adds up to the correct amount when aggregated at the ID-number level. Is there a way to pass the number of times a field contains a particular value (the ID number) into the calculation?
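As a minimal sketch of the same logic in T-SQL (using hypothetical names dbo.TransactionLines, IDNumber and Cost, since the real table and column names aren't given in the post): a computed column can't aggregate across other rows, but a windowed COUNT in a query or view reproduces the Excel calculation:

    -- Hypothetical table and column names; Cost is assumed to be a decimal/money
    -- column holding the repeated transaction total, as in the sample above.
    -- Dividing the repeated total by the number of rows sharing the same IDNumber
    -- gives the per-row fraction (e.g. 1,250.00 / 3 = 416.67).
    SELECT
        IDNumber,
        Cost,
        Cost / COUNT(*) OVER (PARTITION BY IDNumber) AS CalcColumn
    FROM dbo.TransactionLines;

Summing CalcColumn at the IDNumber level then returns the original transaction total, apart from rounding, which matches the behaviour of the Excel calculated column; the same expression could also live in a view rather than a column on the table itself.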